AI algorithms drive what people see as truth
[A boy addicted to social media. Photo credit: Pixahive]
In April 2018, a major MIT study published in Science found that false news on Twitter spread significantly farther and faster than factual information, with true stories taking roughly six times as long as false ones to reach 1,500 people.
In today’s digital environment, where algorithms dominate human attention, artificial intelligence has silently reshaped daily life and not always for the better.
Social media platforms, originally built to let people share moments of happiness without meeting face to face, have transformed into echo chambers.
These online spaces often recycle misinformation, magnify hostile comments, and create conditions where users become trapped in endless doom-scrolling.
Even more troubling is that humans, not automated bots, are the primary drivers of misinformation.
Lies tend to be more sensational, emotional, and surprising, qualities that engagement-driven algorithms end up prioritizing.
By contrast, the truth, often mundane or complex, struggles to compete in the attention economy.
This dynamic creates an information system optimized for engagement rather than accuracy.
Artificial intelligence does not comprehend truth; it recognizes clicks, shares, and viewing time.
When a shocking falsehood captures attention longer than a factual statement, algorithms reward and amplify it.
The consequence is that falsehoods spread faster, reach wider audiences, and embed themselves deeper in public discourse.
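To make that mechanism concrete, here is a minimal, hypothetical sketch of an engagement-only feed ranker. It is not any platform's actual code; the Post fields, weights, and engagement_score function are all invented for illustration. The point is structural: accuracy never enters the formula, so a sensational falsehood can out-rank a sober correction.

```python
from dataclasses import dataclass

@dataclass
class Post:
    clicks: int
    shares: int
    watch_seconds: float
    is_accurate: bool  # known to us for the demo, invisible to the ranker

def engagement_score(post: Post) -> float:
    """Hypothetical ranking score built only from attention signals.

    Note that is_accurate never appears below: the ranker literally
    cannot distinguish truth from falsehood.
    """
    return 1.0 * post.clicks + 3.0 * post.shares + 0.5 * post.watch_seconds

# A shocking rumor draws more attention than the factual correction.
rumor = Post(clicks=900, shares=400, watch_seconds=70.0, is_accurate=False)
correction = Post(clicks=200, shares=30, watch_seconds=25.0, is_accurate=True)

feed = sorted([rumor, correction], key=engagement_score, reverse=True)
assert feed[0] is rumor  # the falsehood is amplified first
```

Even in this toy version, the outcome the article describes falls out automatically: optimize for attention, and whatever captures attention wins, true or not.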
Psychologists warn that this cycle contributes to rising mental health challenges.
AI-driven feeds present continuous streams of alarming content such as bad news, viral outrage, and political disputes.
Research indicates that frequent doom-scrolling can lead to chronic stress, depression, and emotional exhaustion, particularly among young people, who are most vulnerable to algorithmic reinforcement.
Despite widespread awareness of these dynamics, misinformation continues to circulate.
From vaccine conspiracy theories to election denial, algorithm-curated content often strengthens the most divisive voices.
The platforms do not promote what is accurate; they promote what maximizes user engagement.
A University of Southern California study revealed that frequent social media users share false stories up to six times more often than less active users, frequently without reading past the headline.
Many users share posts for entertainment, popularity, or influence, rather than out of genuine belief in the content.
The impact is not limited to the United States.
Distrust in institutions, tribalism, and doubt about reality itself are all on the rise worldwide.
Yet, experts emphasize that these problems are not irreversible.
Proposed solutions include introducing friction into the sharing process.
Platforms could implement content warnings, prompts requiring users to read before sharing, and slower distribution of unverified material.
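As an illustration of what such friction might look like in practice, the hypothetical sketch below delays resharing until a user has actually opened a link, roughly in the spirit of the "read before you share" prompts some platforms have tested. The ShareGate class, its method names, and the ten-second threshold are all assumptions made up for this example, not a real platform API.

```python
import time

class ShareGate:
    """Hypothetical friction layer: blocks instant resharing of links
    the user has not opened, and labels unverified material."""

    def __init__(self, min_read_seconds: float = 10.0):
        self.min_read_seconds = min_read_seconds
        self.opened_at: dict[str, float] = {}  # url -> time it was opened

    def record_open(self, url: str) -> None:
        """Call when the user actually opens the article."""
        self.opened_at[url] = time.monotonic()

    def may_share(self, url: str, verified: bool) -> tuple[bool, str]:
        """Decide whether a share goes through, and with what message."""
        opened = self.opened_at.get(url)
        if opened is None:
            return False, "Prompt: open and read the article before sharing."
        if time.monotonic() - opened < self.min_read_seconds:
            return False, "Prompt: you opened this only moments ago."
        if not verified:
            return True, "Shared with a warning label; distribution slowed."
        return True, "Shared."

# Example: an instant reshare of an unread link is intercepted.
gate = ShareGate()
ok, msg = gate.may_share("https://example.com/story", verified=False)
# ok is False; msg asks the user to read the article first.
```

The design choice here mirrors the article's argument: none of these checks decide what is true; they simply slow the reflexive sharing that lets falsehoods outrun corrections.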
Some companies are experimenting with AI that detects and reduces the visibility of false information rather than boosting it.
TikTok, for example, recently began labeling AI-generated videos to limit deception.
Education is equally critical.
Younger generations must be taught how to recognize manipulated media, verify sources, and notice when content is engineered to provoke an emotional reaction.
Media literacy programs in schools and workplaces could help individuals navigate AI-shaped content more responsibly.
Ultimately, artificial intelligence reflects the values and goals of those who design it.
If society prioritizes long-term trust over short-term engagement, the same technology currently causing harm could instead promote truth, empathy, and public understanding.
Social media technology was originally created to connect and inform.
Experts argue it can still be redirected toward those purposes.

- Aiden Park / Grade 10 Session 9
- The Stony Brook School