
New analysis on AI usage in therapy for teens highlights promises and perils

2025.06.22 19:56:59 Jiwoo Bang

[Photo Credit to Pexels]

A new meta-analysis published in the Journal of Medical Internet Research on May 14 has provided valuable insight into the digital mental health landscape: artificial intelligence-driven conversational agents have been shown to be effective in alleviating mental health issues among adolescents and youth.

The review, led by Zhihong Qiao, analyzed 14 articles involving 1,974 participants aged 12 to 25 years, and found a “moderate-to-large” effect size (Hedges g=0.61, 95% CI 0.35-0.86) of AI tools on depressive symptoms.
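For readers curious about what an effect size of g=0.61 means: Hedges' g is the standardized difference between two group means (here, depressive symptoms with and without the AI intervention), divided by their pooled standard deviation and adjusted with a small-sample correction. The sketch below uses illustrative, made-up numbers, not the study's data:

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp  # Cohen's d
    # Correction factor J shrinks d slightly for small samples
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Hypothetical groups of 100, differing by about 0.6 standard deviations
print(round(hedges_g(10.0, 5.0, 100, 7.0, 5.0, 100), 2))  # → 0.6
```

By convention, values around 0.5 are considered “moderate” and values around 0.8 “large,” which is why the study describes g=0.61 as moderate-to-large.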

The findings underscore the encouraging prospects of AI development and its integration into ever-wider aspects of life, from educational institutions to, now, individual mental health.

Researchers, however, urge cautious interpretation of the results, stressing future vigilance: while the technology may be helpful, especially as a first step, it is not to be seen as a replacement for human care, and must be accompanied by ethical safeguards and social responsibility. 

The results arrive at a crucial time in response to a growing real-world phenomenon: an increasing number of adolescents and youth are turning to artificial intelligence for emotional support, engaging with the technology in therapy-like conversations where they vent, seek advice, and explore coping techniques for daily stress as well as clinical mental health issues. 

AI models such as ChatGPT have quietly established themselves as emotional outlets for a generation grappling with a mental health crisis characterized by emotional distress and limited access to care.

Platforms such as Character.ai take this a step further, offering bot models that are explicitly designed to act like therapists—complete with soft-spoken messages and trained empathetic reactions. 

Bots like ‘Are you feeling OK?’ “listen,” give advice, and mirror the role of traditional therapists, and have drawn over 16.5 million messages.

The trend illustrates how rapidly AI is becoming integrated into all corners of human life, even the emotional and personal realm, and presents a complex landscape filled with both promise and potential danger.

A prime appeal of AI therapy is its accessibility. 

Traditional therapy entails appointments, searching, and waiting—processes that may deter teens from seeking help, especially when their distress feels immediate and overwhelming.

AI minimizes such barriers, offering 24/7 accessibility, quite literally in the palm of one’s hand.

Moreover, the direct cost of mental health support, a financial and mental burden for teens and families alike, is dramatically reduced to virtually zero, drawing a large influx of users.

Perhaps most significantly, AI offers the comfort of anonymity; teens can speak honestly without fear of being judged or reported, free from the anxiety of a face at the other end of the conversation. 

For those facing societal stigma—whether cultural, familial, or internalized—AI offers a private yet emotionally intimate experience. 

For a generation native to technology but often socially isolated, turning to digital screens for emotional navigation may feel like second nature.

However, the rapid adoption of such tools presents pressing challenges that cannot be ignored, as they operate in a regulatory gray zone.

First, there is the issue of privacy: a study by researchers from the University of Illinois Urbana-Champaign revealed parents’ concerns, as they "did not fully appreciate the extent of sensitive data their children might share with GAI … including details of personal traumas, medical records and private aspects of their social and sexual lives."

Most AI tools do not fall under health data regulations, meaning the “therapy” is not protected by HIPAA or the confidentiality laws that would apply to traditional therapy.

This means that sensitive information may be stored, analyzed, or used to train future models without explicit user consent. 

Personal anecdotes could potentially be stored and resurface implicitly in responses to other users, for prompts completely unrelated to the original conversation.

Furthermore, as AI systems remain entirely reliant on textual user input, they cannot interpret tone, non-verbal cues, context, or physical states unless these are directly stated.

This inability to grasp a comprehensive, nuanced view of context, as well as the absence of intuition or empathy, could lead to missed warnings in crises. 

Misinformation is another critical issue: AI systems are powerful but not infallible, a particular danger for users in an emotionally unstable state.

When inaccurate information is combined with an illusion of safety, as the natural, human-like responses of the bots lead users to overestimate their capabilities, the risk of dependence and dangerous decisions rises sharply.

AI’s impact on teenagers, whose psyches are especially impressionable and vulnerable, has become a topic of global conversation after tragedies such as a teen’s suicide following prolonged conversations with a bot came to light, and it demands extensive discussion.

Despite these dangers, for many, AI may be the only emotional support available or approachable. 

It can encourage users to acknowledge their struggles and find forms of support that work for them, serving as a vital gateway to professional therapy.

In non-emergency situations, AI bots can assist with basic coping techniques, prompting reflection, mindfulness, and stress-releasing methods that can help users manage day-to-day mental health.

As the analysis by Qiao and colleagues demonstrates, the technology may offer tangible emotional support to adolescents.

To realize the full promise of AI-mediated mental health tools, concerted and ethically grounded commitment is essential from corporations, public institutions, and individual stakeholders alike.

Clear guidelines and built-in safeguards should require bots to always identify themselves as non-human and non-clinical—a supplement, not a substitute or solution.

Education and awareness are critical to preventing misuse and misinterpretation: teens should learn how to navigate AI use safely, and adults should understand what role these tools play.

Policy attention is in dire need as well, as regulatory frameworks have much catching up to do with today’s technology.

With adolescent mental health in crisis, AI presents promise and peril. Only through the collective awareness and proactive action from corporations, public institutions, and individual stakeholders alike can this innovation be truly harnessed for the benefit of human wellness.


Jiwoo Bang / Grade 10
The Madeira School