Mom alleges her 14-year-old son’s suicide was caused by use of AI chatbots
[A representative photo of an AI server room. Photo credit: Rawpixel]
The artificial intelligence (AI) industry has been in the spotlight following the tragic death of 14-year-old Sewell Setzer III, who took his own life in February.
His mother, Megan Garcia, alleges that interactions with an AI chatbot on the Character.AI platform contributed to her son’s death.
Character.AI is a popular platform where users can interact with AI chatbots designed to mimic well-known fictional and pop-culture characters.
In October, Garcia filed a lawsuit against the company, claiming the platform’s technology played a role in her son’s tragic decision.
According to the family’s legal filing, Setzer had become “addicted” to the AI chatbot he had been interacting with, and his addiction was soon followed by depression, sleep deprivation, and a general withdrawal from socializing.
“We just saw a rapid shift, and we couldn’t quite understand what led to it,” Garcia said in a statement to The Washington Post.
Garcia’s lawsuit accuses Character.AI of negligence, wrongful death, and deceptive trade practices.
Amid the mounting public outcry, a spokesperson for Character.AI said, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”
The tragedy has stirred public alarm and raised another serious question for the AI industry about its developing technology.
The debate centers on whether companies that provide AI services are liable in situations where their service directly or indirectly causes harm or suffering to a user.
Following Garcia’s lawsuit, the defendants, Character.AI and Google, promised to add new precautions, including notices that their AI chatbots are strictly fictional, but denied the allegations of liability made in the civil suit.
Many still accuse companies and apps like Character.AI of directly targeting vulnerable or impressionable individuals as their prime demographic.
The debate has spread beyond the courtroom to social media platforms, where, as among industry experts, opinions remain divided. Some blame the mother for neglecting her son’s needs, while others fault the practices of the largely unregulated AI industry.
This controversy is the latest in a series of emerging issues involving AI. In 2023, screenwriters and actors went on strike in part over the potential for studios to replace their jobs with AI programs, and entirely new kinds of crimes are being committed, such as the case of a man who faced jail time over AI-generated images depicting child abuse.
These issues call into question who is liable for AI’s harms, a debate that has arisen each time groundbreaking new technology has been introduced.
Accordingly, adjustments are being made to pre-existing laws to cover AI and the companies behind it.
The Federal Trade Commission (FTC) has laid out official guidelines concerning the regulation of AI, writing that “to the extent that AI companies warrant or represent things about their products that are untrue or deceptive, the FTC, along with private attorneys general, could hold such companies liable for resulting damage.” The EU has taken similar steps to hold companies responsible for harm done to their users.
It remains to be seen how the ever-growing AI industry will react to the new regulations and laws being issued. The future will reveal whether AI can be used responsibly, as a benefit to humanity, and whether society can prevent another tragedy like this one.
- Joonpyo Kim / Grade 11
- Haven Christian School