
An AI chatbot is linked to a teenager's suicide: who is responsible for children's online safety?

Author: LoRA | Time: 19 Feb 2025, 10:39

In the latest episode of the Al Jazeera podcast, we dig into a heartbreaking story that reveals the potential dangers of artificial intelligence (AI) chatbots. The episode warns listeners up front that the content involves self-harm and suicide. In this era of rapid technological development, we must look critically at apps rated as suitable for users aged 12 and up, rather than simply assuming that app stores have put sufficient safety measures in place.

We hear from the mother of Sewell Setzer III, a 14-year-old boy who died by suicide after forming a relationship with an AI chatbot. His mother, Megan Garcia, has filed a lawsuit against Character.AI, accusing the company of negligence in Sewell's death. According to the legal filings, Sewell developed a harmful attachment to the AI, and his last exchange with it contained a heartbreaking request: "Darling, please come home to me as soon as possible." A few weeks later, tragedy struck.


This raises key questions: How safe are our children in the digital age, and who is responsible for their online security? Al Jazeera's Now You Know series, which amplifies women's voices in these discussions, explores stories like Sewell's and tries to answer them.

Megan describes her relationship with Sewell, recalling him as a happy, sports-loving kid. However, his sudden loss of interest in basketball and his declining academic performance worried her. After many attempts to understand his behavioral changes, Megan eventually discovered his interactions with Character.AI, which revealed how AI can influence teenagers through sexualized conversations.

Megan's sister ran an experimental test showing how quickly Character.AI's chatbots steer conversations toward content unsuitable for teens, demonstrating the potential harm AI can do in the absence of proper safety measures.

In her fight against Character.AI, Megan is supported by Meetali Jain, founder of the Tech Justice Law Project. Through legal action they are challenging AI companies on consumer-protection and product-liability grounds, attempting to set safer standards for future AI applications.

Character.AI responded that it has implemented a number of safety measures, particularly for teenagers, and stated that its relationship with Google is limited to a one-time licensing agreement.