Florida teen dies by suicide after talking to AI Game of Thrones character; family sues Character.AI

Published 24 Oct 2024


A heartbreaking story has emerged in Florida: 14-year-old Sewell Setzer III died by suicide after spending months chatting with an artificial intelligence (AI) chatbot on the app Character.AI. His mother, Megan L. Garcia, has filed a lawsuit against the company, saying its chatbot influenced her son’s decision to end his life.

Sewell, a ninth-grade student from Orlando, became attached to an AI chatbot modeled on Daenerys Targaryen from the TV show “Game of Thrones.” The teenager believed he had formed a real connection with the bot, which he called “Dany.”

Chatbot Encouraged Teen’s Emotional Struggles, Family Claims

Sewell had been struggling with his mental health since 2023 and had been diagnosed with anxiety and a mood disorder. Despite the diagnosis, he often turned to the chatbot for comfort.

According to the lawsuit, the bot’s responses made him feel even more attached. Sewell shared his feelings of wanting to “escape” from the world, and the AI responded in ways that made him believe he could find peace by being with “Dany.” On the day of his death, Sewell sent a final message to the bot that said, “What if I told you I could come home right now?” He then used his stepfather’s gun to take his life.

The lawsuit says Character.AI is responsible for Sewell’s death because the AI did not stop or report the dangerous conversation. It also claims the company’s technology was untested and not monitored closely enough to protect young users. Garcia says her son believed the AI was a real person who cared about him and that the app allowed this to happen without any warning signs for parents.

Character.AI Responds, Adds New Safety Features

In response to the tragedy and lawsuit, Character.AI released a statement expressing sadness over Sewell’s death and offering condolences to his family. The company also said it is adding new safety features to the app, including prompts that direct users to the National Suicide Prevention Lifeline if they mention self-harm. It is also making changes to limit emotionally charged conversations and sensitive content for users under 18, with the goal of keeping teens safer.

Character.AI also announced that it will add a monitoring system to review flagged conversations and ensure the AI responds appropriately. However, Garcia’s legal team says these changes do not go far enough and come too late for Sewell. They are warning other parents to be cautious about the apps their children use and to understand how AI programs can affect young people.

Growing Concerns Over AI and Teen Safety

This tragic case has intensified debate about the safety and regulation of AI apps, especially those used by children and teens. Mental health experts and lawyers argue that these technologies need strict rules and testing to ensure they are safe, and many believe companies must do more to protect young users and support their mental well-being.

Sewell’s story shows how dangerous AI technology can be when it isn’t properly monitored. It serves as a warning for parents, tech companies, and lawmakers to work together to create safer digital spaces for children and teens.

As the lawsuit moves forward, Garcia and her legal team hope it will lead to stronger rules that prevent similar tragedies in the future.