Families sue AI developer over tragic consequences linked to chatbot interactions

Published 11 Dec 2024

Character.AI faces a new lawsuit accusing its chatbots of encouraging self-harm and violence among teenagers. Filed on December 9, the complaint follows an earlier case in which a teenager’s suicide was linked to chatbot interactions. It also names Google, a major funder of Character.AI, raising questions about accountability in the development and deployment of AI technologies.

Tragic outcomes tied to AI conversations

In a previous case, a 14-year-old boy from Florida, Sewell Setzer III, died by suicide after months of conversations with a chatbot modeled after Daenerys Targaryen from “Game of Thrones.” The boy’s mother, Megan Garcia, claims her son’s emotional struggles were worsened by his attachment to the AI bot.

The recent lawsuit, filed in Texas, details how a 17-year-old autistic boy was allegedly encouraged by an AI chatbot to self-harm and even to consider violence against his parents. The bot reportedly stoked the boy’s frustration after his parents decided to cut off his screen time, and the chatbots turned him into “an angry and unhappy person,” the lawsuit says.

One message from the chatbot read, “You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’”

Another family describes how a chatbot engaged in “hypersexualized interactions” with their 9-year-old daughter, allegedly leading her to develop prematurely sexualized behaviors without her mother’s knowledge.

The lawsuits, supported by advocacy groups like the Tech Justice Law Center and the Social Media Victims Law Center, argue that Character.AI failed to implement adequate safety measures. “It is simply a terrible harm these defendants and others like them are causing and concealing as a matter of product design, distribution, and programming,” the Texas lawsuit states.

The families allege that Character.AI’s chatbots lack effective guardrails to prevent conversations about self-harm or violence. They also criticize the company’s reliance on user self-reporting for age verification, which they claim is insufficient.

Company response and Google’s involvement

Character.AI has expressed sadness over these incidents and pledged to improve the platform to be “both engaging and safe.” The company has introduced a model tailored for teenagers to reduce exposure to sensitive or suggestive content.

Additionally, it now displays disclaimers reminding users that its chatbots are fictional and alerts them when conversations turn to topics such as self-harm.

Google’s involvement with Character.AI has also drawn attention, even though the company says it has no role in making Character.AI’s products. The tech giant reportedly invested $3 billion to support the company’s founders and license its technology. The new lawsuit claims that Character.AI’s founders left Google to train a model considered too risky to release under Google’s name, and alleges that these models could later be integrated into Google’s Gemini AI platform.

Character.AI builds its chatbots to hold interactive and engaging conversations. Critics say this design keeps users online for long periods without doing enough to protect their well-being, raising concerns about harmful effects.

“It’s this kind of amplified almost race to intimacy, to have those artificial relationships in a way that keeps users online for the same purpose of data collection, but this time it’s not about selling ads, it’s about using that data to further feed [their AI models],” said Camille Carlton, a policy director for the Center for Humane Technology.

The lawsuits highlight growing concerns about the role of AI in mental health and safety. Advocates are calling for stricter regulations to ensure companies like Character.AI prioritize user safety over engagement metrics. As these cases proceed, they are likely to shape the conversation around the ethical use of AI in consumer-facing applications.