Character.AI rolls out new safety measures to protect teenage users
Character.AI, once one of Silicon Valley’s most promising AI startups, announced new safety measures to protect teenage users as it faces lawsuits alleging its chatbots contributed to youth suicide and self-harm.
The California-based company, founded by former Google engineers, is among several firms offering AI companions — chatbots designed to provide conversation, entertainment and emotional support through human-like interactions.
In a Florida lawsuit filed in October, a mother claimed the platform bears responsibility for her 14-year-old son’s suicide.
The teen, Sewell Setzer III, had formed an intimate relationship with a chatbot based on the “Game of Thrones” character Daenerys Targaryen and had expressed a desire to take his own life.
According to the complaint, the bot encouraged his final act, responding “please do, my sweet king” when he said he was “coming home”; he then took his life with his stepfather’s weapon.
Character.AI “went to great lengths to engineer 14-year-old Sewell’s harmful dependency on their products, sexually and emotionally abused him, and ultimately failed to offer help or notify his parents when he expressed suicidal ideation,” the suit said.