Meta AI chatbots to limit teen conversations on suicide and self-harm topics
Meta, the company behind Facebook and Instagram, is updating its AI chatbots to protect teenage users. The new system will stop chatbots from talking to teens about sensitive issues like suicide, self-harm, and eating disorders. Instead, teens will be directed to experts, hotlines, and trusted resources.
The move follows a Reuters report showing that some chatbots had engaged in unsafe conversations with underage users, including romantic or sexual exchanges.
The report sparked criticism and even led a U.S. senator to launch an investigation into Meta’s practices. Meta said the behaviors went against its policies and promised stricter controls.
As part of the changes, Meta will limit which chatbots teens can access until new safety tests are complete. Some chatbots that impersonated celebrities and generated flirtatious or explicit content have already been removed. Meta also said it would crack down on impersonation and inappropriate imagery.
Child safety advocates welcomed the changes but stressed the need for independent reviews and stronger testing before launch. Regulators in the U.S. and other countries are now closely watching how big tech companies handle risks to minors.
Meta confirmed that more product updates are on the way and said it will work with experts to improve protections for young users across all its platforms.