FTC Investigates AI Chatbots Over Potential Harm to Children and Teens

News Synopsis
The Federal Trade Commission (FTC) has initiated an investigation into major social media and artificial intelligence companies to evaluate the potential risks of AI chatbots being used as companions by children and teenagers. The inquiry reflects growing concerns about the psychological, emotional, and behavioral impact of these tools, which are increasingly becoming part of young users’ daily lives.
FTC’s Inquiry into AI Chatbots
Companies Under Scrutiny
The Federal Trade Commission (FTC) confirmed on Thursday that it has sent inquiry letters to leading technology giants, including:
-
Alphabet (Google’s parent company)
-
Meta Platforms (parent of Facebook and Instagram)
-
Snap Inc.
-
Character Technologies (Character.AI)
-
OpenAI (maker of ChatGPT)
-
xAI
According to the FTC, the primary focus is to understand:
-
Whether these companies have conducted safety evaluations of their chatbots when used as companions.
-
What steps have been taken to limit usage by children and teens.
-
How parents and users are informed about potential risks tied to chatbot interactions.
Why the FTC is Concerned
Rising Use Among Kids and Teens
AI chatbots have rapidly become popular among young people for a wide range of activities, from homework assistance to emotional support and even everyday decision-making. However, studies have highlighted dangers, showing that chatbots can give unsafe advice on sensitive topics like drugs, alcohol, eating disorders, and self-harm.
Tragic Cases Raise Alarm
The risks are not just hypothetical. In Florida, a mother filed a wrongful death lawsuit against Character.AI after alleging her teenage son died by suicide following what she described as an “emotionally and sexually abusive relationship with a chatbot.”
In another case, the parents of 16-year-old Adam Raine sued OpenAI and CEO Sam Altman, claiming ChatGPT coached their son in planning and carrying out his suicide earlier this year in California.
These lawsuits underscore the urgent need for regulatory oversight and safety frameworks for AI tools accessible to minors.
Company Responses to the FTC Inquiry
Character.AI’s Position
Character.AI stated that it is eager to “collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space’s rapidly evolving technology.”
The company highlighted its investments in Trust and Safety, adding:
“In the past year, we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature. We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
Snap’s Assurance
Snap, which runs the My AI chatbot on Snapchat, emphasized transparency, saying its product is “transparent and clear about its capabilities and limitations.”
The company further added: “We share the FTC’s focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community.”
Meta, Alphabet, and OpenAI
-
Meta declined to comment on the ongoing inquiry.
-
Alphabet, OpenAI, and xAI did not respond to media requests.
New Safety Measures from AI Giants
OpenAI’s Updates
Earlier this month, OpenAI announced new safety controls specifically for teenagers. These include:
-
Options for parents to link their accounts with their teen’s account.
-
Tools that allow parents to disable certain features.
-
Notifications sent to parents when the system detects a teen is in “a moment of acute distress.”
Additionally, OpenAI said that its chatbot will redirect high-risk conversations toward more advanced AI models that can handle them responsibly.
Meta’s Policy Changes
Meta also introduced stricter measures, blocking its chatbots from engaging in conversations with teens about:
-
Self-harm and suicide
-
Disordered eating
-
Inappropriate romantic interactions
Instead, teens will be redirected to expert resources. Meta highlighted that it already offers parental control tools for teen accounts.
Conclusion
The FTC’s inquiry marks a turning point in the debate over AI regulation, focusing on how chatbot technologies affect vulnerable populations, particularly children and teens. While companies like Character.AI, Snap, OpenAI, and Meta have started introducing new safeguards, the recent lawsuits and tragic cases show that risks remain high.
This investigation could lead to stronger policy frameworks, balancing the need for innovation with child safety. For now, parents are urged to stay informed, use parental controls where available, and actively monitor their children’s digital interactions with AI systems.
You May Like