
Meta Temporarily Blocks Teens From AI Characters Ahead of Major LA Child Safety Trial

25 Jan 2026
5 min read

News Synopsis

Meta Platforms Inc has announced a temporary restriction on teenagers’ access to artificial intelligence characters across its platforms, a move that comes just days before the company faces a high-profile child safety trial in Los Angeles. The decision reflects increasing scrutiny of how AI-driven conversations impact young users.

Meta Temporarily Blocks Teen Access to AI Characters

Meta is halting teens' access to artificial intelligence characters, at least temporarily, the company said in a blog post Friday.

The company, which owns Instagram and WhatsApp, said that starting in the "coming weeks," teens will no longer be able to access AI characters "until the updated experience is ready."

Who Will Be Affected by Meta’s AI Character Ban?

Minors and Suspected Teen Accounts

The restriction applies to anyone who gave Meta a birthday that makes them a minor, as well as "people who claim to be adults but who we suspect are teens based on our age prediction technology."

Meta’s age-detection systems rely on signals such as activity patterns and account behavior to estimate a user’s real age, allowing the company to enforce safeguards even when age details may be inaccurate.

AI Assistant Still Available for Teens

What Teens Can Still Use

Despite the restriction, teens will not be completely cut off from AI tools on Meta platforms.

Teens will still be able to access Meta's AI assistant, just not the characters.

The company has positioned its AI assistant as a more controlled and utility-focused tool, compared to conversational AI characters designed for role-play or extended interaction.

Timing Linked to Los Angeles Child Safety Trial

Trial Involving Meta, TikTok, and YouTube

The move comes the week before Meta, along with TikTok and Google's YouTube, is scheduled to stand trial in Los Angeles over the harms their apps allegedly cause to children.

The lawsuit alleges that social media platforms have contributed to mental health issues among minors by encouraging excessive engagement and exposing children to harmful content.

Growing Industry Concerns Around AI and Children

Other Companies Have Taken Similar Steps

Other companies have also banned teens from AI chatbots amid growing concerns about the effects of artificial intelligence conversations on children.

Character.AI announced its ban last fall.

Lawsuits Highlight AI Chatbot Risks

Serious Allegations Against AI Platforms

That company is facing several lawsuits over child safety, including one filed by the mother of a teenager who says the company's chatbots pushed her son to kill himself.

These cases have intensified global debate around AI accountability, emotional manipulation risks, and the need for stronger guardrails when minors interact with advanced conversational systems.

Why Meta’s Decision Matters

Regulatory and Legal Pressure Mounts

Meta’s move signals a broader shift in how technology companies are responding to legal, regulatory, and public pressure related to child safety. Temporarily removing AI characters may help the company demonstrate proactive risk management as it faces courtroom scrutiny.

Key Takeaways

  • Meta is temporarily blocking teens from AI characters across its platforms

  • The restriction applies to confirmed minors and suspected teen users

  • Meta’s AI assistant will remain accessible to teens

  • The move comes days before a major child safety trial in Los Angeles

  • Similar bans have already been implemented by other AI chatbot companies
