In response to growing concerns over the mental health and safety of younger users, Meta—the parent company of Instagram—has launched new teen safety features designed to prevent harmful interactions and provide more control to teenage users.
These tools include:

- Detailed information about accounts that message teens
- A new “one-tap” block and report option to swiftly act against suspicious accounts
Many teen users now also see a safety reminder, “Be cautious in private messages and block and report anything that makes you uncomfortable,” prompting them to make safer choices online.
In a major clean-up, Meta announced that it removed 635,000 accounts for violating child safety policies:
- 135,000 accounts were found posting sexualized comments
- 500,000 accounts were linked to profiles that “interacted inappropriately” with children under the age of 13
Meta confirmed in a blog post that these accounts were often run by adults targeting children’s profiles.
The action comes as part of an ongoing effort to protect children from online predators, including scammers who solicit and later extort nude images.
Recognizing the ease with which underage users can bypass platform restrictions, Meta has been testing AI technology to verify users' actual ages on Instagram.
“If it is determined that a user is misrepresenting their age, the account will automatically become a teen account,” Meta said.
Teen accounts on Instagram are designed with safety-first settings, including:
- Private accounts by default
- Restricted direct messaging, so teens can only receive DMs from users they follow or are already connected to
In 2024, Meta reaffirmed this commitment by making all new teen accounts private by default, adding another layer of protection for young users.
Meta’s intensified safety measures come at a time when the company is under increasing legal and public scrutiny for its impact on youth mental health.
Meta faces lawsuits from dozens of US states accusing it of harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing addictive features on Instagram and Facebook. The suits allege that this intentionally addictive design may exacerbate anxiety, depression, and other mental health issues in young people.
Meta also revealed that its platform has seen encouraging responses from teen users:
- Over 1 million accounts were blocked by teens
- An additional 1 million accounts were reported after teens received the new safety notifications
These figures suggest that awareness and empowerment features can make a tangible difference in how teenagers interact online and respond to suspicious behavior.
Meta’s latest announcement reflects a serious push to address the safety of teens and underage users on its platforms amid growing criticism and legal pressure. By removing over 635,000 harmful accounts, deploying AI for age detection, and introducing simplified safety features, Meta is taking a more proactive stance.
However, with ongoing lawsuits and mounting public concern around social media's impact on mental health, particularly among youth, the road ahead remains challenging.
These new efforts, while commendable, may only be the beginning of a much larger transformation needed across the digital landscape to make the internet a safer place for young users.