Meta’s ongoing efforts to safeguard teenagers on Instagram have once again come under scrutiny. A new study, conducted by a coalition of advocacy groups, academics from Northeastern University, and former Meta insider Arturo Bejar, claims that the majority of Instagram’s much-publicized teen safety tools are either “broken, ineffective, or no longer available.” The report raises serious concerns about the platform’s commitment to protecting young users from online harm.
Meta, however, has pushed back strongly against these findings, dismissing the study as “misleading” and “dangerously speculative.” The company argues that its teen-focused features and parental control systems are widely used and effective, claiming that the report misrepresents how its protections actually function.
To test the reliability of Instagram’s safeguards, the researchers created fake teenage accounts and examined 47 of the 53 safety features Meta has publicly introduced. The results were troubling: only eight features were found to work as promised, while the rest were described as unavailable, ineffective, or overly reliant on teenagers themselves to manage risky interactions. Among the shortcomings the researchers highlighted:
Difficulty in flagging sexual advances: The study points to the absence of a straightforward system for minors to report unwanted sexual approaches.
Disappearing messages: Researchers argue that this feature removes accountability, leaving teens without evidence of inappropriate exchanges.
Manual comment hiding: The existing option requires teenagers to actively filter harmful content, placing the burden on them rather than embedding stronger protections in the platform’s design.
The report emphasizes that its focus was not on Instagram’s content moderation policies but rather on its product design choices. How parental controls are applied, how reporting tools function, and whether safety features are intuitive all shape the online experience for teens. According to the researchers, Instagram’s current design “leaves teens exposed” despite the company’s repeated public assurances about improving safety.
Meta has firmly denied the allegations, reiterating its commitment to building safer digital spaces for teenagers. The company accuses the researchers of repeatedly misrepresenting its safety measures. “Teen Accounts lead the industry because they provide automatic safety protections and straightforward parental controls,” company spokesperson Andy Stone told the BBC.
Meta insists that millions of parents and teens actively use these tools every day and that the findings of the report fail to capture the real-world impact of its safety features.
The clash between researchers and Meta highlights the ongoing debate around the safety of young users on social media platforms. While the study claims Instagram’s safety tools fall short of promises, Meta insists its protections are both effective and widely used.
With growing public concern about teen well-being online, the pressure on tech companies like Meta to deliver transparent, reliable, and accountable safety measures is only set to increase. How effectively these features evolve in practice could define trust in social platforms in the coming years.