Meta, the parent company of Instagram, has rolled out several new safety tools to make the platform more secure for teenagers in India. With India being one of Instagram's largest user bases, these updates are aimed at offering a safer digital environment for young users. The new features focus on direct messaging (DM), suspicious account alerts, simplified blocking options, and improved controls for child-run or adult-managed accounts.
To help teenagers make safer decisions while chatting on Instagram, the platform has introduced real-time safety reminders in the direct messaging (DM) section. These reminders will appear even if both users follow each other. When a teen starts a chat, Instagram will prompt them with suggestions to:
Carefully review the profile of the other user
Avoid sharing personal details if the interaction feels uncomfortable
Stay cautious and aware of potential red flags
This proactive approach aims to reduce the chances of teens falling prey to online scams or inappropriate behavior.
Another important safety update is the visible account creation date in chats. Instagram will now show the month and year when the other person’s account was created, directly at the top of the chat window. This information can help teenagers recognize:
New or suspicious accounts that may have been created recently for malicious purposes
Fake profiles often used by scammers or impersonators
By displaying this data upfront, Instagram empowers teens to make informed choices before engaging in conversations.
Meta has also introduced a more streamlined way to protect teenagers from unwanted interactions. Previously, users had to block and report someone in separate steps. Now, Instagram displays a combined “Block and Report” option on chat screens, letting users do both in a single tap.
This feature allows young users to quickly end unpleasant conversations and notify Instagram of harmful behavior in one go, enabling a faster response to online abuse or harassment.
Recognizing that many accounts are operated on behalf of children—by parents, guardians, or talent managers—Meta has extended its teen safety tools to adult-managed accounts used by children under 13.
These accounts will now have Instagram’s highest level of protection enabled by default, which includes:
Tighter message restrictions, limiting who can contact the account
Hidden Words filter, which automatically blocks offensive words in comments and messages
Enhanced safety alerts displayed prominently at the top of the Instagram feed
Meta also clarified that while adults are permitted to manage these accounts, if a child is found using such an account independently, the account will be in violation of Instagram’s policy and may be removed.
As one of Instagram’s fastest-growing markets, India is a natural focus for Meta’s youth-safety efforts, and this rollout reflects that commitment. These updates aim to:
Protect teenagers from unwanted contact and inappropriate content
Help parents and guardians manage their children’s digital experiences more effectively
Build a more age-appropriate and respectful space on social media
The latest changes are part of a broader push by Meta to address rising concerns around online exploitation, cyberbullying, and digital well-being among young users.