News In Brief Media and Infotainment

OpenAI Strengthens ChatGPT Safety With New ‘Trusted Contact’ Mental Health Feature

09 May 2026

News Synopsis

OpenAI has expanded its AI safety measures on ChatGPT with the launch of a new feature called “Trusted Contact,” aimed at supporting users during serious mental health situations.

The feature is designed to encourage users to reach out to trusted friends or family members if conversations with ChatGPT indicate potential risks related to self-harm or suicide.

The company says the initiative is intended to create an additional layer of real-world human support while maintaining user privacy and safety.

The update reflects growing efforts by AI companies to integrate responsible safety systems into conversational AI platforms as millions of people increasingly rely on AI tools for emotional support, personal guidance, and daily assistance.

How the ‘Trusted Contact’ Feature Works

ChatGPT Can Notify a Chosen Emergency Contact in Serious Situations

With the “Trusted Contact” feature, users can select one trusted adult — such as a close friend or family member — who may be contacted if ChatGPT’s systems identify conversations involving serious concerns related to self-harm or suicide.

OpenAI explained the purpose of the feature in a statement:

“Sometimes, when you are having a hard time, it can feel difficult to reach out or ask for help directly. The trusted contact feature is designed to support real-world connections in those moments. It follows four steps, from adding a contact to notification.”

The company emphasized that the goal is not to replace professional mental health services, but rather to help users connect with supportive individuals during difficult situations.

Step-by-Step Process to Add a Trusted Contact on ChatGPT

Users Can Choose One Emergency Contact

The feature follows a structured four-step process designed to balance safety, privacy, and consent.

Step 1 – Selecting a Trusted Adult

Users can choose one trusted adult, such as a friend or family member, as their emergency contact on ChatGPT. Invitations can be sent through various communication channels including:

  • Email
  • SMS
  • WhatsApp
  • In-app notifications

The selected person must then agree to take on the role of trusted contact.

Step 2 – Acceptance of Invitation

The invited contact has one week to accept the request. If the individual declines the invitation or fails to respond within the time limit, the user will need to select another trusted contact.

This process ensures that the designated contact is aware of their role and consents to receiving safety-related notifications if required.

Step 3 – AI Detection and Human Review

If ChatGPT’s automated systems detect conversations related to suicide or self-harm, trained human reviewers may assess the interaction to determine whether there is a potential safety risk.

The addition of human oversight is intended to improve accuracy and reduce false alerts in highly sensitive situations.

Experts say combining AI-based detection with human review has become increasingly important in modern digital safety systems, particularly when handling emotionally vulnerable users.

Step 4 – Trusted Contact Notification

If reviewers determine that there is a serious safety concern, ChatGPT may notify the trusted contact and encourage them to check in with the user.

OpenAI clarified that chat details, private messages, and conversation transcripts are never shared with the trusted contact.

The company stated that privacy protections remain central to the system’s design, ensuring that only limited safety-related notifications are sent when necessary.
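The four-step flow described above can be sketched as a small state machine. This is a hypothetical illustration only: the class, field names, and logic are assumptions built from the article's description (one contact, a one-week acceptance window, AI detection gated by human review, and a transcript-free notification), not OpenAI's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

INVITE_WINDOW = timedelta(days=7)  # contact has one week to accept (per the article)

@dataclass
class TrustedContactFlow:
    """Hypothetical sketch of the Trusted Contact lifecycle (illustrative only)."""
    contact: Optional[str] = None
    invited_at: Optional[datetime] = None
    accepted: bool = False

    def invite(self, contact: str, now: datetime) -> None:
        # Step 1: user selects one trusted adult; an invitation is sent
        self.contact = contact
        self.invited_at = now
        self.accepted = False

    def accept(self, now: datetime) -> bool:
        # Step 2: the invitation lapses if not accepted within one week,
        # in which case the user must choose another contact
        if self.invited_at is None or now - self.invited_at > INVITE_WINDOW:
            self.contact = None
            return False
        self.accepted = True
        return True

    def maybe_notify(self, flagged_by_ai: bool, confirmed_by_reviewer: bool) -> Optional[str]:
        # Steps 3-4: automated detection AND human review must both agree
        # before a limited safety alert is sent; no transcript is shared
        if self.accepted and flagged_by_ai and confirmed_by_reviewer:
            return f"Safety check-in suggested for {self.contact} (no transcript shared)"
        return None
```

Note how the human-review gate in `maybe_notify` means an AI flag alone never triggers an alert, which matches the article's point about reducing false alerts in sensitive situations.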

OpenAI Advises Users to Choose Trusted Contacts Carefully

Privacy and Emotional Trust Remain Key Priorities

OpenAI has cautioned users to select their trusted contact carefully, choosing someone they “feel comfortable being honest with and trust to respond with care.”

The invitation sent to the trusted contact reportedly explains:

  • Their role in the safety system
  • Why they may receive alerts
  • How notifications work if self-harm-related conversations are detected

This transparency is intended to help build trust between users and their chosen emergency contacts.

AI Safety and Mental Health Support Becoming Major Industry Focus

Tech Companies Increasingly Investing in Responsible AI Features

The introduction of the Trusted Contact feature highlights the growing emphasis on AI safety, emotional wellbeing, and responsible technology development across the tech industry.

As AI chatbots become more widely used for personal conversations and emotional support, companies are facing increasing pressure to implement safeguards that can help identify potentially dangerous situations while respecting user privacy.

Industry experts believe AI tools may play an important supportive role in mental health ecosystems by encouraging users to seek real-world human help when needed. However, experts also stress that AI systems are not substitutes for licensed mental health professionals or emergency services.

OpenAI has continued expanding its safety initiatives amid broader global discussions around ethical AI deployment, user protection, and digital wellbeing.

Conclusion

The launch of OpenAI’s “Trusted Contact” feature marks another significant step toward integrating stronger safety and wellbeing measures into AI-powered platforms like ChatGPT. By enabling users to connect with trusted individuals during potentially serious mental health situations, the company aims to create a more supportive and responsible AI experience.

The feature also reflects the growing recognition within the technology industry that advanced AI systems must balance innovation with user safety, emotional wellbeing, and privacy protections. As conversational AI becomes increasingly embedded in everyday life, initiatives like Trusted Contact could play an important role in encouraging real-world human connection and proactive mental health support.
