OpenAI Launches 'Trusted Contact' Feature to Alert Contacts in Mental Health Crises

OpenAI has rolled out a new 'Trusted Contact' feature for ChatGPT that allows the system to notify a user-designated individual when automated tools and human reviewers detect signs of severe emotional distress or self-harm. The feature is available globally and requires user opt-in; it does not share chat transcripts, instead sending the contact a prompt to 'check in' with the user.
The process begins with AI flagging high-risk conversations, which are then reviewed by a human moderation team before any alert is issued. OpenAI emphasizes that no personal data is shared with the contact, only a notification intended to initiate real-world support. The company describes the feature as a response to growing concerns about AI's role in mental health and to prior legal actions alleging that AI interactions worsened psychological crises.
Mental health experts have expressed caution over the initiative. Dr. Samir Parikh, Director of Mental Health at Fortis Healthcare, noted that human therapists use professional discretion and typically involve others only with consent, a nuance AI cannot replicate. Dr. Nimesh Desai, former Director of IHBAS, questioned whether algorithms can accurately interpret complex thought patterns or cultural expressions of distress.
Critics also warn the feature may encourage self-censorship: users aware of monitoring may avoid sharing sensitive thoughts with ChatGPT. The feature highlights broader ethical challenges in deploying AI for mental health intervention, including the cultural competence of remote review teams and the balance between safety and privacy.
OpenAI said the feature will be closely monitored for user feedback and clinical input. The company plans to evaluate its impact over the coming months in collaboration with mental health professionals.