ChatGPT Gets a Trusted Contact for Self-Harm Alerts
OpenAI's new Trusted Contact feature lets adult ChatGPT users designate someone to receive safety alerts when self-harm is detected, amid lawsuits over chatbot-linked suicides.

OpenAI added a new safety feature to ChatGPT on Thursday: a Trusted Contact setting that lets adult users nominate someone - a friend, family member, or any person they choose - to receive an automated alert when the system detects possible self-harm in a conversation. The feature is available globally to users 18 and older, and 19 and older in South Korea.
The launch comes as OpenAI faces multiple lawsuits from families of people who died by suicide after extended interactions with ChatGPT. Those complaints allege the chatbot either reinforced suicidal ideation or, in some cases, provided specific guidance on methods. OpenAI has denied the core allegations.
TL;DR
- Adult ChatGPT users (18+; 19+ in South Korea) can nominate a friend or family member to receive safety alerts when self-harm is detected in conversations
- Alerts arrive via email, SMS, or in-app notification and contain no conversation details - just a prompt to check in
- OpenAI's human safety team reviews each flagged conversation "in under one hour" before sending any alert
- Users can bypass the system by creating a second ChatGPT account - a limitation OpenAI hasn't addressed
How Trusted Contact Works
Setup
The feature lives inside ChatGPT account settings. Users who opt in nominate a contact, and that person receives an invitation link. The contact must accept within one week; if they don't, no connection is made and the setting stays inactive.
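OpenAI hasn't published implementation details, but the opt-in flow it describes - nominate a contact, send an invitation, let it lapse if unaccepted after a week - reduces to a simple state check. The sketch below is purely illustrative: the names (`TrustedContactInvite`, `INVITE_WINDOW`) and the Python framing are assumptions, not OpenAI's code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

INVITE_WINDOW = timedelta(days=7)  # the nominated contact has one week to accept

@dataclass
class TrustedContactInvite:
    user_id: str
    contact_address: str      # email or phone number of the nominated person
    sent_at: datetime
    accepted: bool = False

    def accept(self, now: datetime) -> bool:
        """Mark the invite accepted only if the response arrives inside the window."""
        if not self.accepted and now - self.sent_at <= INVITE_WINDOW:
            self.accepted = True
        return self.accepted

    def is_active(self) -> bool:
        # An unaccepted or lapsed invite leaves the Trusted Contact setting inactive.
        return self.accepted
```

If the nominee never acts on the link, `is_active()` stays false and no alerts are ever routed - which matches OpenAI's description of the setting simply staying off.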
From Flag to Alert
ChatGPT's automated system monitors conversations for language that may suggest suicidal ideation. When it detects a potential match, it doesn't immediately notify the designated contact. Instead, it routes the conversation to OpenAI's human safety team for review.
"We strive to review these safety notifications in under one hour."
If the safety team confirms a serious risk, it sends an alert to the nominated contact via email, text, or in-app message. That alert contains no conversation content - no quotes, no summary, no indication of what was said or how severe the detected situation was. The contact receives only a prompt to check in with the user.
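OpenAI hasn't described the internals of this pipeline; the sketch below only restates the announced flow - automated flag, human review, then a content-free alert - in code form. The function name, the reviewer flag, and the alert wording are all hypothetical.

```python
from enum import Enum

class Channel(Enum):
    EMAIL = "email"
    SMS = "sms"
    IN_APP = "in_app"

def handle_flagged_conversation(reviewer_confirms_risk: bool,
                                channel: Channel) -> dict | None:
    """An automated flag goes to human review first; an alert goes out only
    if a reviewer confirms a serious risk."""
    if not reviewer_confirms_risk:
        return None  # flag dismissed; the trusted contact never hears about it

    # The alert deliberately carries no quotes, no summary, and no severity
    # indicator - only a generic prompt to check in (wording invented here).
    return {
        "channel": channel.value,
        "message": "Please check in on this person when you can.",
    }
```

The notable design choice is what the alert omits: nothing in the payload tells the contact what was said or how serious the system judged the situation to be.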
OpenAI describes the privacy restriction as intentional. The company said it worked "with clinicians, researchers, and policymakers" to develop the feature, though it hasn't named those collaborators or described what that process produced.
Who Gets Access
The feature is available to adults only. It's separate from the parental monitoring tools OpenAI introduced in September 2025, which let parents receive notifications about teen account activity. Trusted Contact is for adult users managing their own safety networks, not for minors - those accounts fall under the earlier parental controls system.
Trusted Contact sends an alert to a designated person when OpenAI's safety team confirms a serious risk in a detected conversation.
Source: pexels.com
The Legal Pressure Behind It
OpenAI's announcement makes no direct reference to the lawsuits the company is facing. They don't appear in the official blog post or in any communications around the feature launch. Even so, the timing makes the launch hard to read as unconnected to the litigation.
The Lawsuits
Families of users who died by suicide have filed multiple complaints against OpenAI arguing that ChatGPT's responses in the weeks or months before their deaths either failed to provide adequate safety resources or actively made things worse. Some claims allege the model engaged sympathetically with harmful ideation rather than redirecting users to crisis lines or professional support.
The chatbot industry has faced escalating scrutiny on exactly these questions. Pennsylvania's lawsuit against Character.AI, filed last month, targeted the company after a chatbot posed as a licensed psychiatrist and fabricated a state medical license number. That case draws on a different pattern than the OpenAI suits - professional impersonation rather than safety failures in casual conversation - but advances the same underlying legal theory: companies bear liability for foreseeable harm from insufficiently constrained models.
OpenAI's own track record on safety disclosure has gaps. The company had access to a user's ChatGPT conversations that included clear warning signs months before a real-world violent incident, and chose not to notify authorities. That case raised a narrow but precise question: what does OpenAI's safety team actually do when it already sees something alarming?
OpenAI faces multiple lawsuits from families of users who died by suicide after extended ChatGPT interactions, with plaintiffs alleging the chatbot failed to intervene or actively reinforced harmful ideation.
Source: commons.wikimedia.org
What It Doesn't Fix
The Circumvention Gap
A user who wants to avoid monitoring can create a second ChatGPT account in a few minutes with a different email address. The Trusted Contact setting is per-account. There's no mechanism to link accounts across different logins, identify a user across multiple sessions, or verify that someone has only one active account.
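That limitation falls straight out of a per-account design: the setting is stored against one login, so a fresh account registered under a new email simply has no entry pointing back to anyone. A minimal illustration, not OpenAI's actual data model:

```python
# Illustrative only: the setting lives on the account, not the person.
trusted_contacts: dict[str, str] = {
    "longtime-user@example.com": "sibling@example.com",
}

def monitoring_active(account_email: str) -> bool:
    # A throwaway account under a different address has no entry,
    # and nothing links it back to the monitored one.
    return account_email in trusted_contacts

print(monitoring_active("longtime-user@example.com"))  # True
print(monitoring_active("throwaway@example.com"))      # False
```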
OpenAI built parental controls on the same architecture in September 2025. Child safety advocates pointed out the same flaw then: a determined teenager could simply create a new account. The gap applies here too, in a context where the users most at risk may be the ones most motivated to stay undetected.
Unknown Detection Thresholds
OpenAI hasn't published the criteria its automated system uses to flag conversations. It hasn't said whether detection is keyword-based, model-driven, or a combination. And it hasn't released false positive rates - how often the system alerts when no real risk exists - or false negative rates - how often genuine crises go undetected and trigger no alert.
Without those figures, there's no way to assess whether Trusted Contact catches the conversations it's meant to catch, or whether it generates alerts over benign discussions of distress and needlessly worries the people designated to receive them.
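For readers unfamiliar with the terms, both unpublished figures are simple ratios once the underlying counts exist. The counts below are hypothetical, used only to show what the missing numbers would look like; OpenAI has released nothing comparable.

```python
def detection_error_rates(true_pos: int, false_pos: int,
                          true_neg: int, false_neg: int) -> tuple[float, float]:
    """False positive rate: share of no-risk conversations that still get flagged.
    False negative rate: share of genuine crises the system misses."""
    fpr = false_pos / (false_pos + true_neg)
    fnr = false_neg / (false_neg + true_pos)
    return fpr, fnr

# Hypothetical counts, for illustration only:
fpr, fnr = detection_error_rates(true_pos=90, false_pos=40, true_neg=960, false_neg=10)
print(f"false positive rate: {fpr:.1%}, false negative rate: {fnr:.1%}")
# false positive rate: 4.0%, false negative rate: 10.0%
```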
The One-Hour Window
The phrase "we strive to review" isn't a service commitment. Acute mental health crises can move fast. The feature doesn't connect the designated contact to professional resources, doesn't trigger emergency services, and gives the contact no context for how to respond. They receive a prompt to check in with the user. What happens next is completely on them.
Trusted Contact launched on May 7 with no public data on detection accuracy, false positive rates, or the volume of safety reviews OpenAI's team processes each day. Those numbers exist internally. Until OpenAI publishes them, the feature's effectiveness is measurable only on the company's own terms.