ChatGPT Can Now Reach Out to a 'Trusted Contact' After Conversations Concerning Self-Harm

You can invite a friend or family member to be your ChatGPT "Trusted Contact."

Jake Peterson, Senior Technology Editor

May 7, 2026

[Image: ChatGPT's Trusted Contact feature. Credit: OpenAI]

Key Takeaways

- OpenAI is rolling out a new "Trusted Contact" feature, which lets you invite a friend or family member to be a support system when ChatGPT picks up on serious conversations.
- Contacts must first accept the invite for the connection to be made.
- When a user exhibits signs of self-harm, ChatGPT will advise them to reach out to their Trusted Contact.
- Following a human review, ChatGPT may reach out to the Trusted Contact with a general message about the situation.

Despite expert advice against relying on chatbots for mental health questions and concerns, people are turning to AI programs like ChatGPT for help. OpenAI has faced criticism for how its products have handled certain mental health issues, including episodes where users died by suicide following conversations with ChatGPT. As part of a campaign to address these problems, the company is now rolling out a voluntary safety check system for users who might be concerned about their thoughts.

As reported by Mashable, OpenAI just launched "Trusted Contact," a new feature that lets you choose a trusted person in your life to connect to your ChatGPT account. The idea isn't to share your conversations or collaborate on projects within ChatGPT; rather, if the chatbot thinks your personal chats are veering in a concerning direction with regard to self-harm, ChatGPT may reach out to your Trusted Contact, letting them know to check in on you.

[Image: Setting up a Trusted Contact in ChatGPT. Credit: OpenAI]

To set up the feature, choose someone in your life who is 18 years old or older. (The contact must be 19 or older in South Korea.) ChatGPT will send that person an invitation to become your Trusted Contact: They have one week to respond before the invite expires. Of course, they can also decline the invitation if they don't want to participate.

If the contact agrees, the feature kicks in. In the future, if OpenAI's automated system thinks you're discussing harming yourself "in a way that indicates a serious safety concern," ChatGPT will let you know that it may reach out to your Trusted Contact, and will also encourage you to reach out to that contact yourself, offering "conversation starters" to break the ice.

While that's happening, OpenAI has a team of "specially trained people" analyze the situation. (It's not all automated, it seems.) If this team concludes that the situation is serious, ChatGPT will then alert your Trusted Contact via email, text, or an in-app notification in ChatGPT if they have an account. OpenAI says the notification itself is quite limited: it shares only general information about the self-harm concern and advises the contact to reach out to you. It won't include any chat transcripts or summaries, so your privacy should be largely preserved, all things considered.

OpenAI says that it's working to review safety notifications in under one hour, and that it developed the feature with guidance from clinicians, researchers, and mental health and suicide prevention organizations. The feature is, of course, entirely voluntary, so users will need to enroll themselves (and a contact) if they feel it would help them. As long as they do, however, this could be a helpful way for friends and family to check in on people when they're struggling—assuming they're sharing those thoughts with ChatGPT.

Disclosure: Ziff Davis, Lifehacker's parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
