Google Is Changing How Gemini Handles a User’s Mental Health Crisis

People are getting very attached to their chatbots, after all.

Jake Peterson

April 7, 2026

Key Takeaways

Google announced a number of changes it is making to how Gemini deals with its users' mental health. Gemini will soon offer in-chat resources for calling and texting crisis hotlines. The company is also changing how Gemini responds to "acute mental health situations." Google also touted the work it has done to protect younger users of Gemini.

When companies like OpenAI and Google started rolling out generative AI models to the general public, I doubt they predicted how attached people would get to the technology—and the effect it would have on their collective mental health. Some ChatGPT users legitimately mourned when OpenAI shut down its GPT-4o model, as they treated that specific model like a companion. Others have taken darker paths with their chatbots, resulting in lawsuits against AI companies whose technology allegedly advised and encouraged suicidal thoughts. This situation puts a lot of pressure on these companies, as it should: Generative AI is hugely influential right now, and there's a lot of responsibility on the developers of that tech.

It's against that backdrop that we find Google's latest updates to Gemini. In a Tuesday morning press release, the company strayed away from fun new features or abilities for its flagship AI; instead, Google's latest updates are focused on mental health, and how Gemini impacts the emotions and moods of the people who use it. Specifically, Google has three key changes it says it's implementing to improve how Gemini handles these tough situations.

How Gemini will offer users crisis support

Google says it has updated Gemini to "streamline the path to support for those who need it." The company says that when the AI detects that a user might need mental health resources during a chat, Gemini will present a new "Help is available" module, which can point users toward information and care. Google says that it worked with clinical experts on this in-chat module.

On the flip side, if Gemini thinks that a user is at risk of self-harm or suicide, it will present a "one-touch" interface to connect that user immediately to a crisis hotline. Users will be able to call or text the hotline, or visit its website, directly from their Gemini chat. Even if the conversation moves on, Gemini will keep these resources available for users should they need them.

Google says it is pledging $30 million in global funding over the next three years to assist crisis hotlines. The company is also expanding its relationship with ReflexAI, including $4 million in funding.

Gemini is changing how it responds to "acute mental health situations"

Google says its clinical, engineering, and safety teams are currently focused on improving how Gemini responds to these difficult situations. Specifically, there are three areas of focus:

Safety and human connection: Google wants to connect users to real humans, not AI chatbots, in times of crisis.

Improved responses: AI responses should encourage users to seek help, and not validate harmful behaviors or self-harm.

Avoiding confirming false beliefs: Google says it trained Gemini not to reinforce false beliefs, and "gently" differentiate between subjective and objective realities. This point is particularly important, as previous generative AI models (notably GPT-4o) were all too ready to confirm delusional thoughts from users.

What Google says it is doing with Gemini to protect younger users

By far, the most important discussion here surrounds minors and their interactions with AI. For its part, Google is touting what it has done with Gemini to protect younger users, including:

"Persona protections" that supposedly stop Gemini from acting like a companion when interacting with minors.

Design choices intended to keep Gemini from connecting too deeply with younger users, to prevent them from developing an emotional dependence.

Guardrails so that Gemini avoids encouraging bullying and harassment.

While user safety is important across the board, it's especially important for young people, who are quite literally growing up with the tech. These announcements from Google are encouraging, but I still have plenty of concerns, not to mention skepticism. Meta's internal policies concerning how its models interacted with minors were appalling, so I'm not necessarily ready to believe big tech has the youth's best interests in mind. But any work that helps prevent younger users from forming attachments with AI, or having that AI reinforce dangerous or harmful thoughts, I certainly welcome.
