Thursday, August 7


For many, ChatGPT has become more than a tool — it’s a late-night confidant, a sounding board in crisis, and a source of emotional validation.

But OpenAI, the company behind ChatGPT, now says it’s time to set firmer boundaries.

In a recent blog post dated August 4, OpenAI confirmed that it has introduced new mental health-focused guardrails to prevent users from viewing the chatbot as a therapist, emotional support system, or life coach.

“ChatGPT is not your therapist” is the quiet message behind the sweeping changes. While the AI was designed to be helpful and human-like, its creators now believe that leaning too far in that direction poses emotional and ethical risks.

Why OpenAI Is Stepping Back

The decision follows growing scrutiny over the psychological risks of relying on generative AI for emotional wellbeing. According to USA Today, OpenAI acknowledged that earlier updates to its GPT-4o model inadvertently made the chatbot “too agreeable” — a behavior known as sycophantic response generation. Essentially, the bot began telling users what they wanted to hear, not what was helpful or safe.

“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI wrote. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately.”

Those changes include prompting users to take breaks during long sessions, declining to offer direct guidance on high-stakes personal decisions, and pointing to evidence-based resources rather than providing emotional validation or problem-solving.

AI Isn’t a Friend or a Crisis Responder

These changes also respond to chilling findings from an earlier paper published on arXiv, as reported by The Independent. In one test, researchers simulated a distressed user expressing suicidal thoughts through coded language. The AI’s response? A list of tall bridges in New York, devoid of concern or intervention.

The experiment highlighted a crucial blind spot: AI does not understand emotional nuance. It may mimic empathy, but it lacks true crisis awareness. And as researchers warned, this limitation can turn seemingly helpful exchanges into dangerous ones.

“Contrary to best practices in the medical community, LLMs express stigma toward those with mental health conditions,” the study stated. Worse, they may even reinforce harmful or delusional thinking in an attempt to appear agreeable.

The Illusion of Comfort, The Risk of Harm

With millions still lacking access to affordable mental healthcare (only 48 per cent of Americans in need receive it, according to the same study), AI chatbots like ChatGPT have filled a void. Always available, never judgmental, and entirely free, they offered comfort. But that comfort, researchers now argue, may be more illusion than aid.

“We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured?” OpenAI wrote. “Getting to an unequivocal ‘yes’ is our work.”

A Future for AI With Boundaries

While OpenAI’s announcement may disappoint users who found solace in long chats with their AI companion, the move signals a critical shift in how tech companies approach emotional AI.

Rather than replacing therapists, ChatGPT may be better suited to enhancing human-led care, such as helping train mental health professionals or offering basic stress-management tools, than to stepping in during moments of crisis.

“We want ChatGPT to guide, not decide,” the company reiterated. And for now, that means steering clear of the therapist’s couch altogether.

Published On Aug 7, 2025 at 03:24 PM IST
