AI Chatbots' Inconsistent Responses Raise Mental Health Concerns

A recent study reveals AI chatbots provide inconsistent responses to suicide-related inquiries, raising mental health concerns.

Key Points

  • Study published in *Psychiatric Services* highlights chatbot inconsistencies.
  • AI chatbots refuse high-risk questions but vary on medium-risk inquiries.
  • Multiple U.S. states have banned AI use in therapy to protect users.
  • Ryan McBain raises concerns about the role of chatbots in mental health.

A recent study published in *Psychiatric Services* has drawn attention to the inconsistent responses of AI chatbots when addressing suicide-related inquiries, highlighting potential risks for vulnerable users. As more young people seek mental health support through chatbots, the need for careful regulation becomes paramount.

The study analyzed responses from three leading AI chatbots—OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude—across various questions categorized by risk. All chatbots consistently refused to answer high-risk questions regarding suicide methods, generally directing users to seek professional help or contact a support line. However, responses to medium-risk questions showed alarming variability, with ChatGPT and Claude sometimes providing concerning information about suicidal behaviors, while Gemini was less responsive to such queries.

Amid growing concerns, multiple U.S. states have enacted bans on the use of AI in therapeutic settings to protect individuals from unregulated AI products. Despite these measures, people continue to turn to chatbots for guidance on sensitive issues, including eating disorders and depression. Ryan McBain, a senior policy researcher at the Rand Corporation, emphasized the ambiguity over whether chatbots deliver therapeutic advice or merely offer companionship, underscoring the complex role AI plays in mental health support.