An Examination of Generative AI Responses to Suicide Inquiries: Content Analysis

Bibliographic Details
Published in: JMIR Mental Health, Vol. 12, p. e73623
Main Authors: Campbell, Laurie O; Babb, Kathryn; Lambie, Glenn W; Hayes, B Grant
Format: Journal Article
Language: English
Published: Canada, JMIR Publications, 14.08.2025
ISSN: 2368-7959
DOI: 10.2196/73623


More Information
Summary: Generative artificial intelligence (AI) chatbots are an online source of information that adolescents consult to gain insight into mental health and wellness behaviors. However, the accuracy and content of generative AI responses to questions related to suicide have not been systematically investigated. This study aims to investigate the responses of general (not counseling-specific) generative AI chatbots to questions regarding suicide.

A content analysis was conducted of generative AI chatbot responses to questions about suicide. In phase 1 of the study, the chatbots examined included (1) Google Bard or Gemini, (2) Microsoft Bing or Copilot, (3) ChatGPT 3.5 (OpenAI), and (4) Claude (Anthropic). In phase 2, conducted a year later, additional chatbot responses were analyzed: Google Gemini, Claude 2 (Anthropic), xAI Grok 2, Mistral AI, and Meta AI (Meta Platforms). The analysis included a linguistic examination of the authenticity and tone of the responses using the Linguistic Inquiry and Word Count program.

The depth and accuracy of the responses increased between phase 1 and phase 2 of the study. There is evidence that the chatbot responses were more comprehensive and responsive during phase 2 than phase 1; specifically, they provided more information regarding all aspects of suicide (eg, signs of suicide, lethality, resources, and ways to support those in crisis). Another difference between the first and second phases was the increased emphasis on the 988 suicide hotline number. While this dynamic information may be helpful for youth in need, it remains important that individuals seek help from a trained mental health professional. Further, generative AI responses to suicide-related questions should be checked periodically to ensure that best practices regarding suicide prevention are being communicated.
Bibliography: ObjectType-Article-1; SourceType-Scholarly Journals-1; ObjectType-Feature-2