2023
DOI: 10.3389/frai.2023.1229805

A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement

Surjodeep Sarkar,
Manas Gaur,
Lujie Karen Chen
et al.

Abstract: Virtual Mental Health Assistants (VMHAs) continuously evolve to support the overloaded global healthcare system, which receives approximately 60 million primary care visits and 6 million emergency room visits annually. These systems, developed by clinical psychologists, psychiatrists, and AI researchers, are designed to aid in Cognitive Behavioral Therapy (CBT). The main focus of VMHAs is to provide relevant information to mental health professionals (MHPs) and engage in meaningful conversations to support ind…

Cited by 8 publications (6 citation statements)
References 120 publications
“…This analysis serves to illustrate the current shortcomings in effectively addressing potentially hazardous mental states, deficiencies that could lead to severe consequences if these systems are deployed without due consideration in the context of mental health support. While conversational AI exhibits promising capabilities, compelling evidence indicates that it may advance at a pace that outstrips associated safety measures [ 28 ]. While the rapid progress in innovation is undeniably fascinating, it is imperative for researchers and developers to proactively prioritize ethical considerations pertaining to transparency, explainability, bias mitigation, user privacy, system access controls, and the micro-targeting of vulnerable populations.…”
Section: Discussion
confidence: 99%
“…Such attributes are important when LLMs are designed for critical applications like motivational interviewing (Sarkar et al. 2023). Motivational interviewing is a communication style often used in mental health counseling, and ensuring logical coherence and semantic relatedness in generated responses is crucial for effective interactions (Shah et al.…”
Section: Defining Consistency, Reliability, User-level Explainability a...
confidence: 99%
“…In psychiatry, LLMs like ChatGPT can provide accessible mental health services, breaking down geographical, financial, or temporal barriers, which are particularly pronounced in mental health care (4, 9). For instance, ChatGPT can support therapists by offering tailored assistance during various treatment phases, from initial assessment to post-treatment recovery (10-12). This includes aiding in symptom management and encouraging healthy lifestyle changes pertinent to psychiatric care (10-14).…”
Section: Introduction
confidence: 99%