Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology 2022
DOI: 10.18653/v1/2022.clpsych-1.3
Explaining Models of Mental Health via Clinically Grounded Auxiliary Tasks

Abstract: Models of mental health based on natural language processing can uncover latent signals of mental health from language. Models that indicate whether an individual is depressed, or has other mental health conditions, can aid in diagnosis and treatment. A critical aspect of integration of these models into the clinical setting relies on explaining their behavior to domain experts. In the case of mental health diagnosis, clinicians already rely on an assessment framework to make these decisions; that framework ca…
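
The sketch below is a rough illustration (not the authors' released code) of how clinically grounded auxiliary tasks of this kind are often wired up: a shared text encoder feeds a primary depression head plus one auxiliary head per PHQ-9 symptom item, so the auxiliary predictions can be surfaced as clinician-friendly explanations of the primary decision. The encoder choice, dimensions, head names, and loss weighting are illustrative assumptions.

```python
# Minimal sketch of a multi-task model with PHQ-9 auxiliary heads.
# Assumptions: a simple mean-pooled embedding encoder stands in for a
# pretrained language-model encoder; each PHQ-9 item is treated as a
# binary auxiliary prediction.
import torch
import torch.nn as nn


class PHQ9AuxiliaryModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, num_phq9_items=9):
        super().__init__()
        # Shared encoder: mean-pooled token embeddings.
        self.encoder = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
        # Primary task: binary depression indicator.
        self.depression_head = nn.Linear(embed_dim, 2)
        # Auxiliary tasks: one binary head per PHQ-9 symptom item
        # (e.g., anhedonia, sleep problems, fatigue, ...).
        self.phq9_heads = nn.ModuleList(
            [nn.Linear(embed_dim, 2) for _ in range(num_phq9_items)]
        )

    def forward(self, token_ids, offsets):
        shared = self.encoder(token_ids, offsets)
        primary_logits = self.depression_head(shared)
        aux_logits = [head(shared) for head in self.phq9_heads]
        return primary_logits, aux_logits


def multitask_loss(primary_logits, aux_logits, depression_label, phq9_labels,
                   aux_weight=0.5):
    """Joint objective: primary loss plus a weighted sum of auxiliary losses."""
    ce = nn.CrossEntropyLoss()
    loss = ce(primary_logits, depression_label)
    for logits, label in zip(aux_logits, phq9_labels):
        loss = loss + aux_weight * ce(logits, label)
    return loss


# Toy usage:
# model = PHQ9AuxiliaryModel()
# tokens = torch.tensor([3, 17, 42, 8, 91])   # two posts, concatenated token ids
# offsets = torch.tensor([0, 3])              # start index of each post
# primary, aux = model(tokens, offsets)
```

At inference time, the per-item auxiliary outputs can be reported alongside the primary prediction, mirroring the symptom structure clinicians already use when applying the PHQ-9.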

Cited by 5 publications (2 citation statements)
References 25 publications (25 reference statements)
“…In addition, explainability saw an uptake of 23% in terms of safety across the same language models. Similar results were noticed when PHQ-9 was used in explainable training of language models (Zirikly and Dredze, 2022 ). Given these circumstances, VMHAs can efficiently integrate with clinical practice guidelines such as PHQ-9 and GAD-7, utilizing reinforcement learning.…”
Section: Safe and Explainable Language Models in Mental Health
Citation type: supporting (confidence: 79%)
“…Adherence can be thought of as aligning the question generation and response shaping process in a VMHA to external clinical knowledge such as PHQ-9. For instance, Roy et al and Zirikly et al demonstrated that under the influence of datasets grounded in clinical knowledge, the generative model of VMHA can provide clinician-friendly explanations (Zirikly and Dredze, 2022 ; Roy et al, 2023 ). Another form of adherence is in the form of regulating medication adherence in users.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)