Large language models (LLMs) built on artificial intelligence (AI) – such as ChatGPT and GPT-4 – hold immense potential to support, augment, or even replace psychotherapy. Enthusiasm about such applications is mounting in the field as well as in industry. These developments promise to address insufficient mental healthcare system capacity and scale individual access to personalized treatments. However, clinical psychology is an uncommonly high-stakes application domain for AI systems, as responsible and evidence-based therapy requires nuanced expertise. Here, we provide a roadmap for ambitious yet responsible applications of clinical LLMs. First, we discuss potential applications of clinical LLMs to clinical care, training, and research, emphasizing imminent applications while highlighting areas that present risk given the high-stakes, complex nature of psychotherapy. Second, we outline a continuum of assistive to fully autonomous clinical LLM applications that could be integrated into digital treatment modalities, analogous to the development of autonomous vehicle technology. Third, we outline recommendations for the responsible development of clinical LLMs, which should center clinical science and improvement, involve robust interdisciplinary collaboration, and attend to issues like assessment, risk detection, transparency, and bias. Fourth, we offer recommendations for the critical evaluation of clinical LLMs, arguing that psychologists are uniquely positioned to scope and guide the development and evaluation of clinical LLMs. Last, we outline a vision for how LLMs might allow for a new generation of studies of evidence-based interventions at scale, and how these studies may challenge assumptions about psychotherapy.
Clinical scientists disagree about whether worry and rumination are distinct or represent a unitary construct. To inform this debate, we performed a series of meta-analyses evaluating the relationship between worry and different forms of rumination. A total of 719 effect sizes (N = 69,305) were analyzed. Worry showed a large association with global rumination and with the brooding and emotion-focused subtypes of rumination (rs = .51–.53). However, even when corrected for measurement error, the correlations did not approach unity (ρs = .57–.62). Worry showed a smaller, though still significant, association with the reflection subtype of rumination (r = .28, ρ = .34). Characteristics of the study, sample, and measures moderated the worry–rumination relationship. Worry and rumination, as indexed by current self-report measures, reflect closely related but nonredundant constructs. Given that these constructs have both common and distinct features, researchers should select between them carefully and, when possible, study them together.
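The correction for measurement error mentioned above is the standard Spearman disattenuation formula, which divides an observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch follows; the reliability values used in the example are hypothetical illustrations, not figures from the meta-analysis.

```python
import math

def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for measurement error (Spearman's
    disattenuation): rho = r_xy / sqrt(rel_x * rel_y), where rel_x and
    rel_y are the reliabilities of the two measures."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Illustrative values only: an observed r of .52 with hypothetical
# scale reliabilities of .85 and .88.
rho = disattenuate(0.52, 0.85, 0.88)
print(round(rho, 2))  # prints 0.6
```

Even after this correction, the estimated true-score correlations in the abstract (ρs = .57–.62) remain well below 1.0, which is the basis for concluding the constructs are related but nonredundant.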
There is growing scientific excitement about detecting depression from people’s language use, but this work rarely accounts for anxiety, which overlaps substantially and co-occurs frequently with depression. Using clinical interviews with individuals with varying levels of depression and anxiety, we found that some language patterns are shared by these conditions, whereas other patterns distinguish them. Depressed individuals show more I-usage (e.g., “I,” “me,” “my”) and sadness words (e.g., “low,” “sad,” “alone”), while anxious individuals use a much broader array of negative emotionality language (e.g., anxiety, stress, and counterintuitively, depression), raising implications for the understanding and automatic assessment of these conditions.
Depression has been associated with heightened first-person singular pronoun use (I-usage; e.g., “I,” “my”) and negative emotion words. However, past research has relied on nonclinical samples and nonspecific depression measures, raising the question of whether these features are unique to depression vis-à-vis frequently co-occurring conditions, especially anxiety. Using structured questions about recent life changes or difficulties, we interviewed a sample of individuals with varying levels of depression and anxiety (N = 486), including individuals in a major depressive episode (n = 228) and/or diagnosed with generalized anxiety disorder (n = 273). Interviews were transcribed to provide a natural language sample. Analyses isolated language features associated with gold standard, clinician-rated measures of depression and anxiety. Many language features associated with depression were in fact shared between depression and anxiety. Language markers with relative specificity to depression included I-usage, sadness, and decreased positive emotion, while negations (e.g., “not,” “no”), negative emotion, and several emotional language markers (e.g., anxiety, stress, depression) were relatively specific to anxiety. Several of these results were replicated using a self-report measure designed to disentangle components of depression and anxiety. We next built machine learning models to detect severity of common and specific depression and anxiety using only interview language. Individuals’ speech characteristics during this brief interview predicted their depression and anxiety severity, beyond other clinical and demographic variables. Depression and anxiety have partially distinct patterns of expression in spoken language. Monitoring of depression and anxiety severity via language can augment traditional assessment modalities and aid in early detection.
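The language markers described above (I-usage, sadness words, and so on) are typically operationalized as per-word rates of category membership in a transcript. The sketch below illustrates that general approach; the word lists here are tiny hypothetical examples, not the validated lexicons or machine learning pipeline used in the study.

```python
import re
from collections import Counter

# Hypothetical mini-lexicons for illustration only; published work uses
# validated dictionaries (e.g., LIWC categories), not hand-picked lists.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
SADNESS = {"sad", "low", "alone", "hopeless", "empty"}

def language_features(transcript: str) -> dict:
    """Return per-1000-word rates of I-usage and sadness words from a
    transcript (a rough sketch, not the authors' feature pipeline)."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {
        "i_usage_per_1000": 1000 * sum(counts[w] for w in FIRST_PERSON) / total,
        "sadness_per_1000": 1000 * sum(counts[w] for w in SADNESS) / total,
    }

feats = language_features("I feel low and alone, and my days feel empty.")
print(feats)  # {'i_usage_per_1000': 200.0, 'sadness_per_1000': 300.0}
```

Rates like these can then serve as predictors in regression or machine learning models of clinician-rated symptom severity, alongside clinical and demographic covariates.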