2020 7th International Conference on Behavioural and Social Computing (BESC)
DOI: 10.1109/besc51023.2020.9348290

Depression and Anxiety Prediction Using Deep Language Models and Transfer Learning

Cited by 12 publications (12 citation statements)
References 26 publications
“…Usage statistics and survey results from this study taken together indicate that utilizing regular voice recordings of users answering questions from a smartphone app to analyze their levels of anxiety and depression is feasible. Ellipsis Health has previously published results of semantic ( Rutowski et al, 2019 , 2020 ) and acoustic ( Harati et al, 2021 ) analysis of speech to detect depression and anxiety using models trained, to the best of our knowledge, with the largest database reported in the literature ( Rutowski et al, 2019 ). We have also previously reported this algorithm performance is maintained (i.e., is portable) when applied to the current study population using long short-term memory (LSTM) models ( Rutowski et al, 2020 ).…”
Section: Discussion
confidence: 99%
“…Ellipsis Health has previously published results of semantic ( Rutowski et al, 2019 , 2020 ) and acoustic ( Harati et al, 2021 ) analysis of speech to detect depression and anxiety using models trained, to the best of our knowledge, with the largest database reported in the literature ( Rutowski et al, 2019 ). We have also previously reported this algorithm performance is maintained (i.e., is portable) when applied to the current study population using long short-term memory (LSTM) models ( Rutowski et al, 2020 ). The newer transformer methodology was also portable (performance was within 10% when comparing AUCs of the original training dataset and this study’s dataset).…”
Section: Discussion
confidence: 99%
“…Second, a recent systematic review found that no tool so far was equipped to evaluate D&A simultaneously. About half of the studies on speech-based mental health assessment focused on depression, but only a small number (5%) examined anxiety, whereas studies of both conditions are rare [ 36 , 38 ]. It is problematic because depression and anxiety often co-occur, and anxiety disorders are highly prevalent among AYA survivors, e.g., health anxiety or fear of reoccurrence.…”
Section: Introduction
confidence: 99%
“…The models that underlie the EH Voice Tool show top performance in comparison to results published in the literature [ 44 ]. Specifically, results for the EH Voice Tool’s performance on PHQ/GAD prediction are currently 0.85/0.84 for area under the curve (AUC; for binary classification with a threshold of 10), 4.25/4.47 for root mean square error (RMSE, a measure of average regression error), and 3.13/3.23 for mean absolute error (MAE, another measure of average regression error) [ 38 , 40 ]. Finally, to the best of our knowledge, the EH Voice Tool is the only speech-based distress screening and monitoring tool that simultaneously generates results for depression and anxiety scores, an essential strategy for reducing fatigue over long-term and repetitive screening.…”
Section: Introduction
confidence: 99%
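
A minimal illustrative sketch (not the authors' code) of how the metrics quoted above are conventionally computed from true and predicted PHQ/GAD scores. The data values below are hypothetical; the AUC treats scores at or above the clinical cutoff of 10 as positive, while RMSE and MAE are computed on the raw scores.

# Illustrative sketch only: conventional computation of AUC (binarized at 10), RMSE, and MAE.
import numpy as np
from sklearn.metrics import roc_auc_score, mean_absolute_error

# Hypothetical example data: true questionnaire totals and model predictions.
y_true = np.array([3, 8, 12, 15, 5, 19, 10, 2])          # e.g., PHQ-9 totals
y_pred = np.array([4.1, 7.2, 13.5, 14.0, 6.3, 17.8, 9.1, 3.0])

# Binary classification at the clinical cutoff of 10 (score >= 10 counts as positive).
labels = (y_true >= 10).astype(int)
auc = roc_auc_score(labels, y_pred)                       # area under the ROC curve

# Regression errors on the raw scores.
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))           # root mean square error
mae = mean_absolute_error(y_true, y_pred)                 # mean absolute error

print(f"AUC={auc:.2f}  RMSE={rmse:.2f}  MAE={mae:.2f}")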