2019
DOI: 10.1186/s12888-019-2300-7
Acoustic differences between healthy and depressed people: a cross-situation study

Abstract: Background Abnormalities in vocal expression during a depressive episode have frequently been reported in people with depression, but less is known about whether these abnormalities exist only in specific situations. In addition, previous studies did not control for the impact of irrelevant demographic variables on voice. Therefore, this study compares vocal differences between depressed and healthy people across multiple situations, with irrelevant variables treated as covariates. …


Cited by 53 publications (41 citation statements); references 44 publications.
“…Speech has been demonstrated to have diagnostic validity for Alzheimer's disease (AD) and mild cognitive impairment (MCI) in studies using machine-learning classification models to differentiate individuals with AD/MCI from healthy individuals based on speech samples [34–41]. Additionally, speech analysis has been shown to detect depression [42–45], schizophrenia [46–49], autism spectrum disorder [50], and Parkinson's disease [51, 52], and can differentiate the subtypes of primary progressive aphasia and frontotemporal dementia [53–55]. Classification models provide diagnostic validity for speech measures and could be used to develop tools for disease screening and diagnosis.…”
Section: Clinical Validation (mentioning)
confidence: 99%
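
The excerpt above refers to machine-learning classifiers trained on speech samples. As a minimal sketch of that general setup (not the cited studies' actual pipelines), the following assumes acoustic features such as pitch, loudness, and jitter statistics have already been extracted into a matrix, and runs scikit-learn's cross-validated logistic regression on random placeholder data:

    # Minimal sketch of a speech-based screening classifier.
    # Assumes a pre-extracted acoustic feature matrix X (n_samples x n_features)
    # and binary labels y (1 = patient, 0 = healthy control).
    # All data below are random placeholders, not real study data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 12))    # placeholder acoustic features
    y = rng.integers(0, 2, size=100)  # placeholder diagnostic labels

    # Standardize features, then fit a regularized linear classifier;
    # cross-validated AUC is a common headline metric in such studies.
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"Mean cross-validated AUC: {auc.mean():.2f}")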
“…As depression has been linked to changes in loudness in past research [30, 55], we calculated associations of depression scores with voice features (r = 0.24, t = 6.6, p < 0.001). However, depression scores were not related to ADHD symptom severity, so we did not include them as a covariate in the main analysis.…”
Section: Results (mentioning)
confidence: 99%
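
The r and p values quoted above come from a simple correlation test. A hedged illustration on hypothetical stand-in data (not the cited study's), using SciPy's pearsonr, which returns the correlation coefficient and its two-sided p-value:

    # Sketch of an association check between depression scores and a
    # voice feature such as loudness, on hypothetical stand-in data.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    depression_scores = rng.normal(50, 10, size=200)             # placeholder scores
    loudness = 0.2 * depression_scores + rng.normal(0, 10, 200)  # placeholder feature

    r, p = pearsonr(depression_scores, loudness)
    print(f"r = {r:.2f}, p = {p:.3g}")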
“…Schizophrenia has been predicted successfully from linguistic features, such as semantic relatedness, in individuals with schizophrenia or those at high risk of developing acute symptoms [42, 61–64]. Studies that have investigated machine-learning-based prediction of ADHD from other biological signals point towards performance similar to ours, whether based on neuropsychological performance [65], EEG measures [55, 66], questionnaires [67], or resting-state fMRI [68–70]. In summary, the findings in this study are broadly comparable to previous research using voice to predict mental disorders or other biological signals to predict ADHD.…”
Section: Discussion (mentioning)
confidence: 99%
“…Multiple studies have used voice characteristics as objective markers to understand and differentiate various mental states and psychiatric disorders [15]. These include investigations of voice in depression that identified many acoustic markers [13, 16, 17]. In another study, researchers were able to classify depressed and healthy speech using deep learning techniques applied to both audio and text features [18].…”
Section: Introduction (mentioning)
confidence: 99%