2021
DOI: 10.1016/j.artmed.2021.102061
A machine learning perspective on the emotional content of Parkinsonian speech

Cited by 20 publications (29 citation statements) | References 35 publications
“…different languages or recording conditions, and combine their knowledge at inference time, thereby improving generalizability 26. In a previous study, using different SER corpora for external validation, the MoE model outperformed all constituent models as well as a single model trained on the data pooled from all the different corpora 20. Further details on feature extraction, training, and validation of the SER model are provided in the Supplementary Material; the following sections in Materials and Methods therefore relate to the depression corpus.…”
Section: Methods
confidence: 85%
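The excerpt above describes a mixture-of-experts (MoE) setup in which corpus-specific SER models are combined at inference time. As a rough illustration only, the sketch below averages the class-probability outputs of several hypothetical experts with optional per-expert weights; the emotion label set, the expert outputs, and the weighting scheme are all assumptions, not the cited paper's actual combination mechanism.

```python
import numpy as np

# Assumed shared emotion label set (illustrative only).
EMOTIONS = ["anger", "happiness", "neutral", "sadness"]

def combine_experts(expert_probs, weights=None):
    """Combine per-expert class probabilities for one utterance at inference time.

    expert_probs: list of 1-D arrays (one per corpus-specific expert), each summing to 1.
    weights: optional per-expert weights (e.g., held-out accuracy); defaults to uniform.
    """
    probs = np.vstack(expert_probs)               # (n_experts, n_classes)
    if weights is None:
        weights = np.ones(len(expert_probs))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()             # normalise mixture weights
    mixture = weights @ probs                     # weighted average of the distributions
    return EMOTIONS[int(np.argmax(mixture))], mixture

# Hypothetical outputs of three experts trained on different corpora for the same clip.
p_a = np.array([0.10, 0.60, 0.20, 0.10])
p_b = np.array([0.15, 0.55, 0.20, 0.10])
p_c = np.array([0.30, 0.30, 0.20, 0.20])
label, mixture = combine_experts([p_a, p_b, p_c])
print(label, mixture.round(3))
```

Averaging probabilities rather than hard votes retains each expert's uncertainty; a learned gating network could replace the fixed weights.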
“…The SER model was trained following Sechidis et al. 20 on the public CREMA-D 21, RAVDESS 22, and EMO-DB 23 datasets, all of which consist of recordings of sentences spoken by professional actors portraying different emotions. CREMA-D and RAVDESS contain American English speech, while EMO-DB contains German speech.…”
Section: Methods
confidence: 99%
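Because the three corpora named in the excerpt use different label inventories (EMO-DB, for instance, uses German single-letter codes), pooling them or training corpus-specific experts typically requires mapping labels onto a shared emotion set. The sketch below is a minimal, assumed harmonisation; the inventories follow the public documentation of each dataset, but the subset actually used by the cited SER model is not stated in the excerpt.

```python
# Hypothetical mapping from corpus-specific labels to a shared emotion set.
LABEL_MAP = {
    "CREMA-D": {"ANG": "anger", "DIS": "disgust", "FEA": "fear",
                "HAP": "happiness", "NEU": "neutral", "SAD": "sadness"},
    "RAVDESS": {"angry": "anger", "disgust": "disgust", "fearful": "fear",
                "happy": "happiness", "neutral": "neutral", "sad": "sadness"},
    "EMO-DB":  {"W": "anger", "E": "disgust", "A": "fear",
                "F": "happiness", "N": "neutral", "T": "sadness"},
}

def harmonise(corpus, raw_label):
    """Return the shared label, or None for categories outside the common set
    (e.g., RAVDESS 'calm'/'surprised', EMO-DB 'L' for boredom)."""
    return LABEL_MAP.get(corpus, {}).get(raw_label)

print(harmonise("EMO-DB", "W"))      # anger
print(harmonise("RAVDESS", "calm"))  # None
```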
“…When the user finishes recording, the system saves the user's speech input as a speech file. Figure 4 shows the flow chart of speech acquisition [11]. Figure 6 shows the specific flow chart of MFCC feature extraction [13].…”
Section: Design and Implementation of Speech Recognition
confidence: 99%
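The excerpt refers to a flow chart for MFCC feature extraction from the saved speech file. A minimal sketch of that step using librosa is shown below; the sampling rate, the number of coefficients, and the clip-level mean/std summary are assumed choices, not the cited system's implementation.

```python
import numpy as np
import librosa

def extract_mfcc(path, n_mfcc=13, sr=16000):
    """Load a recorded speech file and compute frame-level MFCCs plus a simple
    clip-level summary (per-coefficient mean and standard deviation)."""
    y, sr = librosa.load(path, sr=sr)                        # resample on load
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, n_frames)
    summary = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
    return mfcc, summary

# Usage (the path is a placeholder for the file saved after recording):
# frames, clip_features = extract_mfcc("user_recording.wav")
```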
“…While it is well accepted that autistic individuals often have atypical voices, e.g., sing-songy or monotone intonation and what has been referred to as inappropriate prosody (Baltaxe & Simmons, 1985; McCann & Peppé, 2003; Patel et al., 2019), acoustic investigations of the physical properties of voice underlying such perceptions often present weak or inconsistent findings. A recent meta-analysis identified increased and more variable fundamental frequency, as well as more and longer pauses, as common characteristics (Fusaroli et al., 2017); these differences were, however, small and could only be partially replicated in new samples, across biological sexes, and across languages (Fusaroli et al., 2018, 2021). Several ML studies have tried approaches complementary to these piecewise, highly top-down analyses.…”
Section: Introduction
confidence: 99%
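The excerpt singles out fundamental frequency (its level and variability) and pauses as the acoustic measures most consistently reported. As a rough, assumed illustration of how such measures can be computed, the sketch below uses librosa's pYIN pitch tracker and an energy-based silence split; the pitch range and the 30 dB threshold are arbitrary choices, and this is not the pipeline used in the cited meta-analysis or replication studies.

```python
import numpy as np
import librosa

def prosodic_features(path, top_db=30):
    """Compute mean/variability of F0 and simple pause statistics for one recording."""
    y, sr = librosa.load(path, sr=16000)
    # F0 contour via probabilistic YIN; unvoiced frames are returned as NaN.
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]
    # Speech/pause segmentation from an energy threshold (top_db is an assumed value).
    speech = librosa.effects.split(y, top_db=top_db)          # sample-index intervals
    speech_dur = sum(end - start for start, end in speech) / sr
    total_dur = len(y) / sr
    return {
        "f0_mean_hz": float(f0.mean()) if f0.size else float("nan"),
        "f0_sd_hz": float(f0.std()) if f0.size else float("nan"),
        "n_pauses": max(len(speech) - 1, 0),
        "pause_time_s": total_dur - speech_dur,
    }
```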