2016
DOI: 10.1007/s12652-016-0395-y
Coupled HMM-based multimodal fusion for mood disorder detection through elicited audio–visual signals

Cited by 30 publications (18 citation statements)
References 39 publications
“…However, bimodal emotion recognition can reach an accuracy of 86.85%, an increase of 5% compared with using a single modality of emotion recognition (Song et al 2015; Chuang and Wu 2004; Kessous et al 2010). Furthermore, previous studies have indicated that it is impossible to achieve satisfactory results by recognizing emotions based on a single modality for either speech or facial expression (Ma et al 2019; Yang et al 2017). Accordingly, this study applied a bimodal emotion recognition system using both facial expression recognition (55%) and speech recognition (45%).…”
Section: Methods
confidence: 99%
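The 55%/45% weighting quoted above describes decision-level (late) fusion: each modality produces per-class scores, and the final prediction comes from their weighted sum. A minimal sketch of that scheme — the emotion labels and score vectors below are illustrative placeholders, not values from the cited study:

```python
# Late fusion of two modalities by fixed weighted sum (55% face, 45% speech),
# as described in the citing study. All scores here are made-up placeholders.
W_FACE, W_SPEECH = 0.55, 0.45

def fuse(face_scores, speech_scores):
    """Weighted elementwise sum of per-class scores from the two modalities."""
    return [W_FACE * f + W_SPEECH * s for f, s in zip(face_scores, speech_scores)]

labels = ["happy", "sad", "angry", "neutral"]   # hypothetical class set
face = [0.6, 0.1, 0.1, 0.2]                     # placeholder facial-expression posteriors
speech = [0.3, 0.4, 0.2, 0.1]                   # placeholder speech posteriors

fused = fuse(face, speech)
predicted = labels[max(range(len(fused)), key=fused.__getitem__)]
```

With these placeholder scores the face modality dominates (0.55 · 0.6 + 0.45 · 0.3 = 0.465 for the first class), so the fused prediction follows the facial-expression model here; a stronger speech posterior could flip it.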
“…The reliability study was accomplished by stratified random sampling of 1067 children in Taiwan. Cronbach's α reached 0.98, and the validity study using the Kaiser-Meyer-Olkin (KMO) measure reached 0.951 (Wei 2011). It is a five-level scale comprising four subscales, each composed of ten emotional competencies.…”
Section: ECSYC
confidence: 99%
“…At approximately the same time, McIntyre et al [83] proposed an AU-based approach in the form of Region Units (RUs). Several studies have reported promising results on the application of AUs to automatic depression assessment [84], [85], [86], [87], [88], [89], [90], [91], [92], [93], [94], [95], [96]. Specific facial expressions have also been examined for depression assessment, in terms of frequency of occurrence, variability, and intensity of a specific expression.…”
Section: Nonverbal Signs for Depression Assessment
confidence: 99%
“…The selection criteria used depended greatly on the research question. DSM and HAM-D criteria were used for the detection of depression [82], [90], [110], [111], [113], [132] or for differentiation from Bipolar Disorder [95]. Others had more specific criteria, e.g., patients recovering from Deep Brain Stimulation of the Subcallosal Cingulate Cortex (DBS-SCC) [133], in order to monitor recovery progress.…”
Section: Reported Datasets
confidence: 99%