2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia)
DOI: 10.1109/aciiasia.2018.8470381
WT Feature Based Emotion Recognition from Multi-channel Physiological Signals with Decision Fusion

Cited by 11 publications (10 citation statements) | References: 9 publications
“…The literature has shown that the classification performance improves with the simultaneous exploitation of different signal modalities [21], [130]. Modality fusion can be performed at two main levels: feature fusion [23], [131], [132] and classifier fusion [21], [71], [130], [133]. In the former, features are extracted from each modality and later concatenated to form a single feature vector space, to be used as input for the ML model.…”
Section: C: Feature Fusion (mentioning, confidence: 99%)
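The feature-level fusion this statement describes amounts to concatenating per-modality feature matrices along the feature axis before training a single classifier. Below is a minimal sketch of that idea with synthetic data; the ECG/EMG/SCL feature names, dimensions, and the SVM classifier are illustrative assumptions, not details taken from the cited works.

# Sketch of feature-level (early) fusion: per-modality features are
# concatenated into one vector space and fed to a single classifier.
# All data, shapes, and the classifier choice are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 100

ecg = rng.normal(size=(n, 12))   # hypothetical ECG features (e.g., HRV stats)
emg = rng.normal(size=(n, 8))    # hypothetical EMG features (e.g., RMS)
scl = rng.normal(size=(n, 5))    # hypothetical skin-conductance features
y = rng.integers(0, 2, size=n)   # binary emotion labels

# Feature fusion: concatenate along the feature axis into one space.
X = np.concatenate([ecg, emg, scl], axis=1)   # shape (n, 25)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))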
“…proposed a new emotion recognition framework based on multi-channel physiological signals, including electrocardiogram (ECG), electromyogram (EMG), and skin conductance level (SCL), using the BioVid Emo DB dataset, and evaluated a series of feature selection and fusion methods, ultimately achieving 94.81% accuracy on that dataset [9]. In a study using the MAHNOB human-computer interface (MAHNOB-HCI) dataset, Sander Koelstra et al. performed binary classification based on the valence-arousal-dominance emotion model using a fusion of EEG and facial expressions, and found accuracies of 68.5%, 73%, and 68.5% for valence, arousal, and dominance (control), respectively [10].…”
Section: Introduction (mentioning, confidence: 99%)
“…The authors in [100] presented a new emotion recognition framework based on decision fusion, in which three separate classifiers were trained on ECG, EMG, and skin conductance level (SCL). The majority voting principle was then used to derive a final classification result from the outputs of the three separate classifiers.…”
Section: Multimodal Fusion (mentioning, confidence: 99%)
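The decision-level fusion summarized above can be sketched as follows: one classifier is trained per modality (ECG, EMG, SCL), and the final label is the most frequent of the three individual predictions. The data, dimensions, and model choices below are illustrative assumptions, not the cited paper's actual pipeline.

# Sketch of decision-level fusion by majority voting: one classifier
# per modality, final label decided by a vote over the three outputs.
# All data, shapes, and the SVM choice are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_train, n_test = 100, 20

# Hypothetical per-modality feature matrices and binary emotion labels.
modalities = {
    "ecg": rng.normal(size=(n_train + n_test, 12)),
    "emg": rng.normal(size=(n_train + n_test, 8)),
    "scl": rng.normal(size=(n_train + n_test, 5)),
}
y = rng.integers(0, 2, size=n_train + n_test)

# Train one classifier per modality and collect its test-set decisions.
votes = []
for name, X in modalities.items():
    clf = SVC().fit(X[:n_train], y[:n_train])
    votes.append(clf.predict(X[n_train:]))
votes = np.stack(votes)   # shape (3, n_test): one row of votes per modality

# Majority vote across the three per-modality decisions (no ties are
# possible with three voters and two classes).
fused = np.array([np.bincount(sample_votes).argmax() for sample_votes in votes.T])
print("fused accuracy:", np.mean(fused == y[n_train:]))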