2017
DOI: 10.1016/j.cmpb.2016.12.005

Recognition of emotions using multimodal physiological signals and an ensemble deep learning model

Cited by 315 publications (121 citation statements); references 43 publications.
“…Deep learning has also been used as part of ensemble methods [169], and in the form of Echo State Networks [170] for dimensionality reduction of handcrafted EEG features. Yin et al [171] used stacked autoencoders (SAEs) to learn high-level representations from various peripheral sensors including skin temperature and blood volume pressure, improving the state-of-the-art by 5%.…”
Section: Learning Temporal Features From Physiological Data
confidence: 99%
“…Feature-level fusion can also be based solely on physiological measurements. Yin et al [171] successfully used a fusion SAE to aggregate handcrafted features from several different sensors. Similarly, Liu et al [175] used handcrafted features derived from EEG and eye tracking as input into a SAE.…”
Section: Learning Joint Features With Physiological Data
confidence: 99%
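The fusion idea these statements describe can be sketched end-to-end: concatenate handcrafted features from several sensors into one vector per sample, then train an autoencoder layer on the fused vector so its hidden code becomes the joint representation. A minimal NumPy sketch with synthetic data and made-up layer sizes — none of this reproduces the cited architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for handcrafted features from three peripheral "sensors"
# (e.g. EEG, skin temperature, blood volume pressure), already normalized.
eeg = rng.normal(size=(200, 16))
temp = rng.normal(size=(200, 4))
bvp = rng.normal(size=(200, 8))

# Feature-level fusion: concatenate per-sample feature vectors.
x = np.concatenate([eeg, temp, bvp], axis=1)   # shape (200, 28)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One autoencoder layer: encode 28 inputs to a 10-dim fused code, decode back.
d_in, d_hid = x.shape[1], 10
w1 = rng.normal(scale=0.5, size=(d_in, d_hid)); b1 = np.zeros(d_hid)
w2 = rng.normal(scale=0.5, size=(d_hid, d_in)); b2 = np.zeros(d_in)

lr, losses = 0.05, []
for _ in range(300):
    h = sigmoid(x @ w1 + b1)              # latent fused representation
    xr = h @ w2 + b2                      # linear reconstruction
    err = xr - x
    losses.append(float(np.mean(np.sum(err ** 2, axis=1))))
    # Full-batch gradient descent on the mean squared reconstruction error.
    g_xr = 2.0 * err / len(x)
    g_w2 = h.T @ g_xr; g_b2 = g_xr.sum(axis=0)
    g_h = (g_xr @ w2.T) * h * (1.0 - h)
    g_w1 = x.T @ g_h; g_b1 = g_h.sum(axis=0)
    w1 -= lr * g_w1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2

print(round(losses[0], 2), round(losses[-1], 2))
```

A deep (stacked) version repeats this layer-by-layer, feeding each layer's code `h` as the next layer's input; the sketch keeps a single layer for brevity.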
“…An auto-encoder (AE) based on Restricted Boltzmann Machine (RBM) is used to fuse the two kinds of signals, and then SVM is applied to recognize emotion in terms of valence. Yin et al [9] also conduct emotion recognition with a single-type EP signal and a single-channel EEG signal. The difference is that the signals are normalized at first and an AE based on fully connected neural networks (FCNNs) is used for feature fusion.…”
Section: Introduction
confidence: 99%
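The two-stage pipeline this statement describes (an autoencoder fuses the signals, then an SVM classifies valence) can be sketched with synthetic latent codes and a linear SVM trained by plain hinge-loss subgradient descent. Everything below — data, dimensions, labels, training schedule — is an illustrative assumption, not the cited authors' actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are fused latent codes produced by an autoencoder, with
# binary valence labels in {-1, +1}; the classes are shifted apart so a
# linear separator exists. All values are made up for illustration.
n, d = 100, 10
y = np.where(rng.random(n) < 0.5, -1.0, 1.0)
z = rng.normal(size=(n, d)) + 1.5 * y[:, None]   # class-dependent shift

# Linear SVM via full-batch subgradient descent on the hinge loss
# (1/n) * sum(max(0, 1 - y * (z @ w + b))) + (lam/2) * ||w||^2.
w, b = np.zeros(d), 0.0
lam, lr = 0.01, 0.1
for _ in range(200):
    margins = y * (z @ w + b)
    coef = np.where(margins < 1, y, 0.0)      # subgradient only for margin violators
    g_w = lam * w - (coef[:, None] * z).mean(axis=0)
    g_b = -coef.mean()
    w -= lr * g_w
    b -= lr * g_b

acc = float(np.mean(np.sign(z @ w + b) == y))
print(acc)
```

With this much class separation the linear SVM separates the toy data easily; the point of the sketch is only the structure of the pipeline (learned features in, margin classifier out), not the numbers.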
“…Zhuang et al. [6]; Liu et al. [8]; Yin et al. [9]; Cheng et al. [27]. Abbreviations: mRMR, minimum redundancy-maximum relevance; EOG, electrooculogram; SVM, support vector machine; GSR, galvanic skin response; FCNN, fully connected neural network.…”
mentioning
confidence: 99%
“…Yin et al. [21] introduced a Multiple-fusion-layer based Ensemble classifier of Stacked AutoEncoders (MESAE) for recognizing emotions from physiological signals. Here, overfitting was avoided by estimating the artificial feature vector.…”
Section: Related Work
confidence: 99%
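The ensemble step that MESAE's name implies can be illustrated with a plain majority vote over base classifiers. The hard-coded predictions below are toy data standing in for per-sub-classifier outputs, not the fusion structure of the actual model:

```python
import numpy as np

# Each row holds one base classifier's binary predictions for four samples.
preds = np.array([
    [0, 1, 1, 0],   # classifier 1
    [0, 1, 0, 0],   # classifier 2
    [1, 1, 1, 0],   # classifier 3
])

# Majority vote per sample (column): label 1 wins when more than half
# of the classifiers predict it.
votes = preds.sum(axis=0)
ensemble = (votes > preds.shape[0] / 2).astype(int)
print(ensemble.tolist())   # → [0, 1, 1, 0]
```

Majority voting is the simplest aggregation rule; weighted or learned fusion layers (as the "multiple fusion layer" in MESAE's name suggests) replace the fixed vote with trainable combination weights.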