2020
DOI: 10.1109/access.2020.3021994
Expression-EEG Based Collaborative Multimodal Emotion Recognition Using Deep AutoEncoder

Abstract: Emotion recognition plays many valuable roles in people's lives against the background of artificial intelligence technology. However, most existing emotion recognition methods have poor recognition performance, which prevents their adoption in practical applications. To alleviate this problem, we propose an expression-EEG interaction multimodal emotion recognition method using a deep autoencoder. First, a decision tree is applied as the objective feature selection method. Then, based on the facial expr…


Cited by 76 publications (30 citation statements)
References 38 publications
“…At present, this technique has been supplemented to further increase the precision of emotion detection. A multimodal fusion model can achieve emotion detection by integrating physiological signals in various ways [58]. With recent developments in deep learning (DL) architectures, deep learning has been applied to multimodal emotion recognition [59].…”
Section: Multimodal Emotion Recognition (mentioning)
confidence: 99%
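As a hedged illustration of the feature-level fusion this statement describes, the sketch below concatenates per-modality feature vectors before a shared classifier. The array shapes, synthetic data, and choice of classifier are illustrative assumptions, not details from the cited papers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-modality features; shapes are illustrative assumptions.
rng = np.random.default_rng(0)
eeg_feats = rng.normal(size=(200, 32))    # e.g. band-power features per trial
face_feats = rng.normal(size=(200, 16))   # e.g. expression descriptors per trial
labels = rng.integers(0, 2, size=200)     # binary emotion labels (synthetic)

# Feature-level fusion: concatenate modalities into one vector per trial.
fused = np.concatenate([eeg_feats, face_feats], axis=1)

clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("train accuracy:", clf.score(fused, labels))
```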
“…However, when only two physiological signals (EEG, BVP) are considered, the classification accuracy was 71.61%. Both [58] and [62] used CNNs and considered two physiological signals, but [62] used more than one classification algorithm, which had a great impact on the results, reaching an accuracy of more than 97% on the SEED database. From this, we notice that using more than one algorithm gives better results in some cases.…”
Section: And Table II Above (mentioning)
confidence: 99%
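One common way to combine several classification algorithms, as the statement suggests, is a voting ensemble; here is a hedged sketch using scikit-learn. The estimators, synthetic features, and three-class labels are assumptions for illustration, not the setup of [62].

```python
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in features; real inputs would be EEG/BVP descriptors.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 24))
y = rng.integers(0, 3, size=300)  # three emotion classes, illustrative

# Soft-voting ensemble averaging class probabilities over the algorithms.
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",
)
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```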
“…Taking into account that the brain regions related to the frontal lobe have high recognition accuracy [28], the 6-channel EEG signals of the forehead and the PPS signals of the remaining channels are used as experimental data in the experiment. The data are downsampled to 128 Hz, and five bands, the delta (4-8 Hz), theta (8-13 Hz), alpha (13-30 Hz), beta (30-43 Hz), and gamma (4-43 Hz) bands, are filtered out. Because of the error in the first 3 s of each video in the experiment, the first 3 s are removed, and the middle 30 s of the remaining duration are used as experimental data.…”
Section: A. Data Set Settings (mentioning)
confidence: 99%
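A minimal sketch of the preprocessing the statement describes, assuming scipy-style filtering. The band edges mirror the quote (gamma is omitted here because its quoted 4-43 Hz range spans the whole filtered band); the raw sampling rate, channel count, and array layout are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

FS_RAW, FS_TARGET = 512, 128  # assumed raw rate; target 128 Hz as in the quote
BANDS = {"delta": (4, 8), "theta": (8, 13), "alpha": (13, 30), "beta": (30, 43)}

def bandpass(x, lo, hi, fs):
    # 4th-order Butterworth band-pass, applied forward-backward (zero phase).
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

# Illustrative signal: 6 forehead EEG channels, 60 s at the assumed raw rate.
eeg = np.random.randn(6, 60 * FS_RAW)

eeg = resample_poly(eeg, FS_TARGET, FS_RAW, axis=-1)   # downsample to 128 Hz
eeg = eeg[:, 3 * FS_TARGET:]                           # drop the first 3 s
mid = (eeg.shape[1] - 30 * FS_TARGET) // 2             # middle 30 s segment
eeg = eeg[:, mid:mid + 30 * FS_TARGET]

band_signals = {name: bandpass(eeg, lo, hi, FS_TARGET)
                for name, (lo, hi) in BANDS.items()}
```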
“…Deep learning results show that automatic feature extraction performs better than manual feature extraction, and various deep learning technologies [10], including autoencoders (AEs), convolutional neural networks (CNNs) [11], and recurrent neural networks (RNNs) [12], are widely used in different domains. Among these technologies, CNNs can find robust spatial features in images, RNNs are suitable for extracting the temporal features of video and speech for classification, and AEs are better suited to unsupervised feature learning [13]. In a CNN, each layer contains features that represent important information at its respective level of abstraction of the input data.…”
Section: Introduction (mentioning)
confidence: 99%
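Since the statement singles out AEs for unsupervised feature learning, here is a hedged minimal sketch of a fully connected autoencoder in PyTorch. The layer sizes, input dimension, and training loop are illustrative assumptions, not the architecture of the cited paper.

```python
import torch
from torch import nn

class AutoEncoder(nn.Module):
    """Minimal fully connected autoencoder for unsupervised feature learning."""
    def __init__(self, in_dim=48, code_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 24), nn.ReLU(),
                                     nn.Linear(24, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 24), nn.ReLU(),
                                     nn.Linear(24, in_dim))

    def forward(self, x):
        code = self.encoder(x)           # compressed feature representation
        return self.decoder(code), code

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 48)                  # illustrative feature batch

for _ in range(100):                     # reconstruct the input; no labels used
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad()
    loss.backward()
    opt.step()
```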
“…Using samples from 55 healthy subjects, Pinto et al. used unimodal and multimodal methods to analyse which signals, or combinations of signals, best describe emotional responses. Zhang [29] proposed a multimodal emotion recognition method using deep autoencoders for facial expression and EEG interaction. A decision tree is used as the target feature selection method.…”
Section: Affective Computing Based On Sensor Network (mentioning)
confidence: 99%
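The statement summarizes the cited paper's use of a decision tree for feature selection; below is a hedged sketch of one common way to do this with scikit-learn's impurity-based importances. The threshold, tree depth, and synthetic data are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 40))           # illustrative feature matrix
y = rng.integers(0, 2, size=200)         # illustrative emotion labels

# Fit a decision tree and keep features whose impurity-based importance
# exceeds the mean importance across all features.
selector = SelectFromModel(
    DecisionTreeClassifier(max_depth=5, random_state=0),
    threshold="mean",
).fit(X, y)

X_selected = selector.transform(X)
print("kept", X_selected.shape[1], "of", X.shape[1], "features")
```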