Abstract: Emotion recognition plays many valuable roles in people's lives against the background of artificial intelligence technology. However, most existing emotion recognition methods have poor recognition performance, which prevents their adoption in practical applications. To alleviate this problem, we propose an expression-EEG interaction multi-modal emotion recognition method using a deep autoencoder. First, a decision tree is applied as the objective feature selection method. Then, based on the facial expr…
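The decision-tree feature selection step mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration with synthetic data, not the paper's implementation; the feature dimensions, tree depth, and number of kept features are all assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy stand-in for extracted EEG/expression features: 200 trials x 32 features.
X = rng.normal(size=(200, 32))
# Make labels depend on a few features so selection has something to find.
y = (X[:, 0] + X[:, 5] - X[:, 9] > 0).astype(int)

# Fit a decision tree and rank features by impurity-based importance.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)
importances = tree.feature_importances_

# Keep the k most informative features as the "objective" selection step.
k = 8
selected = np.argsort(importances)[::-1][:k]
X_selected = X[:, selected]
print(X_selected.shape)  # (200, 8)
```

The selected columns would then feed the downstream autoencoder; impurity-based importances are one common criterion, but any tree-derived ranking could play the same role.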
“…At present, this technique has been supplemented to further increase the precision of emotion detection. A multimodal fusion model can achieve emotion detection by integrating physiological signals in various ways [58]. With recent developments in Deep Learning (DL) architectures, deep learning has been applied to multimodal emotion recognition [59].…”
Section: Multimodal Emotion Recognition
confidence: 99%
“…However, when only two physiological signals (EEG, BVP) are considered, the classification accuracy was 71.61%. In [58] and [62], both used CNNs with two physiological signals, but in [62] more than one classification algorithm was used, which had a great impact on the results, with an accuracy of more than 97% on the SEED database. From this, we notice that with more than one algorithm, the results are better in some cases.…”
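The snippet's observation that combining more than one classification algorithm can improve results is the usual motivation for ensembles. A minimal voting-ensemble sketch with scikit-learn on synthetic data (the classifiers and feature dimensions here are illustrative, not those of [62]):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for fused EEG/BVP feature vectors.
X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hard-voting ensemble over three different algorithms: each classifier
# votes on the label and the majority wins.
ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()),
    ("svm", SVC()),
])
ensemble.fit(X_train, y_train)
print(f"ensemble accuracy: {ensemble.score(X_test, y_test):.2f}")
```

Majority voting only helps when the base classifiers make partly uncorrelated errors, which is consistent with the snippet's "better in some cases" caveat.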
New research into human-computer interaction seeks to consider the consumer's emotional status to provide a seamless human-computer interface. This would make it possible for people to survive and be used in widespread fields, including education and medicine. Multiple techniques can be defined through human feelings, including expressions, facial images, physiological signs, and neuroimaging strategies. This paper presents a review of emotional recognition of multimodal signals using deep learning and comparing their applications based on current studies. Multimodal affective computing systems are studied alongside unimodal solutions as they offer higher accuracy of classification. Accuracy varies according to the number of emotions observed, features extracted, classification system and database consistency. Numerous theories on the methodology of emotional detection and recent emotional science address the following topics. This would encourage studies to understand better physiological signals of the current state of the science and its emotional awareness problems.
“…Taking into account that brain regions in the frontal lobe yield high recognition accuracy [28], the 6-channel EEG signals from the forehead and the peripheral physiological signals (PPS) from the remaining channels are used as experimental data. The data are downsampled to 128 Hz, and five bands, including delta (4-8 Hz), theta (8-13 Hz), alpha (13-30 Hz), beta (30-43 Hz), and gamma, within the overall 4-43 Hz range, are filtered out. Due to an error in the first 3 s of each video in the experiment, the first 3 s are removed, and the middle 30 s of the remaining duration are used as experimental data.…”
Section: A. Data Set Settings
confidence: 99%
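The preprocessing pipeline quoted above (downsample to 128 Hz, drop the first 3 s, keep a 30 s window, band-pass filter into frequency bands) can be sketched as follows. The original sampling rate, trial length, and window placement are assumptions for illustration; only the 128 Hz target rate, the 3 s trim, and the 30 s window come from the snippet.

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample

fs_raw, fs = 512, 128          # original (assumed) and target sampling rates
t_total = 60                   # seconds of raw signal per trial (illustrative)

# Toy single-channel signal standing in for one forehead EEG channel.
raw = np.random.default_rng(0).normal(size=fs_raw * t_total)

# 1) Downsample to 128 Hz.
x = resample(raw, fs * t_total)

# 2) Drop the first 3 s, then keep the middle 30 s of what remains.
x = x[3 * fs:]
start = (len(x) - 30 * fs) // 2
x = x[start:start + 30 * fs]

# 3) Band-pass filter into one band (theta shown; repeat per band).
def bandpass(signal, lo, hi, fs, order=4):
    nyq = fs / 2
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    return filtfilt(b, a, signal)

theta = bandpass(x, 4.0, 8.0, fs)
print(theta.shape)  # (3840,) = 30 s at 128 Hz
```

Zero-phase filtering with `filtfilt` avoids introducing phase lag, which matters when bands from different channels are later compared or fused.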
“…Deep learning results show that automatic feature extraction performs better than manual feature extraction, and various deep learning technologies [10], including autoencoders (AEs), convolutional neural networks (CNNs) [11], and recurrent neural networks (RNNs) [12], are widely used in different domains. Among these technologies, CNNs can find robust spatial features in images, RNNs are suitable for extracting the temporal features of video and speech for classification, and AEs are better suited to unsupervised feature learning [13]. In a CNN, each layer contains features that represent important information at its respective level of abstraction of the input data.…”
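To make the AE role concrete, here is a minimal autoencoder trained by gradient descent on synthetic data, written in plain NumPy so it is self-contained. The layer sizes, activation, and learning rate are arbitrary choices for illustration, not from any of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 256 samples of 16-dimensional features, compressed to 4 dims.
X = rng.normal(size=(256, 16))

n_in, n_hid = 16, 4
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hid, n_in))   # decoder weights
lr = 0.01

losses = []
for _ in range(500):
    H = np.tanh(X @ W1)          # encode
    X_hat = H @ W2               # decode (linear output)
    err = X_hat - X              # reconstruction error
    losses.append(float((err ** 2).mean()))
    # Backpropagate the reconstruction loss through decoder and encoder.
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * (1 - H ** 2)   # tanh derivative
    gW1 = X.T @ gH / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

codes = np.tanh(X @ W1)          # learned unsupervised features
print(codes.shape)               # (256, 4)
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

No labels appear anywhere in the loop: the network is supervised only by its own reconstruction error, which is what makes AEs suitable for the unsupervised feature learning the snippet describes.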
“…Using samples from 55 healthy subjects, Pinto et al. used unimodal and multimodal methods to analyse which signals, or combinations of signals, best describe emotional responses. Zhang [29] proposed a multimodal emotion recognition method using deep autoencoders for facial expression and EEG interaction. A decision tree is used as the objective feature selection method.…”
Section: Affective Computing Based On Sensor Network
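The expression-EEG interaction method cited above combines two modalities before classification. A generic feature-level fusion baseline (concatenate per-modality feature vectors, then classify) is sketched below; the feature counts, label rule, and classifier are hypothetical and are not the architecture of [29].

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300

# Hypothetical per-trial features from each modality.
eeg_feats = rng.normal(size=(n, 24))    # e.g. band powers per EEG channel
face_feats = rng.normal(size=(n, 10))   # e.g. facial expression descriptors
# Synthetic labels that depend on one feature from each modality.
y = (eeg_feats[:, 0] + face_feats[:, 0] > 0).astype(int)

# Feature-level fusion: concatenate modality vectors before classification.
fused = np.hstack([eeg_feats, face_feats])
score = cross_val_score(LogisticRegression(max_iter=1000),
                        fused, y, cv=5).mean()
print(f"fused 5-fold accuracy: {score:.2f}")
```

Because the synthetic labels need information from both modalities, a classifier trained on either block alone cannot fully separate them, which is the basic argument for multimodal fusion.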
The purpose of this paper is to improve the efficiency of performance creative choreography (PCC). Our research shows that model integration and data optimization for PCC in complex environments can be realized with a combined architecture of a sensor network (SN) and machine-learning algorithms (MLAs). To explain the process and content of this research, the paper designs a problem-description framework for PCC, which mainly includes the following: (1) a twin sensor network (TSN) architecture based on digital-twin information interaction is proposed, which defines and describes the acquisition method, classification (creative data, rehearsal data, and live data), and the temporal and spatial features of performance data; (2) a mobile computing method based on director semantic annotation (DSA) is proposed as the core computing module of the TSN; (3) a spatial dynamic line (SDL) model and a creative activation mechanism (CAM) based on DSA are proposed to realize fast and efficient PCC of dance within the TSN architecture. Experimental results show that the proposed TSN architecture is reasonable and effective. The SDL model achieved significantly better performance with little increase in time and improved the computability and aesthetics of PCC. New research ideas are proposed to solve the computational problem of PCC in complex environments.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.