2020
DOI: 10.1016/j.inffus.2019.06.019

A snapshot research and implementation of multimodal information fusion for data-driven emotion recognition

Cited by 136 publications (53 citation statements)
References 61 publications
“…The current research on emotion recognition also focuses on multimodal fusion, the combination of sensor data from several modalities to increase the accuracy of emotion classification [14,21,36]. There are three main types of multimodal fusion method: data-level fusion, feature-level (early) fusion, and decision-level (late) fusion [14].…”
Section: Related Work
confidence: 99%
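The fusion levels named in the statement above can be contrasted in a minimal sketch. This is a hypothetical illustration, not code from the paper: random arrays stand in for two modalities, and `toy_classifier` is a placeholder, not a real model.

```python
import numpy as np

# Hypothetical sketch contrasting feature-level (early) and
# decision-level (late) fusion, with random arrays as two modalities.
rng = np.random.default_rng(0)
audio = rng.normal(size=(10, 8))   # 10 samples, 8 audio features
video = rng.normal(size=(10, 12))  # 10 samples, 12 video features

# Feature-level (early) fusion: concatenate features before classification.
early = np.concatenate([audio, video], axis=1)  # shape (10, 20)

def toy_classifier(x, n_classes=4):
    """Placeholder classifier: softmax over a random linear projection."""
    logits = x @ rng.normal(size=(x.shape[1], n_classes))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Decision-level (late) fusion: classify each modality separately,
# then average the per-class probabilities.
late = (toy_classifier(audio) + toy_classifier(video)) / 2
```

Data-level fusion, the third type mentioned, would combine the raw sensor streams before any feature extraction and is omitted here for brevity.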
“…FER refers to the use of computers to analyze human facial expressions and judge human psychology and emotions through pattern recognition and machine learning algorithms, thereby achieving intelligent human-computer interaction [1]. Traditional FER methods generally include three steps: face detection, feature extraction, and expression recognition [2,3].…”
Section: Introduction
confidence: 99%
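The three-step pipeline named in the statement above can be outlined as follows. Every function here is a hypothetical stand-in (a center crop, row means, and a trivial label lookup), chosen only to make the stage boundaries concrete, not a real FER implementation.

```python
import numpy as np

# Schematic of the traditional FER pipeline: face detection,
# feature extraction, expression recognition. All stages are
# placeholder implementations for illustration only.

def detect_face(image):
    # Stand-in for a face detector: crop the central region.
    h, w = image.shape
    return image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]

def extract_features(face):
    # Stand-in for hand-crafted features (e.g. histograms): row means.
    return face.mean(axis=1)

def recognize_expression(features, labels=("neutral", "happy", "sad")):
    # Stand-in classifier: map a simple statistic to a label.
    return labels[int(features.sum()) % len(labels)]

image = np.ones((32, 32))          # dummy grayscale frame
label = recognize_expression(extract_features(detect_face(image)))
```

In a real system each stage would be a trained component (e.g. a cascade or CNN detector, LBP or HOG features, and an SVM or neural classifier), but the staged structure is the same.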
“…The first and easiest involves training a single model for every source and finally performing a score-level fusion [24]. The second and hardest requires feature-level fusion, which lets the models exploit the intrinsic correlations among different sources [25], but this is typically more difficult because their representations are not always directly compatible. The problem then becomes finding a proper representation of the set of sources that exploits the information they present.…”
Section: Introduction
confidence: 99%
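The two strategies in the statement above can be sketched side by side. All names, dimensions, and the shared-space projection below are assumptions for illustration; the random projections stand in for trained per-source models and learned encoders.

```python
import numpy as np

# Hypothetical sketch: per-source models fused at the score level,
# versus projecting incompatible source representations into a
# common space so one joint model can use feature-level fusion.
rng = np.random.default_rng(1)
src_a = rng.normal(size=(5, 6))    # source A: 6-dim representation
src_b = rng.normal(size=(5, 10))   # source B: 10-dim representation

def scores(x, n_classes=3):
    """Placeholder per-source model: softmax of a random projection."""
    logits = x @ rng.normal(size=(x.shape[1], n_classes))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Strategy 1: score-level fusion of independently trained models.
fused_scores = 0.5 * scores(src_a) + 0.5 * scores(src_b)

# Strategy 2: map both sources into a common 4-dim space, then
# concatenate, so cross-source correlations can be modeled jointly.
proj_a = rng.normal(size=(6, 4))
proj_b = rng.normal(size=(10, 4))
joint = np.concatenate([src_a @ proj_a, src_b @ proj_b], axis=1)
```

Strategy 2 addresses the compatibility problem the quote raises: once both sources live in the same 4-dimensional space, a single downstream model can be trained on the concatenated representation.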