2019
DOI: 10.1007/978-3-030-16272-6_11
Survey on AI-Based Multimodal Methods for Emotion Detection

Abstract: Automatic emotion recognition constitutes one of the great challenges, providing new tools for more objective and quicker diagnosis, communication, and research. Quick and accurate emotion recognition may increase the ability of computers, robots, and integrated environments to recognize human emotions and respond to them according to social rules. The purpose of this paper is to investigate the possibility of automated emotion representation, recognition and prediction, its state-of-the-art and main directi…

Cited by 87 publications (42 citation statements)
References 55 publications
“…For detailed information on the current state of the art from a more generalized perspective, we refer the reader to the surveys [2, 11, 43, 44, 45, 46, 47] and references therein, where a comprehensive review of the latest work on ER using ML and physiological signals can be found, highlighting the main achievements, challenges, take-home messages, and possible future opportunities.…”
Section: State of the Art
Confidence: 99%
“…The robot has to understand the explicit and implicit communicative cues people produce with their body, mostly affective expressions. Although many systems have been proposed to detect emotion from facial expressions [59,60], they often have relatively low accuracy, leading to overly quick shifts in interpretation, and they place demanding requirements on both face resolution and the expression of facial cues [61], which often must be unnaturally exaggerated to be recognized. Work still has to be done on body expressions [62] and on subtle cues, whose detection is also limited by sensor resolution, by learning models that cannot be too complex, and by the situations in which interaction actually occurs, with subjects moving fast in front of the robot and reaching positions out of the camera range.…”
Section: Visual Interaction: Human to Robot
Confidence: 99%
“…Moreover, images of a person are as easy to record as their speech, since a camera records both image and sound, whereas physiological measures are clearly less convenient to record. The accuracy of expression recognition is usually improved when the analysis combines human expressions from multiple modalities together [14]. However, considering only a subset of channels, such as only a person's face or posture, remains an important way to improve knowledge of the whole domain, since multimodal approaches generally combine specific approaches dedicated to a single channel [15].…”
Section: Related Work
Confidence: 99%
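To make the multimodal combination mentioned in the statement above concrete, the following is a minimal late-fusion sketch, not a method from the cited works: it assumes each channel (face, voice, posture) already has its own classifier producing a probability distribution over a shared set of emotion classes, and it simply takes a weighted average of those distributions. The modality names, weights, and emotion labels are illustrative assumptions.

# Minimal late-fusion sketch (illustrative assumptions only): each modality is
# assumed to have its own classifier that outputs a probability distribution
# over a shared set of emotion classes; fusion is a weighted average.
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]  # assumed label set

def late_fusion(modality_probs, weights=None):
    """Combine per-modality emotion probabilities by weighted averaging."""
    if weights is None:
        weights = {m: 1.0 for m in modality_probs}  # equal weighting by default
    total = sum(weights[m] for m in modality_probs)
    fused = sum(weights[m] * p for m, p in modality_probs.items()) / total
    return EMOTIONS[int(np.argmax(fused))]

# Hypothetical outputs of three single-channel classifiers.
probs = {
    "face":    np.array([0.10, 0.70, 0.10, 0.10]),
    "voice":   np.array([0.20, 0.50, 0.20, 0.10]),
    "posture": np.array([0.25, 0.40, 0.20, 0.15]),
}
print(late_fusion(probs))  # -> "happiness"

A single-channel approach corresponds to passing only one entry (e.g. "face") to the same function, which is why the quoted statement notes that channel-specific work still feeds directly into multimodal systems.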