2016
DOI: 10.18293/dms2016-030

Towards Formal Multimodal Analysis of Emotions for Affective Computing

Abstract: Social robotics concerns robotic systems and their interaction with humans. Social robots have applications in elderly care, health care, home care, customer service, and reception in industrial settings. Human-Robot Interaction (HRI) requires a better understanding of human emotion. The few existing multimodal fusion systems integrate only a limited amount of facial expression, speech, and gesture analysis. In this paper, we describe the implementation of a semantic-algebra-based formal model that integrates six …
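
The abstract is truncated here, so the semantic algebra itself is not shown. As a rough, hypothetical illustration of the kind of multimodal integration it describes, the sketch below fuses per-modality emotion scores with a weighted average; the emotion labels, weights, and scores are assumptions, not the paper's model.

```python
# Hypothetical sketch of score-level multimodal emotion fusion.
# The weights, labels, and scores below are illustrative assumptions;
# the paper's semantic-algebra model is not reproduced here.

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def fuse_scores(modality_scores, weights):
    """Weighted average of per-modality emotion score vectors.

    modality_scores: {modality: {emotion: score}}
    weights:         {modality: weight}, weights summing to 1.0
    """
    fused = {emotion: 0.0 for emotion in EMOTIONS}
    for modality, scores in modality_scores.items():
        for emotion in EMOTIONS:
            fused[emotion] += weights[modality] * scores.get(emotion, 0.0)
    return fused

# Made-up outputs from three single-modality recognizers.
scores = {
    "face":    {"happiness": 0.7, "surprise": 0.2, "anger": 0.1},
    "speech":  {"happiness": 0.5, "sadness": 0.3, "anger": 0.2},
    "gesture": {"happiness": 0.4, "surprise": 0.4, "anger": 0.2},
}
weights = {"face": 0.5, "speech": 0.3, "gesture": 0.2}

fused = fuse_scores(scores, weights)
print(max(fused, key=fused.get))  # -> "happiness" (0.58)
```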

Cited by 14 publications (6 citation statements) · References 15 publications
“…Several types of modalities were observed in the survey, with some of them being used in several combinations with other input types [50,66,80,83,208–224]. Facial expression images, being the prime modality in affect identification, were found in most DBs.…”
Section: Conclusion and Discussion
confidence: 99%
“…Choosing the most probable result as the final result, the robot responded to the different emotions expressed by the human. Jonathan et al. [30] proposed a robust multimodal interaction framework that enables interaction between rescuers and flying robots by integrating voice and hand-and-arm posture data through late fusion at the decision level.…”
Section: A. Virtual Teaching
confidence: 99%
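
Such decision-level ("late") fusion can be sketched compactly: each modality's recognizer casts a vote with its most probable label and confidence, and the final decision is the label with the greatest accumulated support. This is a generic illustration under assumed inputs, not the cited framework's implementation.

```python
from collections import defaultdict

# Generic decision-level (late) fusion: each modality contributes its
# most probable label with a confidence; the final label is the one
# with the highest accumulated confidence. Illustrative only.

def late_fusion(decisions):
    """decisions: list of (label, confidence) pairs, one per modality."""
    votes = defaultdict(float)
    for label, confidence in decisions:
        votes[label] += confidence
    return max(votes, key=votes.get)

# Made-up outputs, e.g. from voice and arm-posture recognizers.
print(late_fusion([("land", 0.8), ("land", 0.6), ("hover", 0.9)]))  # -> "land"
```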
“…The FACS is based on the simulation of facial muscle movement. An action unit (AU) comprises the segments of muscle involved in a facial expression [6]. Seventeen major AUs are involved in the basic facial expressions, and all facial expressions are determined by the FACS through identification of these AUs.…”
Section: Introduction
confidence: 99%
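
As a concrete illustration of determining an expression from identified AUs, the sketch below matches a detected AU set against prototype AU combinations for the six basic expressions. The prototype table follows common FACS summaries (e.g. AU6+AU12 for happiness), and the Jaccard matching rule is an illustrative assumption rather than the cited system's classifier.

```python
# Simplified mapping from FACS action units to basic expressions.
# The prototype AU combinations follow common FACS summaries; the
# matching rule itself is an illustrative assumption.

PROTOTYPES = {
    "happiness": {6, 12},            # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},         # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1, 2, 5, 26},      # brow raisers, upper lid raiser, jaw drop
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15},            # nose wrinkler + lip corner depressor
}

def classify(detected_aus):
    """Return the expression whose prototype AU set best overlaps
    the detected AUs (Jaccard similarity)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(PROTOTYPES, key=lambda e: jaccard(PROTOTYPES[e], detected_aus))

print(classify({6, 12}))     # -> "happiness"
print(classify({1, 2, 26}))  # -> "surprise"
```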