2019
DOI: 10.1007/978-3-030-35888-4_59

Interactive Robot Learning for Multimodal Emotion Recognition

Abstract: Interaction plays a critical role in skills learning for natural communication. In human-robot interaction (HRI), robots can get feedback during the interaction to improve their social abilities. In this context, we propose an interactive robot learning framework using multimodal data from thermal facial images and human gait data for online emotion recognition. We also propose a new decision-level fusion method for the multimodal classification using a Random Forest (RF) model. Our hybrid online emotion recognition…
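The decision-level fusion described in the abstract can be sketched roughly as follows, assuming two modality-specific Random Forest classifiers (thermal face, gait) whose per-class probabilities are averaged; the feature dimensions, label set, and equal weighting are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch: decision-level (late) fusion of two Random Forests,
    # one per modality. Shapes, labels, and weights are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_samples, n_thermal_feat, n_gait_feat, n_classes = 200, 32, 24, 4

    # Synthetic stand-ins for extracted thermal-face and gait feature vectors.
    X_thermal = rng.normal(size=(n_samples, n_thermal_feat))
    X_gait = rng.normal(size=(n_samples, n_gait_feat))
    y = rng.integers(0, n_classes, size=n_samples)  # emotion labels

    rf_thermal = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_thermal, y)
    rf_gait = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_gait, y)

    # Decision-level fusion: average the per-class probabilities of the two
    # modality-specific models, then pick the most probable emotion class.
    fused_proba = 0.5 * rf_thermal.predict_proba(X_thermal) + 0.5 * rf_gait.predict_proba(X_gait)
    fused_pred = rf_thermal.classes_[fused_proba.argmax(axis=1)]

A weighted average or per-modality confidence score could replace the equal weights; the paper's actual fusion rule is not reproduced in this excerpt.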

Cited by 28 publications (11 citation statements)
References 12 publications (13 reference statements)

“…2) Quantitative Evaluation: We also estimate the generated pose using an Average Position Error (APE) [20], as shown in Eq. 21, where T is the number of time steps and is equal to 126 (footnote: due to Covid-19 and the lockdown, we couldn't use the real robot to run the experiments); M is the number of testing samples and is equal to 960 (30 batches with batch size 32); y_real(m,t) and y_generated(m,t) are the ground truth and the prediction of joint position y of sample m at time step t, respectively.…”
Section: Results and Evaluation (mentioning)
confidence: 99%
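Eq. 21 itself is not reproduced in the excerpt above; a plausible form, assuming APE is the mean absolute error between generated and ground-truth joint positions over all M samples and T time steps, is

    APE = \frac{1}{M\,T} \sum_{m=1}^{M} \sum_{t=1}^{T} \left| y_{\mathrm{real}}(m,t) - y_{\mathrm{generated}}(m,t) \right|,
    \qquad T = 126, \quad M = 960.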
“…They have previously performed well on feedback analysis tasks, for example in the work by Jain et al. (2021), who used random forests to identify multimodal feedback in clips of test participants, and in the work by Soldner et al. (2019), who used random forests to classify, from multimodal cues, whether study participants were lying. Yu and Tapus (2019) used random forests to classify emotions from the combined modalities of thermal vision and body pose, finding that the fused model outperformed either modality in isolation.…”
Section: Random Forest (mentioning)
confidence: 99%
“…An interesting approach was proposed in Yu and Tapus (2019) for multimodal emotion recognition from thermal facial images and gait analysis. Here, interactive robot learning (IRL) was proposed to take advantage of human feedback obtained by the robot during HRI.…”
Section: State of the Art (mentioning)
confidence: 99%