2018 International Joint Conference on Neural Networks (IJCNN) 2018
DOI: 10.1109/ijcnn.2018.8489158
Learning Empathy-Driven Emotion Expressions using Affective Modulations

Cited by 25 publications (43 citation statements) · References 22 publications
“…As an example, in the studies by Qureshi et al [ 36 , 73 , 74 ], model performance on a test dataset was evaluated by three volunteers who judged whether the robot’s action was appropriate for the current scenario. In [ 87 ], both annotators and participants rated whether the robot was able to associate facial expressions with the conversation context. The independent annotators’ ratings were higher than the participants’, which, as the authors argued, might be explained by discrepancies between the participants’ actual expressed emotion and the intended emotion.…”
Section: Evaluation Methodologies
confidence: 99%
“…In these cases, DRL can be useful. In fact, several researchers have begun to examine the applicability of DRL in social robotics [ 35 , 36 , 73 , 74 , 85 , 86 , 87 ].…”
Section: Categorization Of RL Approaches In Social Robotics Based
confidence: 99%
“…It has a symmetrical, abstracted, child-like appearance intended to enable intuitive human-robot interaction while avoiding the uncanny-valley effect. Behind the surface of the head, in the eyebrow and mouth areas, a programmable LED display is placed that can show basic emotions in the form of stylized facial expressions [14], [15]. The head features two 2-megapixel sensors with a 70-degree field of vision.…”
Section: Affective Association Modelling
confidence: 99%