2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2019.00035

Modelling Multi-Channel Emotions Using Facial Expression and Trajectory Cues for Improving Socially-Aware Robot Navigation

Cited by 10 publications (4 citation statements)
References 35 publications
“…Furthermore, Bera et al [26] attempted to classify the personality of each pedestrian in the crowd to differentiate the sizes of personal spaces of individuals. Subsequently, the emotional state of the pedestrians was also inferred and embedded for socially aware navigation [27,241,242].…”
Section: Diversity Context
Confidence: 99%
“…Understanding human impressions of robot performance is important. They can be used to evaluate robot policies [15]- [17] and to create better robot behavior [7], [18]- [20], increasing the likelihood of robot adoption. In this work, we focus on inferring three robot performance dimensions relevant to navigation [12]: robot competence, the surprisingness of robot behavior, and clear intent.…”
Section: Related Work
Confidence: 99%
“…Bera et al [111] proposed an emotion-aware navigation algorithm for social robots which combined emotions learned from facial expressions and walking trajectories using an onboard and an overhead camera respectively. The approach achieved accurate emotion detection and enabled socially conscious robot navigation in low-to-medium-density environments.…”
Section: Inference
Confidence: 99%