2022 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra46639.2022.9812079

Using Eye Gaze to Forecast Human Pose in Everyday Pick and Place Actions

Abstract: Collaborative robots that operate alongside humans require the ability to understand their intent and forecast their pose. Among the various indicators of intent, eye gaze is particularly important as it signals action towards the gazed object. By observing a person's gaze, one can effectively predict the object of interest and subsequently forecast the person's pose. We leverage this and present a method that forecasts the human pose using gaze information for everyday pick and place actions in a home environment…

Cited by 1 publication (1 citation statement)
References 27 publications (49 reference statements)
“…These works can also be categorized into deterministic (Martinez, Black, and Romero 2017) or stochastic (Liu et al 2021), using Variational Autoencoders (VAEs) (Kingma and Welling 2013) or Generative Adversarial Networks (GANs) (Goodfellow et al 2014) respectively, with the design choice hinging on whether there is sufficient variation to be learnt by the model. Many recent works incorporate additional context such as scene (Corona et al 2020), eye gaze (Razali and Demiris 2022b;Zheng et al 2022), or object coordinates (Razali and Demiris 2022a).…”
Section: Related Work (citation type: mentioning)
Confidence: 99%