2021
DOI: 10.48550/arxiv.2112.05129
Preprint
Assistive Tele-op: Leveraging Transformers to Collect Robotic Task Demonstrations

Abstract: Sharing autonomy between robots and human operators could facilitate data collection of robotic task demonstrations to continuously improve learned models. Yet, the means to communicate intent and reason about the future are disparate between humans and robots. We present Assistive Tele-op, a virtual reality (VR) system for collecting robot task demonstrations that displays an autonomous trajectory forecast to communicate the robot's intent. As the robot moves, the user can switch between autonomous and manual…
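The shared-control loop the abstract describes, where the user can switch between the robot's autonomous forecast and manual VR input, can be sketched minimally as follows. All names here (`SharedController`, `forecast_policy`, the `manual` flag) are illustrative assumptions, not the paper's actual API:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SharedController:
    """Minimal shared-autonomy sketch: the robot follows an autonomous
    trajectory forecast until the operator switches to manual control."""

    # Hypothetical learned forecast: maps a state vector to an action vector.
    forecast_policy: Callable[[List[float]], List[float]]
    manual: bool = False  # operator-toggled mode flag

    def step(self, state: List[float], user_action: List[float]) -> List[float]:
        # In manual mode the operator's VR input drives the robot;
        # otherwise the learned forecast supplies the next action.
        return user_action if self.manual else self.forecast_policy(state)


# Usage with a dummy forecast that nudges each state dimension forward.
ctrl = SharedController(forecast_policy=lambda s: [x + 0.1 for x in s])
auto_action = ctrl.step([0.0, 0.0], user_action=[1.0, 1.0])    # forecast acts
ctrl.manual = True
manual_action = ctrl.step([0.0, 0.0], user_action=[1.0, 1.0])  # operator acts
```

The key design point is that switching modes changes only which source supplies the action, so every executed trajectory, autonomous or manual, can be logged as a demonstration.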

Cited by 1 publication (1 citation statement)
References 20 publications
“…Transformers for Object Manipulation. The success of transformers in vision and NLP has made its way into robot learning [42,43,44,17]. Especially in object manipulation, transformer-based models with an attention mechanism can be utilized to extract features from sensory inputs to improve policy learning [45,46,47,48,49].…”
Section: Related Work
confidence: 99%
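The attention mechanism mentioned in the excerpt above can be illustrated with a minimal NumPy sketch of scaled dot-product self-attention over a toy batch of sensory-input tokens. The shapes and data here are placeholders for illustration, not taken from any cited model:

```python
import numpy as np


def scaled_dot_product_attention(q, k, v):
    """Standard scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    Returns the attended features and the attention weight matrix.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ v, weights


# Toy "sensory inputs": 4 tokens with 8-dim features (random placeholder data).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))

# Self-attention: each token's output feature is a learned-weight-free mix
# of all tokens, weighted by pairwise similarity.
features, attn = scaled_dot_product_attention(tokens, tokens, tokens)
```

In a policy-learning pipeline, such attended features would feed a downstream action head; here the example only shows the feature-extraction step.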