2020
DOI: 10.48550/arxiv.2005.03210
Preprint

Shared Autonomy with Learned Latent Actions

Abstract: Assistive robots enable people with disabilities to conduct everyday tasks on their own. However, these tasks can be complex, containing both coarse reaching motions and fine-grained manipulation. For example, when eating, not only does one need to move to the correct food item, but they must also precisely manipulate the food in different ways (e.g., cutting, stabbing, scooping). Shared autonomy methods make robot teleoperation safer and more precise by arbitrating user inputs with robot controls. However, th…

Cited by 3 publications (3 citation statements)
References 15 publications
“…In order for EVLP agents to perform complex tasks that require dynamic interaction with other agents, the underlying simulation environments and task structures must support the desired interaction settings. Primarily, environments must enable the representation of others' anticipated actions, mental state, and previous behaviour, which has been shown to be critical in related areas, such as social navigation (Tsai and Oh, 2020; Vemula et al., 2018; Mavrogiannis et al., 2021), natural language processing (Fried et al., 2018a, 2021; Zhu et al., 2021a), and human-machine interaction (Newman et al., 2018; Jeon et al., 2020; Charalampous et al., 2017). We highlight these related fields as inspiration for this new direction in EVLP research.…”
Section: Social Interaction (mentioning)
confidence: 95%
“…A series of works studies this problem in Atari games [48, 52, 8, 31]. In addition, human-AI shared control systems have been built for autonomous vehicles [19], robotic arms [27], and multi-agent settings [31]. The reported results on various tasks reflect the effectiveness and efficiency of training and coordination with human-assistive AI.…”
Section: Related Work (mentioning)
confidence: 99%
“…While the method proposed in [2] effectively learns a low-dimensional latent action space and provides a low-dimensional control interface (e.g., a 1-DoF latent action), it excessively restricts the user's freedom: participants did not clearly prefer the proposed method over conventional teleoperation due to the limited freedom of control. In [3], Jeon et al. have focused [Fig. 1: An overview of the proposed teleoperation framework, where the blue colored boxes and arrow indicate the user's command.] …”
Section: Related Work, A. Shared Autonomy With Latent Models (mentioning)
confidence: 99%