Shared Autonomy via Deep Reinforcement Learning
Preprint, 2018
DOI: 10.48550/arxiv.1802.01744

Cited by 27 publications (37 citation statements). References: 0 publications.

“…Virtual reality [7,32] can help, but the human effort remains considerable for complex tasks, and when many demonstrations are required. Shared autonomy [9,12,14,25] offers a better solution to collecting large-scale data. These works blend robot and user intent using optimization [9], reinforcement learning [25], and learned coarse-to-fine user precision [14], while ours lets the user look far into the future to understand the autonomous prediction.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
“…Shared autonomy [9,12,14,25] offers a better solution to collecting large-scale data. These works blend robot and user intent using optimization [9], reinforcement learning [25], and learned coarse-to-fine user precision [14], while ours lets the user look far into the future to understand the autonomous prediction. A similar forecasting method was proposed by Liu et al [17], but it is used in a behavior cloning loss function, rather than for communicating intent to the user.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
“…In recent years, some model-free RLSC methods have been proposed. The earliest work is [19], where the agent maximizes a combination of task performance and user feedback rewards using Deep-Q learning. In [20], the approach is extended to maximize human-control authority using residual policy learning.…”
Section: B. Reinforcement Learning for Shared Control
Citation type: mentioning
Confidence: 99%
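The deep Q-learning shared control described in [19] lends itself to a short illustration. Below is a minimal sketch, assuming a discrete, ordered action space and an already-trained Q-network: among the actions whose Q-value is within a fraction of the best achievable value in the current state, the agent executes the one closest to the user's input. Function and parameter names (shared_control_action, alpha) and the toy Q-values are illustrative only and are not taken from the cited implementations.

```python
# Minimal sketch of Q-value-based arbitration between user input and an
# autonomous policy. Assumes a discrete, ordered action space (e.g. a
# discretised control axis) and Q-values produced by a pre-trained network.
import numpy as np

def shared_control_action(q_values: np.ndarray, user_action: int, alpha: float = 0.8) -> int:
    """Return the feasible action closest to the user's suggestion.

    An action is 'feasible' if its Q-value lies within a fraction `alpha`
    of the gap between the worst and best Q-values in this state.
    """
    q_best, q_worst = q_values.max(), q_values.min()
    threshold = q_worst + alpha * (q_best - q_worst)
    feasible = np.flatnonzero(q_values >= threshold)   # near-optimal actions
    # Defer to the user as much as possible within the feasible set.
    return int(feasible[np.argmin(np.abs(feasible - user_action))])

# Example: five discrete actions, the user suggests action 0, but only
# actions 2-4 meet the performance threshold, so action 2 is executed.
q = np.array([0.1, 0.3, 0.8, 0.9, 0.85])
print(shared_control_action(q, user_action=0, alpha=0.8))  # -> 2
```

The residual policy learning extension attributed to [20] in the statement above would, on this reading, learn a correction added to the user's command rather than selecting among discrete actions, while keeping the same principle of deferring to the user where task performance allows.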
“…Moreover, the classifiers are trained to predict the labels of all samples in the training set as in full automation. The extensive body of work on human-machine collaboration has predominantly considered settings in which the machine and the human interact with each other [5,16,17,18,19,25,30,34,35,39,42,48,50,51,52,54]. In this context, our work is more closely connected to a line of work that studies switching behavior and switching costs in the context of human-computer interaction [7,20,22,24,26], which we see as complementary.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%