2021
DOI: 10.48550/arxiv.2108.10634
Preprint

Learning to Arbitrate Human and Robot Control using Disagreement between Sub-Policies

Abstract: In the context of teleoperation, arbitration refers to deciding how to blend between human and autonomous robot commands. We present a reinforcement learning solution that learns an optimal arbitration strategy that allocates more control authority to the human when the robot comes across a decision point in the task. A decision point is where the robot encounters multiple options (sub-policies), such as having multiple paths to get around an obstacle or deciding between two candidate goals. By expressing each…
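The arbitration idea in the abstract can be sketched in a few lines. Note that the exponential blending rule, the `gain` parameter, and the variance-based disagreement measure below are illustrative assumptions for this sketch, not the paper's actual formulation:

```python
import numpy as np

def disagreement(sub_policy_actions):
    """Quantify disagreement as the mean per-dimension variance
    across candidate sub-policy actions (an illustrative choice)."""
    actions = np.asarray(sub_policy_actions, dtype=float)
    return float(actions.var(axis=0).mean())

def arbitrate(human_cmd, robot_cmd, sub_policy_actions, gain=5.0):
    """Blend human and robot commands: the more the sub-policies
    disagree (a decision point), the more authority goes to the
    human. The mapping from disagreement to authority is a
    hypothetical rule, not the learned strategy from the paper."""
    d = disagreement(sub_policy_actions)
    alpha = 1.0 - np.exp(-gain * d)  # human authority in [0, 1)
    return alpha * np.asarray(human_cmd) + (1.0 - alpha) * np.asarray(robot_cmd)

# When sub-policies agree, the blended command tracks the robot;
# when they diverge (e.g. two paths around an obstacle), it
# shifts toward the human command.
blended = arbitrate(human_cmd=[1.0, 0.0],
                    robot_cmd=[0.0, 1.0],
                    sub_policy_actions=[[1.0, 0.0], [-1.0, 0.0]])
```

In the paper itself the arbitration strategy is learned with reinforcement learning rather than hand-coded; this sketch only illustrates the disagreement-triggered handover behavior described in the abstract.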

Cited by 1 publication (1 citation statement)
References 22 publications (29 reference statements)
“…Because these methods are model free, knowledge of environment dynamics is no longer required, allowing one to train a policy that is not limited to a specific model class. Several follow up works have adapted deep RL to a variety of shared autonomy problems [16,40,36,9].…”
Section: Related Work
confidence: 99%