2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)
DOI: 10.1109/ssrr.2016.7784308

Learning assistive teleoperation behaviors from demonstration

Abstract: Emergency response in hostile environments often involves remotely operated vehicles (ROVs) that are teleoperated, as interaction with the environment is typically required. Many ROV tasks are common to such scenarios and are often recurrent. We show how a probabilistic approach can be used to learn a task behavior model from data. Such a model can then be used to assist an operator performing the same task in future missions. We show how this approach can capture behaviors (constraints) that are prese…
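The abstract describes learning a probabilistic task behavior model from demonstration data and using it to assist the operator in later missions. Below is a minimal sketch of one way such a model could look: a time-indexed Gaussian mixture model fit to demonstrated trajectories and queried with Gaussian mixture regression (GMR) for a reference position plus a covariance that can serve as a confidence measure. The GMM/GMR formulation, the 2-D toy data, and the helper names (learn_task_model, gmr_predict) are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch: learn a time-indexed GMM from demonstrations, then use GMR
# to predict an assistive reference position and its covariance. This is an
# illustrative stand-in for the paper's probabilistic task model.
import numpy as np
from sklearn.mixture import GaussianMixture

def learn_task_model(demos, n_components=5, seed=0):
    """Fit a GMM over joint (time, position) samples from all demonstrations."""
    data = np.vstack([np.column_stack([np.linspace(0, 1, len(d)), d]) for d in demos])
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=seed).fit(data)

def gmr_predict(gmm, t):
    """Gaussian mixture regression: condition the GMM on time t and return
    the expected position and its covariance (a confidence measure)."""
    mu_t = gmm.means_[:, 0]
    var_t = gmm.covariances_[:, 0, 0]
    # responsibility of each component for the query time
    w = gmm.weights_ * np.exp(-0.5 * (t - mu_t) ** 2 / var_t) / np.sqrt(var_t)
    w /= w.sum()
    means, covs = [], []
    for k in range(gmm.n_components):
        S = gmm.covariances_[k]
        gain = S[1:, 0:1] / S[0, 0]                       # conditional regression gain
        means.append(gmm.means_[k, 1:] + gain[:, 0] * (t - mu_t[k]))
        covs.append(S[1:, 1:] - gain @ S[0:1, 1:])        # conditional covariance
    means = np.array(means)
    mean = (w[:, None] * means).sum(axis=0)
    cov = sum(w[k] * (covs[k] + np.outer(means[k] - mean, means[k] - mean))
              for k in range(gmm.n_components))
    return mean, cov

# toy usage: three noisy demonstrations of the same reaching motion
rng = np.random.default_rng(0)
demos = [np.column_stack([np.linspace(0, 1, 100),
                          np.sin(np.linspace(0, np.pi, 100))])
         + 0.01 * rng.standard_normal((100, 2)) for _ in range(3)]
model = learn_task_model(demos)
reference, confidence_cov = gmr_predict(model, t=0.5)
```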

Cited by 12 publications (14 citation statements). References 12 publications.
“…Recently [8] demonstrated how virtual fixtures can be learned from data and how new fixtures can be added to the system so that it can adapt to new manipulation examples. In our recent work in shared control [9], we showed how a teleoperation system can be designed to rely on learned probabilistic models of manipulation tasks in combination with online operator's input.…”
Section: Motivation and Related Work
confidence: 99%
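The excerpt above describes relying on learned probabilistic models of a task in combination with the operator's online input. A minimal sketch of one possible arbitration rule follows: the model's predicted position and covariance (for example from a regression step such as the GMR sketch above) are blended with the operator's velocity command, with the assistance weight derived from the predicted variance. The variance-based gain and the proportional attractor are illustrative assumptions, not necessarily the scheme used in [9].

```python
# Minimal sketch of shared-control blending: the learned model's prediction is
# combined with the operator's online command, weighted by model confidence.
import numpy as np

def blend_command(u_operator, x_model, cov_model, x_current, gain=1.0):
    """Blend the operator velocity command with a corrective velocity that
    pulls toward the model's predicted position, scaled by model confidence."""
    # scalar confidence: low predicted variance -> high assistance weight
    confidence = 1.0 / (1.0 + gain * np.trace(cov_model))
    u_model = x_model - x_current                 # simple proportional attractor
    return (1.0 - confidence) * np.asarray(u_operator) + confidence * u_model

# example: the operator drifts off the demonstrated path, the model pulls back
u = blend_command(u_operator=[0.10, 0.00],
                  x_model=np.array([0.50, 1.00]),
                  cov_model=np.diag([1e-3, 1e-3]),
                  x_current=np.array([0.48, 0.90]))
```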
“…Recently, methods have been proposed to assist humans in co-manipulation and teleoperation tasks given demonstrated trajectories (Raiola et al., 2015; Havoutis and Calinon, 2016, 2017). Our work contributes to this field by providing a new reinforcement learning algorithm, Pearson-Correlation-Based Relevance Weighted Policy Optimization (PRO), to improve upon demonstrated trajectories when these are suboptimal or when solutions to new situations must be found.…”
Section: Introduction
confidence: 99%
“…A probabilistic method was developed to construct a task model for assistive teleoperation. The performance of teleoperation was improved by the method and demonstrated on remotely operated vehicle (ROV) tasks [26,27]. In order to simplify the complexity of the system and to cope with varying dynamical interaction, Huang et al. [28] developed a hierarchical interactive learning (HIL) algorithm with dynamic movement primitives (DMPs) and locally weighted regression (LWR) to learn task trajectories for an exoskeleton system.…”
Section: Introduction
confidence: 99%
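The excerpt above mentions dynamic movement primitives (DMPs) combined with locally weighted regression (LWR) for learning task trajectories. A minimal one-dimensional DMP sketch follows, with forcing-term weights fit per basis function by weighted least squares in the spirit of LWR; the gains, basis placement, and 1-D setting are illustrative assumptions rather than the formulation of the cited work.

```python
# Minimal 1-D discrete DMP whose forcing term is fit with per-basis weighted
# least squares (LWR-style). Illustrative gains and basis placement.
import numpy as np

class DMP1D:
    def __init__(self, n_basis=20, alpha=25.0, beta=25.0 / 4, alpha_s=4.0):
        self.n, self.alpha, self.beta, self.alpha_s = n_basis, alpha, beta, alpha_s
        self.c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))     # centres in phase space
        self.h = 1.0 / np.diff(self.c, append=self.c[-1] / 2) ** 2  # kernel widths
        self.w = np.zeros(n_basis)

    def _psi(self, s):
        return np.exp(-self.h * (s - self.c) ** 2)

    def fit(self, x, dt):
        """Fit the forcing-term weights from one demonstrated trajectory x(t)."""
        T = len(x)
        self.x0, self.g, self.tau = x[0], x[-1], (T - 1) * dt
        xd, xdd = np.gradient(x, dt), np.gradient(np.gradient(x, dt), dt)
        s = np.exp(-self.alpha_s * np.arange(T) * dt / self.tau)   # canonical phase
        scale = self.g - self.x0 if abs(self.g - self.x0) > 1e-6 else 1.0
        f_target = (self.tau ** 2 * xdd
                    - self.alpha * (self.beta * (self.g - x) - self.tau * xd)) / scale
        for i in range(self.n):
            psi = np.exp(-self.h[i] * (s - self.c[i]) ** 2)
            # weighted least squares per basis function (LWR)
            self.w[i] = np.sum(psi * s * f_target) / (np.sum(psi * s ** 2) + 1e-10)

    def rollout(self, dt, goal=None):
        g = self.g if goal is None else goal
        x, v, s, out = self.x0, 0.0, 1.0, []
        for _ in range(int(self.tau / dt)):
            psi = self._psi(s)
            f = s * (g - self.x0) * (psi @ self.w) / (psi.sum() + 1e-10)
            vdot = (self.alpha * (self.beta * (g - x) - v) + f) / self.tau
            x += v * dt / self.tau
            v += vdot * dt
            s += -self.alpha_s * s * dt / self.tau
            out.append(x)
        return np.array(out)

# reproduce a minimum-jerk-like demonstration, then retarget it to a new goal
t = np.linspace(0, 1, 200)
demo = 10 * t ** 3 - 15 * t ** 4 + 6 * t ** 5
dmp = DMP1D()
dmp.fit(demo, dt=t[1] - t[0])
retargeted = dmp.rollout(dt=t[1] - t[0], goal=2.0)
```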