2016
DOI: 10.1109/tro.2016.2597322

A Framework of Human–Robot Coordination Based on Game Theory and Policy Iteration

Abstract: In this paper, we propose a framework to analyze the interactive behaviors of human and robot in physical interactions. Game theory is employed to describe the system under study, and policy iteration is adopted to provide a solution of the Nash equilibrium. The human's control objective is estimated based on the measured interaction force, and it is used to adapt the robot's objective such that human-robot coordination can be achieved. The validity of the proposed method is verified through a rigorous pr…
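To make the method described in the abstract concrete, below is a minimal sketch of policy iteration seeking a feedback Nash equilibrium of a two-player linear-quadratic game, with the human and robot as the two players. The discrete-time formulation and all matrices, gains, and tolerances are illustrative assumptions, not the paper's actual model (which is continuous-time and estimates the human's objective from the measured interaction force), and this iteration is not guaranteed to converge for every game.

```python
# Hedged sketch: policy iteration for a two-player discrete-time LQ game.
# Player 1 stands in for the human, player 2 for the robot; the plant,
# weights, and initial gains below are assumptions for illustration.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Open-loop-stable plant, so the zero initial gains are admissible.
A = np.array([[0.95, 0.10],
              [0.00, 0.95]])
B1 = np.array([[0.0], [0.1]])             # human input channel (assumed)
B2 = np.array([[0.0], [0.1]])             # robot input channel (assumed)
Q1, R1 = np.diag([10.0, 1.0]), np.eye(1)  # human's (estimated) objective
Q2, R2 = np.diag([1.0, 1.0]), np.eye(1)   # robot's objective

K1 = np.zeros((1, 2))                     # human feedback gain
K2 = np.zeros((1, 2))                     # robot feedback gain

for _ in range(200):
    Acl = A - B1 @ K1 - B2 @ K2
    # Policy evaluation: each player's value matrix P_i solves the
    # discrete Lyapunov equation  P_i = Acl' P_i Acl + Q_i + K_i' R_i K_i.
    P1 = solve_discrete_lyapunov(Acl.T, Q1 + K1.T @ R1 @ K1)
    P2 = solve_discrete_lyapunov(Acl.T, Q2 + K2.T @ R2 @ K2)
    # Policy improvement: each player best-responds to the other's gain
    # (Gauss-Seidel style: player 2 responds to player 1's new gain).
    K1n = np.linalg.solve(R1 + B1.T @ P1 @ B1, B1.T @ P1 @ (A - B2 @ K2))
    K2n = np.linalg.solve(R2 + B2.T @ P2 @ B2, B2.T @ P2 @ (A - B1 @ K1n))
    if max(np.abs(K1n - K1).max(), np.abs(K2n - K2).max()) < 1e-10:
        K1, K2 = K1n, K2n
        break
    K1, K2 = K1n, K2n

print("approximate Nash feedback gains:\n", K1, "\n", K2)
```

At the fixed point, each gain is optimal against the other, which is exactly the feedback Nash condition the abstract refers to; the paper's contribution is obtaining such a solution while adapting the robot's objective to the estimated human objective online.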

Cited by 93 publications (64 citation statements). References 38 publications (54 reference statements).
“…For scenarios where the task has few features (i.e., two to three), global replanning methods have already been implemented online [4,5]: thus, in practice, the time spent replanning may not be significant enough to warrant our alternate approach. Finally, as we demonstrated within the worst-case simulations, shared and optimal controllers that do not learn the correct trajectory are often sufficient when the hypothesis space is largely incorrect [12,20,25,30,31]. Our experimental results should therefore be interpreted as useful trends, demonstrating that LQR+L is beneficial in the right contexts, rather than claiming that this method is better for all cases.…”
Section: Discussion
confidence: 60%
“…Optimal control. Other research studies optimal control strategies for tasks that involve physical human interaction [20,25,30]. In Li et al [25], the authors leverage game theory and optimal control to identify the correct robot behavior: like Medina et al [30], their approach reduces the robot's stiffness when the human applies forces and torques but increases the rendered stiffness when the human does not interact, or when the human's interactions agree with the robot's prediction.…”
Section: Control Strategies for Physical Human Interaction
confidence: 99%
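The stiffness-adaptation behavior described in the quote above can be illustrated with a short sketch. The function name, deadband, bounds, and first-order update rule below are assumptions for illustration only, not the actual control laws of Li et al. [25] or Medina et al. [30].

```python
# Illustrative sketch only: a variable-stiffness rule that softens the
# robot when the human actively pushes against its prediction and
# stiffens it when the human is passive or agrees with the prediction.
# All names, thresholds, and gains here are assumptions.
def adapt_stiffness(k, f_human, agrees_with_prediction,
                    k_min=50.0, k_max=500.0, rate=0.1, deadband=1.0):
    """One update step for a scalar impedance stiffness k [N/m].

    f_human: measured human interaction force [N]
    agrees_with_prediction: True if the human's input matches the
        robot's predicted motion
    """
    disagreeing = abs(f_human) > deadband and not agrees_with_prediction
    target = k_min if disagreeing else k_max  # soften under disagreement
    return k + rate * (target - k)            # first-order blend toward target
```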
“…To adjust the robot's role to lead or to follow according to the human's intention, game theory was employed for fundamental analysis of human-robot interaction and an adaptation law was developed in [106]. Policy iteration combined with an NN was adopted to provide a rigorous solution to the problem of the system equilibrium in human-robot interaction [107].…”
Section: NN-Based Human-Robot Interaction Control
confidence: 99%
“…The significance of adjustable leader/follower roles for shared control has been emphasized in a recent review in the field of human-robot interaction [14], and there are several works in this direction [15], [16], [17]. Such a human-robot system is formulated as a two-agent system with one leader and one follower.…”
Section: Introduction
confidence: 99%