2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2018.8593725
Game-Theoretic Cooperative Lane Changing Using Data-Driven Models

Cited by 21 publications (14 citation statements) · References 22 publications
“…In autonomous driving, game theory has been applied to the problem of lane changing and merging [18]. Deep reinforcement learning was used to solve two-player merging games in [17], which relied on defining drivers as proactive or passive. Roles for two players were also used in [32], which modelled lane changing as a Stackelberg game that has implicit leaders and followers.…”
Section: B. Game Theory for Robotics
confidence: 99%
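The leader/follower structure mentioned in the statement above can be illustrated on a toy bimatrix game. All payoff values and action labels below are hypothetical, invented for illustration, and are not taken from the cited work:

```python
import numpy as np

# Hypothetical lane-change encounter:
# rows = leader's actions (merge, yield), cols = follower's actions (accelerate, brake).
leader_payoff = np.array([[2.0, 4.0],
                          [1.0, 3.0]])
follower_payoff = np.array([[1.0, 3.0],
                            [2.0, 0.0]])

def stackelberg(leader_payoff, follower_payoff):
    """Pure-strategy Stackelberg solution: the leader commits to the action
    that maximizes its payoff, anticipating the follower's best response."""
    best_value, best_pair = -np.inf, None
    for a in range(leader_payoff.shape[0]):
        b = int(np.argmax(follower_payoff[a]))  # follower best-responds to a
        if leader_payoff[a, b] > best_value:
            best_value, best_pair = leader_payoff[a, b], (a, b)
    return best_pair, best_value

pair, value = stackelberg(leader_payoff, follower_payoff)
# With these payoffs the leader merges and the follower brakes: pair == (0, 1)
```

The follower's best response is assumed unique here; with ties, a tie-breaking rule (optimistic or pessimistic Stackelberg) would be needed.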
“…Recently, game theory has attracted increasing attention in coordinating robots without communication [16]. Game theoretic methods have been developed to identify actions in the context of their effect on teammates and adversaries for two-player applications such as driving [17], [18], or racing [19], as well as larger games in known environments for surveillance [20]. However, they have not been previously utilized to address the problem of navigating a variable size multi-robot system through a complex environment to a goal position.…”
Section: Introduction
confidence: 99%
“…However, there is a promising line of research with game-theoretic RL: policy updates can be based on stochastic game equilibria, with the goal of improving state-value function estimates and reducing learning instability. This has been demonstrated as an effective solution that incorporates agent-agent interaction in multi-agent RL [18]- [20], and these approaches are seeing increasing interest across different communities [21]- [23].…”
Section: Related Work
confidence: 99%
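One way equilibrium values enter a multi-agent RL update, as the statement above describes, is sketched below in a minimax-Q-style rule for a zero-sum stochastic game. This is a simplified illustration, not the method of any cited paper: the full algorithm solves the stage matrix game exactly (via a linear program), whereas this sketch approximates the next state's value by its maximin over pure strategies.

```python
import numpy as np

def minimax_q_update(Q, s, a, o, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference update for a zero-sum stochastic game.
    Q has shape (states, my_actions, opponent_actions). The next-state
    value is the maximin of its payoff matrix over pure strategies
    (a simplification of the exact matrix-game value)."""
    v_next = Q[s_next].min(axis=1).max()       # maximin value of next state
    Q[s, a, o] += alpha * (r + gamma * v_next - Q[s, a, o])
    return Q

Q = np.zeros((2, 2, 2))                        # 2 states, 2 actions per player
minimax_q_update(Q, s=0, a=0, o=0, r=1.0, s_next=1)
```

Basing the bootstrap target on a game value rather than a single-agent max is what ties the state-value estimate to agent-agent interaction, which is the stabilizing idea the quoted passage refers to.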
“…Aside from the above examples, the existing literature on game-theoretic motion planning and modeling for multiple competing agents is typically either restricted to discrete state or action spaces [6]- [8] or specific to pursuit-evasion-style games. In [9], a hierarchical reasoning game theory approach is used for interactive driver modeling; [10] predicts the motion of vehicles in a model-based, intention-aware framework using an extensive-form game formulation; [11], [12] consider Nash and Stackelberg equilibria in a two-car racing game where the action spaces of both players are finite and the game is formulated in bimatrix form.…”
Section: Related Work
confidence: 99%
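For the finite-action bimatrix setting the statement above describes, pure-strategy Nash equilibria can be enumerated directly by checking mutual best responses. The payoff matrices below are hypothetical stand-ins for a two-car racing game, not values from the cited work:

```python
import numpy as np

# Hypothetical bimatrix game: rows = car 1's actions, cols = car 2's actions.
A = np.array([[3.0, 1.0],
              [2.0, 2.0]])   # car 1 payoffs
B = np.array([[2.0, 1.0],
              [3.0, 2.0]])   # car 2 payoffs

def pure_nash(A, B):
    """Enumerate pure-strategy Nash equilibria: cells (i, j) where i is a
    best response to j (column-wise in A) and j is a best response to i
    (row-wise in B)."""
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs
```

Exhaustive enumeration is exponential in the number of players but trivial for the two-player, few-action games considered in this line of work; mixed equilibria would require support enumeration or an LP/LCP solver instead.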