2018 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2018.8461138
Joint Multi-Policy Behavior Estimation and Receding-Horizon Trajectory Planning for Automated Urban Driving

Abstract: When driving in urban environments, an autonomous vehicle must account for the interaction with other traffic participants. It must reason about their future behavior, how its actions affect their future behavior, and potentially consider multiple motion hypotheses. In this paper, we introduce a method for joint behavior estimation and trajectory planning that models interaction and multi-policy decision-making. The method leverages Partially Observable Markov Decision Processes to estimate the behavior of other…
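The abstract only outlines the approach. As a rough illustration of the behavior-estimation side, the sketch below implements a generic Bayes filter over a small discrete set of candidate policies for another vehicle, updating the belief from how well each policy's predicted motion matches the observed motion. The policy set, the Gaussian likelihood, and all numbers are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical discrete policy set for another vehicle; names and dynamics are illustrative.
POLICIES = {
    "yield":      lambda v: max(v - 1.0, 0.0),  # decelerate by 1 m/s per step
    "maintain":   lambda v: v,                  # keep current speed
    "accelerate": lambda v: v + 1.0,            # speed up by 1 m/s per step
}

def update_policy_belief(belief, prev_speed, observed_speed, sigma=0.5):
    """One Bayes-filter step over the discrete policy set.

    Each policy predicts the next speed from the previous one; the observation
    likelihood is a Gaussian centred on that prediction (an assumed noise model).
    """
    posterior = {}
    for name, policy in POLICIES.items():
        predicted = policy(prev_speed)
        likelihood = np.exp(-0.5 * ((observed_speed - predicted) / sigma) ** 2)
        posterior[name] = belief[name] * likelihood
    total = sum(posterior.values())
    if total == 0.0:  # guard against numerical underflow
        return {name: 1.0 / len(POLICIES) for name in POLICIES}
    return {name: p / total for name, p in posterior.items()}

# Usage: the other car slows from 10 m/s to 9.1 m/s, which shifts weight toward "yield".
belief = {name: 1.0 / len(POLICIES) for name in POLICIES}
belief = update_policy_belief(belief, prev_speed=10.0, observed_speed=9.1)
print(belief)
```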

Cited by 38 publications (23 citation statements) | References 15 publications
“…The authors of [128] use Partially Observable Markov Decision Processes (POMDPs) for behavior prediction and nonlinear receding horizon control, or model predictive control, for trajectory planning. The POMDP models the interactions between the ego vehicle and the obstacles.…”
Section: Methods Using Stochastic Techniques
confidence: 99%
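To make the coupling described in this excerpt concrete, a receding-horizon replanning step can be sketched as follows: the predicted motion of the other vehicle (here a single constant-acceleration hypothesis standing in for the POMDP behavior prediction) is rolled out over the horizon, the ego trajectory is optimized against it, and only the first control is executed before re-planning. The cost terms, the longitudinal-only dynamics, and the helper names are assumptions for illustration; this is not the controller of [128].

```python
import itertools
import numpy as np

DT, HORIZON = 0.5, 6            # 3 s receding horizon; illustrative values
ACCELS = (-2.0, 0.0, 2.0)       # discrete ego acceleration choices [m/s^2]

def predict_obstacle(x0, v0, accel, horizon=HORIZON, dt=DT):
    """Roll out the obstacle under one hypothesised policy (constant acceleration)."""
    xs, x, v = [], x0, v0
    for _ in range(horizon):
        v = max(v + accel * dt, 0.0)
        x += v * dt
        xs.append(x)
    return xs

def plan_ego(ego_x, ego_v, obstacle_traj, gap=8.0):
    """Grid search over acceleration sequences; penalise closing below `gap`."""
    best_seq, best_cost = None, np.inf
    for seq in itertools.product(ACCELS, repeat=HORIZON):
        x, v, cost = ego_x, ego_v, 0.0
        for a, obs_x in zip(seq, obstacle_traj):
            v = max(v + a * DT, 0.0)
            x += v * DT
            cost += -0.1 * v                     # reward progress
            if obs_x - x < gap:                  # soft gap / collision penalty
                cost += 100.0 * (gap - (obs_x - x))
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]   # receding horizon: execute only the first action

# One replanning step: obstacle 30 m ahead, hypothesised to be decelerating.
obstacle_traj = predict_obstacle(x0=30.0, v0=10.0, accel=-1.0)
a_cmd = plan_ego(ego_x=0.0, ego_v=12.0, obstacle_traj=obstacle_traj)
print("ego acceleration command:", a_cmd)
```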
“…Yet, solving a POMDP online can become infeasible if the right assumptions on the state, action and observation space are not made. For instance, [25] proposed to use Monte Carlo Tree Search (MCTS) algorithms to obtain an approximate optimal solution online, and [26] improved the interaction modeling by proposing to feed back the vehicle commands into planning. These methods demonstrated promising results but are limited to environments for which they were specifically designed, demand high computational power and can only consider a discrete set of actions.…”
Section: B. Search-based Methods
confidence: 99%
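For context on the search-based approximation mentioned in this excerpt, the snippet below is a deliberately simplified UCT-style Monte Carlo Tree Search over a discrete action set and a toy deterministic model. It omits partial observability and a separate rollout stage, so it is not the POMCP-style solver of [25]; the dynamics, reward, and parameters are purely illustrative assumptions.

```python
import math
import random

ACTIONS = (-1, 0, +1)   # discrete action set, e.g. decelerate / keep / accelerate

def step(state, action):
    """Toy deterministic transition: state is an integer 'headway'; the reward
    favours keeping it near a target of 5. Purely illustrative dynamics."""
    next_state = state + action
    reward = -abs(next_state - 5)
    return next_state, reward

class Node:
    def __init__(self):
        self.visits = 0
        self.value = 0.0
        self.children = {}        # action -> Node

def uct_search(root_state, iterations=2000, depth=8, c=1.4):
    root = Node()
    for _ in range(iterations):
        node, state, path, total = root, root_state, [], 0.0
        # Selection / expansion down to a fixed depth.
        for _ in range(depth):
            if len(node.children) < len(ACTIONS):
                action = random.choice([a for a in ACTIONS if a not in node.children])
                node.children[action] = Node()
            else:
                action = max(
                    ACTIONS,
                    key=lambda a: node.children[a].value / node.children[a].visits
                    + c * math.sqrt(math.log(node.visits) / node.children[a].visits),
                )
            state, reward = step(state, action)
            total += reward
            node = node.children[action]
            path.append(node)
        # Backpropagation of the accumulated return along the visited path.
        root.visits += 1
        for n in path:
            n.visits += 1
            n.value += total
    # Return the most visited root action.
    return max(root.children, key=lambda a: root.children[a].visits)

print(uct_search(root_state=0))
```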
“…Hardy and Campbell [11]: Passive
Bandyopadhyay et al. [20]: Active
Xu et al. [21]: None
Zhan et al. [12]: Passive
Sadigh et al. [15]: None
Galceran et al. [13]: Active
Schmerling et al. [16]: None
Rhinehart et al. [5]: None
Zhou et al. [22]: None
Fisac et al. [14]: Active
Zeng et al. [6]: None
Tang and Salakhutdinov [17]: None
Cui et al. [23]: Passive
Bajcsy et al. [24]: Passive…”
Section: Related Work
confidence: 99%