2017 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2017.7989037

Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning

Abstract: Finding feasible, collision-free paths for multiagent systems can be challenging, particularly in non-communicating scenarios where each agent's intent (e.g., goal) is unobservable to the others. In particular, finding time-efficient paths often requires anticipating interaction with neighboring agents, a process that can be computationally prohibitive. This work presents a decentralized multiagent collision avoidance algorithm based on a novel application of deep reinforcement learning, which eff…
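The key idea, offloading interaction anticipation to an offline learning procedure, implies a simple runtime loop: each agent independently maps its locally observable state to a velocity command, with no goals or plans exchanged. Below is a minimal sketch of such a decentralized loop; the policy callable, observation layout, and kinematics are illustrative assumptions, not the paper's implementation.

```python
# Decentralized, non-communicating execution: every agent queries its own
# copy of a learned policy using only what it can observe about neighbors
# (positions and velocities); other agents' goals remain hidden.
import numpy as np

def step_all(positions, velocities, goals, policy, dt=0.1):
    """Advance all agents one step; each decision uses only local observations."""
    new_vels = []
    for i, (p, g) in enumerate(zip(positions, goals)):
        others = [(positions[j], velocities[j])
                  for j in range(len(positions)) if j != i]
        new_vels.append(policy(p, g, others))
    new_pos = [p + v * dt for p, v in zip(positions, new_vels)]
    return new_pos, new_vels

# Stand-in policy (goal-greedy, unit speed) just to make the loop runnable;
# a trained network would be plugged in here instead.
greedy = lambda p, g, others: (g - p) / (np.linalg.norm(g - p) + 1e-9)
pos = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]
vel = [np.zeros(2), np.zeros(2)]
goals = [np.array([4.0, 0.0]), np.array([0.0, 0.0])]
pos, vel = step_all(pos, vel, goals, greedy)
```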

Cited by 473 publications (393 citation statements) · References 22 publications
“…Current state-of-the-art optimal planners can plan for several hundred agents, and the community is now settling for bounded-suboptimal planners as a potential solution for even larger multi-agent systems [3], [9]. Another common approach is to rely on reactive planners, which do not plan joint paths for all agents before execution, but rather correct individual paths online to avoid collisions [5], [10]. However, such planners often prove inefficient in cluttered factory environments (such as Fig.…”
Section: Introduction (mentioning)
confidence: 99%
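The "reactive planner" idea in the snippet above can be made concrete with a small sketch: follow the individually planned waypoint, but correct the commanded velocity online when a neighbor intrudes on a safety radius. The repulsion rule below is purely illustrative; the cited planners use their own, more principled correction rules.

```python
# Online path correction in the spirit of reactive planning: nominal
# goal-seeking command plus a proximity-scaled repulsion term whenever a
# neighbor is inside the (assumed) safety radius d_safe.
import numpy as np

def reactive_velocity(pos, waypoint, neighbor_positions, v_pref=1.0, d_safe=1.0):
    """Follow the planned waypoint; deflect when a neighbor gets too close."""
    to_wp = waypoint - pos
    v = v_pref * to_wp / (np.linalg.norm(to_wp) + 1e-9)   # follow the plan
    for n in neighbor_positions:
        offset = pos - n
        d = np.linalg.norm(offset)
        if d < d_safe:                                    # too close: push away
            v += (d_safe - d) / d_safe * offset / (d + 1e-9)
    return v

# Example: a neighbor sitting on the planned path deflects the command.
print(reactive_velocity(np.array([0.0, 0.0]), np.array([2.0, 0.0]),
                        [np.array([0.5, 0.1])]))
```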
“…This RL framework applies a reward function, R_col(s^jn, u), to penalize the agent in case of collision and reward it for reaching its goal. Two types of RL algorithm are used in this framework: value-based [22], [15] and policy-based [14] learning. The value-based algorithm assumes that other agents continue at their current velocities until the next step, Δt, so that a policy can be extracted from the value function, V(s^jn_t).…”
Section: A. Collision Avoidance With Deep RL (GA3C-CADRL) (mentioning)
confidence: 99%
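The propagation step described above is what makes the value-based variant usable as a policy: with neighbors assumed to hold their current velocities for one interval Δt, each candidate action can be scored by one-step lookahead against the value function. A minimal sketch follows, in which the reward shaping and the stand-in value function are assumptions for illustration only.

```python
# Policy extraction from a value function: propagate the neighbor at its
# current velocity for one step Δt, then pick the action maximizing
# reward plus discounted value of the propagated state.
import numpy as np

DT, GAMMA, R = 0.25, 0.97, 0.3   # timestep, discount, agent radius (assumed)

def reward(own, goal, other):
    if np.linalg.norm(own - other) < 2 * R:   # collision: penalize
        return -0.25
    if np.linalg.norm(own - goal) < R:        # reached goal: reward
        return 1.0
    return 0.0

def value(own, goal, v_pref=1.0):
    """Stand-in for V(s^jn): discount raised to straight-line time-to-goal."""
    return GAMMA ** (np.linalg.norm(goal - own) / v_pref)

def act(own, goal, other, other_vel, candidates):
    """argmax over candidate velocities of the one-step lookahead score."""
    def score(u):
        own_next = own + u * DT
        other_next = other + other_vel * DT   # constant-velocity assumption
        return reward(own_next, goal, other_next) + GAMMA ** DT * value(own_next, goal)
    return max(candidates, key=score)

# 16 unit-speed headings as the action set; head-on neighbor scenario.
angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
actions = [np.array([np.cos(a), np.sin(a)]) for a in angles]
u = act(np.array([0.0, 0.0]), np.array([4.0, 0.0]),
        np.array([2.0, 0.0]), np.array([-1.0, 0.0]), actions)
```

Policy-based learning sidesteps this propagation entirely by having the network output an action directly, which is the trade-off the quoted passage contrasts.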
“…In addition, a study by Namazi et al. [34] shows that traditional machine-learning-based solutions are not suitable for complex and dynamic environments such as autonomous driving. Leveraging deep learning, especially convolutional neural networks (CNNs), Lv et al. [35] handled collision avoidance by predicting traffic flow, while Chen et al. [10] utilized DRL in a multi-agent setting to avoid collisions. In addition, Cheng et al. [36] formulated an automated enemy-avoidance problem as a Markov decision process and resolved it with temporal-difference reinforcement learning.…”
Section: A. Collision Avoidance (mentioning)
confidence: 99%
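For readers unfamiliar with the temporal-difference learning mentioned for [36], the core update is compact enough to show in full: values are nudged toward a bootstrapped one-step return rather than a complete episode return. The toy chain MDP below is an assumption purely for illustration.

```python
# Tabular TD(0) on a toy 5-state chain: from state s, move forward 1 or 2
# states at random; reward 1 arrives only on reaching the terminal state 4.
import random

ALPHA, GAMMA = 0.1, 0.95
V = [0.0] * 5                 # value table for states 0..4; 4 is terminal

def td0_episode():
    s = 0
    while s != 4:
        s_next = min(s + random.choice([1, 2]), 4)      # random forward move
        r = 1.0 if s_next == 4 else 0.0                 # reward only at goal
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])  # TD(0) update
        s = s_next

for _ in range(2000):
    td0_episode()
print([round(v, 2) for v in V])   # values increase toward the goal state
```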
“…The high computational burden of an optimization-based centralized scheme makes the deployment of the control system on real platforms challenging. On the other hand, Chen et al. [10] developed a decentralized multi-agent collision avoidance algorithm in which two simulated agents navigate toward their own goal positions while learning a value network that encodes the expected time to goal. However, cooperative information among robots is not accounted for in this solution, and the design is not suitable for high-speed scenarios.…”
Section: B. Multi-Agent Collision Avoidance (mentioning)
confidence: 99%
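The phrase "a value network that encodes the expected time to goal" suggests a concrete labeling scheme: states from a successful trajectory receive a regression target equal to a discount factor raised to the remaining time-to-goal. The sketch below assumes this CADRL-style parameterization; the trajectory format and v_pref value are illustrative.

```python
# Generating regression targets for a value network from one successful
# trajectory: states closer in time to the goal get targets closer to 1.
GAMMA = 0.97

def value_targets(state_times, v_pref=1.0):
    """state_times: arrival time of each recorded state; last entry is t_goal."""
    t_goal = state_times[-1]
    return [GAMMA ** ((t_goal - t) * v_pref) for t in state_times]

# A state 3 s from the goal gets a lower target than one 0.5 s away, so the
# regressed network implicitly ranks states by time-to-goal.
print(value_targets([0.0, 1.0, 2.5, 3.0]))
```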