2022
DOI: 10.2514/1.d0296
Toward Conflict Resolution with Deep Multi-Agent Reinforcement Learning

Cited by 13 publications
(9 citation statements)
References 17 publications
“…Such algorithms can be used to improve the quality of resolutions, in addition to solving the present conflicts. For instance, Isufaj et al [35] propose a multi-agent reinforcement learning approach to conflict resolution which considers airspace complexity as one of the factors that the model must optimize in addition to solving conflicts. The indicators proposed in this work could allow for a more granular optimization of complexity by providing more detailed information.…”
Section: Discussion
confidence: 99%
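The statement above notes that airspace complexity can be treated as an additional optimization target alongside conflict resolution. As a hedged illustration only (not the cited model's actual reward), one simple way to combine the two objectives is a weighted reward with a separation-violation penalty and a traffic-density complexity penalty; the function name, weights, and density indicator here are all assumptions for the sketch:

```python
# Hypothetical reward shaping: penalize separation loss plus a simple
# density-based airspace-complexity term. All names and weights are
# illustrative, not taken from the cited paper.
def resolution_reward(separation_nm: float, n_nearby: int,
                      w_conflict: float = 1.0, w_complexity: float = 0.1,
                      min_sep_nm: float = 5.0) -> float:
    """Reward for one agent at one step.

    separation_nm : current distance to the conflicting aircraft (nm)
    n_nearby      : crude complexity proxy: aircraft within some radius
    """
    conflict_pen = -w_conflict if separation_nm < min_sep_nm else 0.0
    complexity_pen = -w_complexity * n_nearby
    return conflict_pen + complexity_pen
```

A richer complexity indicator (e.g., convergence of nearby trajectories) would slot into the same structure by replacing `n_nearby`, which is the kind of granular substitution the statement suggests.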
“…Furthermore, recent works have employed DRL methods, which perform better in multiagent environments and account for uncertainties. In [25], the authors model pairwise conflict resolution as a multiagent reinforcement learning (MARL) problem. They use Multiagent Deep Deterministic Policy Gradient (MADDPG) [26] to train two agents, one per aircraft in a conflict pair, that efficiently solve conflicts in the presence of surrounding traffic through heading and speed changes.…”
Section: Related Work
confidence: 99%
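The pairwise setup described above rests on the standard closest-point-of-approach (CPA) geometry: each agent's heading or speed change alters the relative velocity of the pair, which in turn moves the predicted minimum separation. A minimal sketch of that check, assuming constant velocities and a 5 nm horizontal separation minimum (function names are illustrative):

```python
import math

SEPARATION_NM = 5.0  # horizontal separation minimum (illustrative value)

def velocity(heading_deg: float, speed: float) -> tuple[float, float]:
    """Velocity vector from heading (degrees, 0 = north, clockwise) and speed."""
    h = math.radians(heading_deg)
    return (speed * math.sin(h), speed * math.cos(h))

def cpa(p1, v1, p2, v2) -> tuple[float, float]:
    """Time to and distance at the closest point of approach for two
    aircraft with positions p (nm) and constant velocities v (nm/min).
    Returns (t_cpa, d_cpa); t_cpa is clamped to >= 0 (future only)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    t = 0.0 if dv2 == 0.0 else max(0.0, -(dx * dvx + dy * dvy) / dv2)
    cx, cy = dx + dvx * t, dy + dvy * t
    return t, math.hypot(cx, cy)
```

For a head-on pair 40 nm apart closing at 8 nm/min each, `cpa` predicts loss of separation (`d_cpa` near zero) at 2.5 min; an RL agent's heading or speed action would be evaluated by how far it pushes `d_cpa` above `SEPARATION_NM`.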
“…In [15] the authors combine kernel-based RL with deep MARL to resolve conflicts by applying speed changes in real time, also considering factors such as fuel consumption and airspace congestion. The authors in [26] use Multi-Agent Deep Deterministic Policy Gradient (MADDPG) to resolve conflicts while also considering time, fuel consumption, and airspace complexity. Closer to our approach are methods that consider ATCO preferences, either in a data-driven way as in [27], [28], and [29], or by using rules and procedures derived from human experts as in [30].…”
Section: Related Work
confidence: 99%