2021
DOI: 10.1155/2021/6964875

Multi-UAV Collaborative Path Planning Method Based on Attention Mechanism

Abstract: To address the difficulty that traditional heuristic algorithms have in promptly extracting an empirical model from large-sample terrain data, a multi-UAV collaborative path planning method based on attention reinforcement learning is proposed. The method jointly considers influencing factors such as survival probability, path length, load balancing, and endurance constraints, and serves as a support system for multi-UAV collaborative optimization. The attention neural network is used to…
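The abstract is truncated above, but the core scoring step it describes, an attention network that weighs candidate targets for each UAV, can be illustrated with a minimal sketch. Everything below (function names, feature shapes, the greedy selection at the end) is an illustrative assumption, not the paper's actual network:

```python
# Minimal single-head dot-product attention scorer for multi-UAV target
# selection. Illustrative sketch only; shapes and projections are assumed.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_scores(uav_states, target_feats, W_q, W_k):
    """Return an (n_uav, n_target) matrix of attention weights.

    uav_states:   (n_uav, d_uav)     per-UAV state features
    target_feats: (n_target, d_tgt)  candidate target/waypoint features
    W_q, W_k:     learned projections into a shared d_model space
    """
    q = uav_states @ W_q                       # (n_uav, d_model)
    k = target_feats @ W_k                     # (n_target, d_model)
    logits = q @ k.T / np.sqrt(q.shape[-1])    # scaled dot-product scores
    return softmax(logits, axis=-1)

# Toy usage: 3 UAVs scoring 5 candidate targets with random features/weights.
rng = np.random.default_rng(0)
weights = attention_scores(rng.normal(size=(3, 4)), rng.normal(size=(5, 6)),
                           rng.normal(size=(4, 8)), rng.normal(size=(6, 8)))
greedy_assignment = weights.argmax(axis=-1)    # one candidate index per UAV
```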

Cited by 8 publications (7 citation statements)
References 13 publications (20 reference statements)
“…Table excerpt (Reference | Challenge | Optimization criteria | Methods | Dimension):
C. Xia and A. Yudi [120] | Trajectory | Dynamic step size, adaptive learning | NN | 3D
G. Sanna et al. [121] | Trajectory | Supervised learning | ANN + A* | 2D
W. Luo et al. [122] | Environment | Multi-agent | Deep-Sarsa | 3D
H. Qie et al. [123] | Environment | STATP | MADDPG | 2D
C. Zhao et al. [124] | Coverage problem | Adaptive, information sharing | DMUCRL | 2D
T. Wang et al. [125] | Environment | Attention network | AM | 2D
Q. Liu et al. [126] | Environment, Trajectory | SSA, B-spline curve | BINN | 3D
L. Wang et al. [127] | MEC | Low complexity | MADDPG | 2D
W. Zhang et al. [128] | Communication | CMDP | cDQN | 3D
H. Bayerlein et al. [129] | Collect data | Dec-POMDP | DDQN | 2D
S. Tianle et al. [130] | Time | Note dynamic clustering | PSO + IA-DRL | 2D…”
Section: Reference Challenge Optimization Criteria Methods Dimension
Mentioning, confidence: 99%
“…Wang et al. [125] proposed a collaborative trajectory planning method for multiple UAVs based on attention reinforcement learning. The method uses a neural network with an attention mechanism (AM) to generate a cooperative UAV reconnaissance strategy and applies a reinforcement algorithm over a large amount of simulation data to optimize the attention network.…”
Section: Reinforce Learning
Mentioning, confidence: 99%
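The snippet above says a reinforcement algorithm is run over a large amount of simulation data to optimize the attention network. A minimal sketch of a generic REINFORCE-style objective with a mean baseline, assuming episode-level log-probabilities and returns are already collected (the names and numbers are illustrative, not the cited training procedure):

```python
# Generic REINFORCE-style surrogate loss with a mean baseline.
# Illustrative sketch; not the cited paper's training code.
import numpy as np

def reinforce_loss(log_probs, returns, baseline=None):
    """log_probs: (batch,) summed log pi(a_t|s_t) per sampled episode
       returns:   (batch,) total reward of each episode
       baseline:  optional value subtracted from returns to reduce variance
    """
    log_probs = np.asarray(log_probs, dtype=float)
    returns = np.asarray(returns, dtype=float)
    if baseline is None:
        baseline = returns.mean()          # simple mean baseline
    advantages = returns - baseline
    # Negate so the objective can be minimized by a standard optimizer;
    # its gradient matches the policy-gradient estimator E[A * grad log pi].
    return -(advantages * log_probs).mean()

# Toy usage with made-up rollout statistics (illustration only).
loss = reinforce_loss(log_probs=[-12.3, -10.8, -11.5],
                      returns=[0.62, 0.71, 0.55])
print(f"surrogate loss: {loss:.4f}")
```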
“…The task of collaborative exploration is split into two core parts: the first is sensor-based, and the second is a path planner that assigns actionable tasks to each agent, including providing reachable, collision-free paths and running multiple simulations in complex location environments. Wang et al. [48] proposed a multi-UAV collaborative path planning method based on reinforcement learning to address the difficulty that traditional heuristic algorithms have in promptly extracting empirical models from large-sample terrain data; the method comprehensively considers influencing factors such as survival probability, path length, load balancing, and endurance constraints. Hao et al. [49] proposed a dynamic fast Q-learning (DFQL) algorithm for the path planning problem of USVs in partially known marine environments, which combines Q-learning with an artificial potential field (APF) to initialize the Q-table and provide the USV with prior knowledge of the environment.…”
Section: Related Work
Mentioning, confidence: 99%
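The DFQL idea attributed to Hao et al. [49] above, using an artificial potential field to initialize the Q-table, could look roughly like the sketch below on a grid world. The grid layout, potential-field constants, and four-action set are all assumptions made for illustration, not the algorithm's exact formulation:

```python
# Rough sketch: seed Q(s, a) with the negative APF potential of the cell
# that action a leads to, giving the learner a goal-directed, obstacle-averse
# prior before any Q-learning updates. Constants and layout are assumed.
import numpy as np

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def apf_potential(shape, goal, obstacles, k_att=1.0, k_rep=50.0, d0=3.0):
    """Attractive potential toward the goal plus repulsive bumps near obstacles."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pot = 0.5 * k_att * ((ys - goal[0]) ** 2 + (xs - goal[1]) ** 2)
    for oy, ox in obstacles:
        d = np.hypot(ys - oy, xs - ox) + 1e-6
        pot += np.where(d < d0, 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2, 0.0)
    return pot

def init_q_table(shape, goal, obstacles):
    """Q[y, x, a] = -potential of the neighbouring cell reached by action a."""
    pot = apf_potential(shape, goal, obstacles)
    h, w = shape
    q = np.zeros((h, w, len(ACTIONS)))
    for a, (dy, dx) in enumerate(ACTIONS):
        ny = np.clip(np.arange(h)[:, None] + dy, 0, h - 1)
        nx = np.clip(np.arange(w)[None, :] + dx, 0, w - 1)
        q[:, :, a] = -pot[ny, nx]
    return q

q_table = init_q_table((20, 20), goal=(18, 18), obstacles=[(10, 10), (5, 12)])
```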
“…The cooperative path planning problem for multiple UAVs in a deterministic environment can be regarded as an NP-hard combinatorial optimization problem with many constraints [10]. In recent years, many scholars have conducted extensive research on how to establish mathematical models for, and solve, this NP-hard combinatorial optimization problem [11][12][13][14][15][16][17][18]. The main mathematical models include the multiple traveling salesman problem (MTSP), mixed-integer linear programming (MILP), and the vehicle routing problem (VRP).…”
Section: Related Work
Mentioning, confidence: 99%
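For context on the models named in the snippet above, a generic MTSP-style formulation of the kind such models build on is sketched below. The notation is illustrative rather than taken from any of [11]-[18], and subtour-elimination constraints are omitted for brevity: x_{ijk} = 1 if UAV k flies the leg from node i to node j, c_{ij} is the leg cost, node 0 is the shared depot, V is the target set, and V_0 = V ∪ {0}.

```latex
\begin{align}
\min_{x} \quad & \sum_{k=1}^{K} \sum_{i \in V_0} \sum_{j \in V_0} c_{ij}\, x_{ijk} \\
\text{s.t.} \quad & \sum_{k=1}^{K} \sum_{i \in V_0,\, i \neq j} x_{ijk} = 1
    && \forall j \in V \quad \text{(each target visited exactly once)} \\
& \sum_{j \in V} x_{0jk} = 1, \qquad \sum_{i \in V} x_{i0k} = 1
    && \forall k \quad \text{(each UAV departs from and returns to the depot)} \\
& \sum_{i \in V_0} x_{ihk} = \sum_{j \in V_0} x_{hjk}
    && \forall h \in V,\ \forall k \quad \text{(route continuity)} \\
& \sum_{i \in V_0} \sum_{j \in V_0} c_{ij}\, x_{ijk} \le L_{\max}
    && \forall k \quad \text{(endurance limit per UAV)} \\
& x_{ijk} \in \{0, 1\}
    && \forall i, j \in V_0,\ \forall k
\end{align}
```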