2023
DOI: 10.1109/tvt.2022.3218855
A UAV Navigation Approach Based on Deep Reinforcement Learning in Large Cluttered 3D Environments

Cited by 16 publications (6 citation statements)
References 39 publications
“…In recent research, feature-transfer-based DRL methods have demonstrated enhanced perception efficiency and superior navigation performance by using clearer and more informative inputs, such as visual labels or geometric data, for the UAV's state. This is shown in studies [31] and [32], where a UAV can directly determine the relative pose between itself and the target, as well as the collision distance with surrounding obstacles, using pure geometric data. However, precise and ideal geometric data pose considerable challenges for real-world applications, exhibiting a significant reality gap due to the presence of noise and imperfections.…”
Section: B. Feature-Transfer-Based Methods for Agent Navigation
confidence: 99%
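To make the geometric-state idea in the statement above concrete, here is a minimal Python sketch of how a UAV's relative target pose and obstacle ranging distances could be stacked into a DRL state vector. The function name, the choice of features, and the simple yaw-only body-frame rotation are assumptions for illustration, not the cited papers' exact formulation.

```python
# Minimal sketch (assumed names and features): build a geometric state vector
# for a DRL navigation agent from the UAV-to-target relative pose and the
# ranging distances to nearby obstacles.
import numpy as np

def geometric_state(uav_pos, uav_yaw, target_pos, obstacle_ranges):
    """Concatenate the relative target pose (body frame) with obstacle distances."""
    rel = np.asarray(target_pos) - np.asarray(uav_pos)      # world-frame offset
    c, s = np.cos(-uav_yaw), np.sin(-uav_yaw)                # rotate into body frame (yaw only)
    rel_body = np.array([c * rel[0] - s * rel[1],
                         s * rel[0] + c * rel[1],
                         rel[2]])
    heading_err = np.arctan2(rel_body[1], rel_body[0])       # bearing to target
    dist = np.linalg.norm(rel_body)                          # distance to target
    return np.concatenate(([dist, heading_err], np.asarray(obstacle_ranges)))

# Example: 3D positions, yaw in radians, and four ranging returns in metres.
s = geometric_state([0, 0, 1], 0.3, [5, 2, 1.5], [3.2, 4.0, 1.1, 2.7])
print(s.shape)  # (6,) -> [target_dist, heading_error, r1..r4]
```

Such a low-dimensional geometric state is exactly what the quoted statement contrasts with raw sensor streams: it is informative and cheap to learn from, but it presumes measurements that are hard to obtain without noise in the real world.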
“…[Table fragment comparing navigation scenarios (Long Trajectory, Maze, Dynamic Environment) across prior works [18,19], [20]-[24], [27,29,35], [28,38], [30,33,36,37], [41,42,50], and this work.]…”
Section: Efficient
confidence: 99%
“…As a result, in recent years, various DRL-based obstacle-avoidance solutions [27][28][29][30][31][32][33][34][35][36][37][38] have attracted considerable attention due to their real-time, kinodynamic, and energy-efficient features. DRL is a computational approach for learning how to map states to actions so as to obtain an optimized policy [39].…”
Section: Introduction
confidence: 99%
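The quoted definition of DRL as learning a state-to-action mapping can be illustrated with a toy example. The sketch below uses tabular Q-learning on a hypothetical 1-D corridor (the environment, reward, and all names are assumptions for illustration); the deep, continuous UAV case replaces the table with neural networks but keeps the same temporal-difference idea.

```python
# Minimal sketch of the state -> action mapping idea behind DRL: tabular
# Q-learning on a hypothetical 1-D corridor, reward 1 only at the goal cell.
import numpy as np

n_states, n_actions = 10, 2                 # corridor cells; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1          # step size, discount, exploration rate
rng = np.random.default_rng(0)

def step(s, a):
    """Toy dynamics: move one cell left or right; reward 1 on reaching the rightmost cell."""
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, float(s2 == n_states - 1)

for _ in range(300):                        # training episodes
    s = 0
    for _ in range(200):                    # step cap per episode
        # epsilon-greedy action; act randomly while a state's Q row is still all zero
        explore = rng.random() < eps or not Q[s].any()
        a = int(rng.integers(n_actions)) if explore else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # TD update
        s = s2
        if r > 0:
            break

print(Q.argmax(axis=1))   # learned policy: 1 (move right) in every non-terminal cell
```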
“…Although neural-network-based methods may be less explainable, they are still preferred, since they do not require rigorous mathematical proofs or tedious theoretical analyses. In the study of Xue et al. [21], seven ranging sensors were used for perception of the environment, and a reinforcement learning approach based on an actor-critic framework was used to achieve autonomous navigation of the UAV in an unknown environment. Similarly, in Zhang et al.'s study [22], more than seven laser ranging sensors were used to sense the environment, and an improved TD3-based algorithm was used to realize autonomous navigation of a UAV in a multi-obstacle environment.…”
Section: Introduction
confidence: 99%
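As a rough illustration of the actor-critic setup described in the statement above, the following PyTorch sketch pairs an actor that maps a ranging-sensor state to continuous velocity commands with twin critics, as used in TD3. The state and action dimensions, layer sizes, and names are assumptions for illustration, not the architectures from [21] or [22].

```python
# Minimal sketch (assumed architecture): actor-critic networks for continuous
# UAV velocity commands, with the state built from laser-ranging returns plus
# the relative target pose.
import torch
import torch.nn as nn

STATE_DIM = 7 + 2      # e.g. 7 ranging returns + [target distance, heading error]
ACTION_DIM = 2         # e.g. forward speed and yaw rate
MAX_ACTION = 1.0

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Tanh(),   # bounded continuous actions
        )

    def forward(self, state):
        return MAX_ACTION * self.net(state)

class Critic(nn.Module):
    """Q(s, a); TD3 keeps two of these and trains against the smaller target."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

# Forward pass on a dummy batch of sensor states.
actor, critic1, critic2 = Actor(), Critic(), Critic()
s = torch.randn(32, STATE_DIM)
a = actor(s)
q_min = torch.min(critic1(s, a), critic2(s, a))   # clipped double-Q, as in TD3
print(a.shape, q_min.shape)                       # torch.Size([32, 2]) torch.Size([32, 1])
```

In a full TD3 loop, target copies of these networks, policy smoothing noise, and delayed actor updates would be added on top of this skeleton; the sketch only shows how ranging-sensor states flow into bounded actions and twin Q-value estimates.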