2023
DOI: 10.3390/drones7050311
A Hybrid Human-in-the-Loop Deep Reinforcement Learning Method for UAV Motion Planning for Long Trajectories with Unpredictable Obstacles

Abstract: Unmanned Aerial Vehicles (UAVs) can be an important component in the Internet of Things (IoT) ecosystem due to their ability to collect and transmit data from remote and hard-to-reach areas. Ensuring collision-free navigation for these UAVs is crucial in achieving this goal. However, existing UAV collision-avoidance methods face two challenges: conventional path-planning methods are energy-intensive and computationally demanding, while deep reinforcement learning (DRL)-based motion-planning methods are prone t…

Cited by 11 publications (4 citation statements). References 58 publications (108 reference statements).
“…Ref. [36] took advantage of both the traditional collision-avoidance method and the DRL method, enabling long-trajectory planning with unknown obstacles. In [37], UAVs serve in a warehouse for stock inventory, updating real-time paths with image recording.…”
Section: Related Work
confidence: 99%
“…A collision-free, smooth, and dynamically feasible trajectory guarantees flight safety, which is generated by a motion planning module in working scenarios. Motion planning is generally divided into two parts: front-end discrete path search and back-end continuous trajectory optimization, aiming at generating a reference trajectory that satisfies the above three basic conditions for the controller to track [3,4]. Since quadrotor UAVs with small fuselages have limited computational resources of onboard computers, a major research objective is how to fully utilize the limited resources to generate front-end paths and back-end trajectories.…”
Section: Introduction
confidence: 99%
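The two-stage pipeline described above — front-end discrete path search followed by back-end continuous trajectory refinement — can be sketched in Python. This is an illustrative toy, not the cited papers' method: grid A* stands in for the front-end search, and a simple midpoint-pull smoother stands in for the back-end optimizer (a real back end would also enforce dynamic feasibility and collision constraints). The grid, function names, and parameters are assumptions for the example.

```python
import heapq

def astar(grid, start, goal):
    """Front-end discrete search: A* over a 2-D occupancy grid.
    grid[r][c] == 1 marks an obstacle cell; cells are 4-connected."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path-so-far)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no collision-free path exists

def smooth(path, weight=0.3, iters=50):
    """Back-end continuous refinement (toy): pull each interior waypoint
    toward the midpoint of its neighbours to reduce jaggedness.
    A real optimizer would additionally keep the result collision-free
    and dynamically feasible for the quadrotor."""
    pts = [list(map(float, p)) for p in path]
    for _ in range(iters):
        for i in range(1, len(pts) - 1):
            for d in (0, 1):
                mid = (pts[i - 1][d] + pts[i + 1][d]) / 2.0
                pts[i][d] += weight * (mid - pts[i][d])
    return pts

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 3))   # discrete front-end path
traj = smooth(path)                  # smoothed reference for the controller
```

The split matters on resource-limited onboard computers: the cheap discrete search prunes the space, so the expensive continuous refinement only runs on a short waypoint sequence.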
“…UAVs achieve autonomous positioning through the information fusion of an airborne visual sensor and an IMU, and then realize autonomous navigation capabilities such as obstacle-avoidance flight, in part through the airborne autonomous path-planning algorithm. For example, reference [18] realized obstacle avoidance of a UAV in a dynamic environment based on point-cloud images; reference [19] adopted deep reinforcement learning to realize end-to-end obstacle-avoidance decisions for UAVs. The advantage of navigation decision planning based on airborne autonomous sensors and processors is that the real-time requirement can be satisfied.…”
Section: Introduction
confidence: 99%
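The visual/IMU fusion idea in the statement above can be illustrated with a minimal 1-D complementary-style filter: high-rate IMU dead reckoning drifts, and occasional drift-free visual fixes pull the fused estimate back. This is a hypothetical sketch for intuition only — the drift constant, gain, and update rates are invented, and real systems use full visual-inertial odometry rather than this scalar filter.

```python
def complementary_fuse(imu_estimate, visual_fix, gain=0.5):
    """One fusion step: nudge the drifting dead-reckoning estimate
    toward the slower but drift-free visual position fix."""
    return imu_estimate + gain * (visual_fix - imu_estimate)

# Simulated 1-D flight: the IMU integration carries a constant bias,
# so uncorrected dead reckoning drifts away from the true position.
true_pos, imu_pos, fused = 0.0, 0.0, 0.0
bias = 0.02  # hypothetical per-step drift from accelerometer bias
for step in range(200):
    true_pos += 0.1                 # true motion per step
    imu_pos += 0.1 + bias           # pure dead reckoning drifts
    fused += 0.1 + bias             # fused state propagated with same IMU input
    if step % 10 == 0:              # visual fix arrives at a lower rate
        fused = complementary_fuse(fused, true_pos)
```

After 200 steps the uncorrected IMU estimate has drifted by 200 × 0.02 = 4.0 units, while the fused estimate's error stays bounded by the periodic visual corrections — the "real-time" benefit the quoted passage attributes to onboard fusion.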