2023
DOI: 10.1007/s10846-023-01819-0
DRL-based Path Planner and its Application in Real Quadrotor with LIDAR

Cited by 5 publications (7 citation statements)
References 17 publications
“…In this section, a UAV simulation training and testing environment is built based on [20] to verify the proposed unified framework. Three sets of experiments are used to independently test the temporal characteristics, spatial characteristics and data denoising included in figure 2, and the results are analyzed to verify the effectiveness and rationality of the proposed framework.…”
Section: Experiments and Discussion
confidence: 99%
“…References [29] and [30] both used the AirSim simulation platform to create highly realistic scenes to train agents, but the effort and cost grow as the fidelity to the actual scene increases. Different from the ideas in the aforementioned literature, [20] trained the agent through dynamic training scenarios to enhance the UAV's reliability across many application scenarios. This method can reduce the influence of distribution-mismatch problems at a lower cost.…”
Section: Related Work
confidence: 99%
“…During interactions, the agent receives immediate rewards, using them to evaluate its actions. In [38], Yang compared the SAC algorithm with deep deterministic policy gradients (DDPG) [39] and twin-delayed deep deterministic (TD3) [40] algorithms in the quadrotor obstacle avoidance task. Simulation results show that SAC outperforms DDPG and TD3 in terms of stability and performance.…”
Section: Soft Actor-Critic
confidence: 99%
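The interaction loop this statement describes — an agent acting, receiving an immediate reward, and using that reward to evaluate its action — can be sketched with a toy one-dimensional environment. This is a hypothetical illustration of the generic agent–environment loop, not code from the cited paper or an SAC/DDPG/TD3 implementation:

```python
class ToyEnv:
    """Toy environment: the agent moves on a line toward a goal at x = 0."""
    def __init__(self):
        self.x = 5.0  # initial state

    def step(self, action):
        self.x += action                # apply the chosen action
        reward = -abs(self.x)           # immediate reward: negative distance to goal
        done = abs(self.x) < 0.1        # episode ends near the goal
        return self.x, reward, done

env = ToyEnv()
total_reward = 0.0
state, done = env.x, False
for _ in range(100):
    action = -0.1 if state > 0 else 0.1   # trivial hand-coded policy toward the goal
    state, reward, done = env.step(action)
    total_reward += reward                # the agent evaluates actions via rewards
    if done:
        break
print(round(state, 2))
```

In an actual DRL setup the hand-coded policy would be replaced by a learned one (e.g. the SAC actor network), trained to maximize the accumulated reward.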
“…Note that in our approach we mainly consider kinematic aspects to control the motion of the UAV. Compared to [38], we incorporate the following point into the state. As mentioned earlier, sampling-based path planning algorithms are well suited for obtaining a predefined path that the DRL-based planner can follow.…”
Section: Network Structure
confidence: 99%