2021
DOI: 10.1007/978-3-030-89188-6_12
A Dueling-DDPG Architecture for Mobile Robots Path Planning Based on Laser Range Findings

Cited by 3 publications (4 citation statements)
References 25 publications
“…Based on this research, Jesus et al [ 27 ] applied DDPG to the navigation of mobile robots in a virtual environment. Zhao et al [ 28 ] proposed the D‐DDPG, which integrates a dueling network into the critic network to improve the estimation accuracy of the Q ‐value. Gong et al [ 29 ] and Zhou et al [ 30 ] introduced long short‐term memory (LSTM) into the DDPG to achieve long‐term capability in mapless navigation.…”
Section: Related Studies (mentioning)
confidence: 99%
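The dueling-critic idea attributed to Zhao et al. [28] above — splitting the critic's output into a state value V(s) and a state-action advantage A(s,a) so that Q(s,a) = V(s) + A(s,a) — can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual network: the linear "layers", input dimensions, and variable names (laser features, velocity action) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_critic(state, action, w_v, w_a):
    """Hypothetical dueling critic for DDPG: Q(s, a) = V(s) + A(s, a).

    V depends only on the state; A depends on the state-action pair.
    w_v and w_a stand in for the value and advantage streams of a
    neural network (here reduced to single linear maps for brevity).
    """
    v = state @ w_v                             # state value V(s)
    a = np.concatenate([state, action]) @ w_a   # advantage A(s, a)
    return v + a

state = rng.normal(size=4)   # e.g. laser-range features (assumed dim)
action = rng.normal(size=2)  # e.g. linear/angular velocity (assumed dim)
w_v = rng.normal(size=4)
w_a = rng.normal(size=6)

q = dueling_critic(state, action, w_v, w_a)
print(q)
```

In a real implementation both streams would be multi-layer networks sharing an encoder, and the critic would be trained with the usual DDPG Bellman target; the decomposition is meant to let the value of a state be estimated even when the sampled actions cover it poorly.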
“…The results [25] revealed that the adopted method was effective in complex environments such as small offices and warehouses, which are typical. The results reported in [24,25] demonstrated that designing a dense reward system suitable for each complex environment is often difficult. Another study [26] implemented path planning for mobile robots in various environments using Mixed Noise-LSTM-DDPG (MN-LSTM-DDPG).…”
Section: Introduction (mentioning)
confidence: 97%
“…Another study [25] involved path planning for mobile robots in various environments using a dueling DDPG architecture and a dense reward system. The results [25] revealed that the adopted method was effective in complex environments such as small offices and warehouses, which are typical. The results reported in [24,25] demonstrated that designing a dense reward system suitable for each complex environment is often difficult.…”
Section: Introduction (mentioning)
confidence: 99%
“…Deep learning has apparent advantages in solving complex tasks and processing high-dimensional data, but it has a limitation in capturing dynamic user preferences timely [2]. Reinforcement learning algorithms can simulate complex user-item interaction processes, and have been successfully applied in recommendation systems and path planning [3]. Deep reinforcement learning algorithms combine the benefits of deep learning and reinforcement learning and are usually used to construct high-dimensional and continuous action space models.…”
Section: Introduction (mentioning)
confidence: 99%