2023
DOI: 10.3390/s23042036
A Mapless Local Path Planning Approach Using Deep Reinforcement Learning Framework

Abstract: The key module for autonomous mobile robots is path planning and obstacle avoidance. Global path planning based on known maps has been effectively achieved. Local path planning in unknown dynamic environments is still very challenging due to the lack of detailed environmental information and unpredictability. This paper proposes an end-to-end local path planner n-step dueling double DQN with reward-based ϵ-greedy (RND3QN) based on a deep reinforcement learning framework, which acquires environmental data from …
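The abstract names two ingredients of RND3QN that can be sketched concretely: an n-step return target and an exploration rate tied to reward. The paper's exact reward-to-ϵ mapping is not given in this excerpt, so the bounds and function names below are hypothetical illustrations of the general idea, not the authors' method:

```python
import random

def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """n-step discounted return: r_0 + g*r_1 + ... + g^n * V(s_n)."""
    g = bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def reward_based_epsilon(avg_reward, r_min=-200.0, r_max=200.0,
                         eps_min=0.05, eps_max=1.0):
    """One plausible reward-based schedule (bounds are assumed):
    low recent reward -> explore more, high reward -> exploit more."""
    frac = (avg_reward - r_min) / (r_max - r_min)
    frac = min(max(frac, 0.0), 1.0)
    return eps_max - frac * (eps_max - eps_min)

def select_action(q_values, epsilon, rng=random):
    """Standard epsilon-greedy choice over discrete action Q-values."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

In use, the agent would compute `reward_based_epsilon` from a moving average of episode rewards and feed `n_step_return` as the bootstrapped target for the dueling double-DQN update; the dueling network itself and the double-Q target selection are omitted here.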

Cited by 13 publications (8 citation statements)
References 37 publications
“…Another journal with important contributions is Sensors from MDPI. In recent years, some papers are [38], which introduces a deep Q-network for using LiDAR data to generate discrete actions; [39], which applied a method called Hindsight Experience Replay to mitigate the sparse reward problem; and [40], which presented a technique for calculating the best path in environments with multiple robots. Their main focus was path planning in wider environments, obtaining more reliable probability tables [41].…”
Section: Investigation Tendencies
confidence: 99%
“…These solvers have the advantage of handling large POMDPs [18], but they have the drawback of being slow compared to offline solvers [7]. Offline solvers can arrive at better policies than online solvers for the sampled belief states, but they usually fail to scale up to large POMDPs [19]. Recently there have also been successful attempts to use deep reinforcement learning techniques for solving POMDPs [19].…”
Section: Introduction
confidence: 99%
“…There is an increasing emphasis on autonomous robot path planning as robots are used in more, and increasingly important, applications [1]. In general, robot path planning algorithms are divided into two categories: traditional methods and machine learning methods [2]. Traditional methods may require extensive calculations that make it difficult to meet real-time requirements, or may yield locally optimal solutions that fail to produce accurate paths [3].…”
Section: Introduction
confidence: 99%