2021 | DOI: 10.1109/access.2021.3076530

Motion Planning for Mobile Robots—Focusing on Deep Reinforcement Learning: A Systematic Review

Abstract: Mobile robots have contributed significantly to the intelligent development of human society, and the motion-planning policy is critical for mobile robots. This paper reviews motion-planning methods, especially those involving Deep Reinforcement Learning (DRL), in unstructured environments. The conventional DRL methods are categorized into value-based, policy-based and actor-critic-based algorithms, and the corresponding theories and applications are surveyed. Furthermore, the recently-emerg…
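The abstract's grouping of conventional DRL methods into value-based, policy-based and actor-critic families can be made concrete with a minimal sketch of the value-based case. The toy grid-world sizes and variable names below are illustrative assumptions only; a DRL method would replace the table with a neural-network approximator.

import numpy as np

# Toy value-based RL update (tabular Q-learning); hypothetical grid-world setup.
n_states, n_actions = 16, 4          # assumed small grid world with 4 moves
Q = np.zeros((n_states, n_actions))  # action-value table
alpha, gamma = 0.1, 0.99             # learning rate and discount factor

def q_update(s, a, r, s_next, done):
    # Move Q(s, a) toward the bootstrapped one-step target r + gamma * max_a' Q(s', a').
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

def greedy_action(s):
    # The motion-planning policy is then read off greedily from the learned values.
    return int(np.argmax(Q[s]))

A policy-based or actor-critic method would instead parameterize the policy directly and update it from estimated returns rather than from a value table alone.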

Cited by 62 publications (43 citation statements) | References 105 publications
“…With scenarios containing moving elements, in long-range (global path planning) scenarios, the use of Evolutionary methods is adequate. The latest Artificial Intelligence methods, including the DL and RL methods, still need to be further studied to obtain solid conclusions, as also remarked by Sun et al [21]. Artificial Intelligence methods based on Fuzzy rules or neural networks can be used for fast Local Planning as an alternative to Reactive Manoeuvre methods.…”
Section: Summary and Conclusion
confidence: 96%
“…Faust et al [158] combined RL with the Probabilistic Roadmap Method (PRM), which is one of the algorithms detailed next. For more information about planning algorithms based on RL, refer to the work of Sun et al [21].…”
Section: Soft-computing-based Path Planning Algorithms
confidence: 99%
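For readers unfamiliar with the Probabilistic Roadmap Method mentioned in this excerpt, its construction phase can be sketched in a few lines. The callbacks sample_free and collision_free_edge and the connection radius are hypothetical placeholders, not part of Faust et al.'s method:

import math

def build_prm(n_samples, radius, sample_free, collision_free_edge):
    # Sampling phase: draw collision-free configurations (checking is delegated
    # to the assumed user-supplied sample_free callback).
    nodes = [sample_free() for _ in range(n_samples)]
    edges = []
    # Connection phase: link pairs of nearby samples whose straight segment is free.
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) <= radius and collision_free_edge(nodes[i], nodes[j]):
                edges.append((i, j))
    # Query phase (not shown) runs a graph search such as A* over this roadmap.
    return nodes, edges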
“…A jointly learnable behavior and trajectory planner for self-driving vehicles was introduced in [31]. Unlike the majority of neural planning methods that rely on paths demonstrated by experts, we apply a deep reinforcement learning approach [4], thus conserving both the time and human effort in the training phase.…”
Section: Related Work
confidence: 99%
“…Here machine learning comes to the rescue, as modern methods, like deep neural networks (DNN), make it possible to learn even complicated decision-making policies in constrained state spaces [4]. We have explored this idea in our recent paper [5], where we presented a neural network architecture and a training procedure that allow a local motion planner to learn from its own experience (Fig.…”
Section: Introduction
confidence: 99%
“…The above-mentioned methods improved the disadvantages of DRL in unstructured environments, enhancing the performance of the models and improving training efficiency. The DRL-based tasks were performed by calculating cumulative rewards to obtain an optimal policy model, which would have a better performance when a large amount of high-value training samples were available [26]. However, for the fruit-picking task, there were too few valid samples at the beginning of the training due to the randomness and uncertainty of the target fruit and obstacle locations.…”
Section: Introduction
confidence: 99%
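The “cumulative rewards” objective referred to in this last excerpt is conventionally written as the expected discounted return; in standard notation (assumed here, not taken verbatim from the cited paper):

J(\pi) = \mathbb{E}_{\tau \sim \pi}\!\left[\sum_{t=0}^{T} \gamma^{t}\, r(s_t, a_t)\right], \qquad \pi^{*} = \arg\max_{\pi} J(\pi)

where \tau is a trajectory generated by policy \pi and \gamma \in [0, 1) is the discount factor; the remark about needing many high-value samples reflects that this expectation must be estimated from collected experience.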