2019 4th Asia-Pacific Conference on Intelligent Robot Systems (ACIRS)
DOI: 10.1109/acirs.2019.8935944
Deep Reinforcement Learning for Mobile Robot Navigation

Cited by 5 publications (6 citation statements) | References 4 publications
“…The robot finally gets the optimal policy to achieve the goal by repeating the learning process. In order to use DRL in an E2E robot navigation context, the whole problem setting must be stated and translated into an RL framework [94]. For instance, to avoid collisions, Long et al. [93] proposed directly mapping raw 2D laser measurements to desired motion commands using a 4-hidden-layer neural network.…”
Section: Deep Reinforcement Learning and End-to-End Approaches
confidence: 99%
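The citing passage above describes an end-to-end policy that maps a raw 2D laser scan to motion commands through a multi-hidden-layer network. Below is a minimal PyTorch sketch of such a policy head; the scan length, layer widths, and velocity bounds are illustrative assumptions, not the architecture reported by Long et al.

import torch
import torch.nn as nn

class LaserToVelocityPolicy(nn.Module):
    """Maps a raw 2D laser scan to (linear, angular) velocity commands."""
    def __init__(self, scan_size: int = 512, hidden: int = 256):
        super().__init__()
        # Four hidden layers, mirroring the "4-hidden-layer" description in the
        # citing text; widths and scan length are illustrative assumptions.
        self.net = nn.Sequential(
            nn.Linear(scan_size, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, scan: torch.Tensor) -> torch.Tensor:
        v, w = torch.tanh(self.net(scan)).unbind(dim=-1)
        # Assumed command ranges: linear velocity in [0, 1] m/s, angular in [-1, 1] rad/s.
        return torch.stack(((v + 1.0) / 2.0, w), dim=-1)

# Usage: one forward pass on a dummy scan batch.
policy = LaserToVelocityPolicy()
commands = policy(torch.rand(1, 512))  # shape (1, 2): [v, w]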
“…Indeed, DRL has shown powerful capabilities in the field of autonomous driving, as discussed in [18] and [19]. It has also brought new and promising solutions to traffic control issues [20], [21]. More details about the DRL controller design are given in Section III.…”
Section: B. Practical Considerations
confidence: 99%
“…Also, we can explain the chattering upon arriving at the equilibrium of USMC-RL. This is because of the continuity of the model: it cannot jump from a very high output (20) to a very small output (2.5). Therefore, when it should produce a small output, it is still on its way down.…”
Section: Constant Value Target: Figures 8-19 Show the Simulation Resul...
confidence: 99%
“…Reinforcement learning (RL) is a model-free methodology that optimizes its actions on large-scale, complex problems through exploration and exploitation, without explicit models [19]. Recently, with the development of deep learning, RL has been combined with deep neural networks to solve many control problems [20][21][22]. Actor-critic learning is one popular RL framework.…”
Section: Introduction
confidence: 99%
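The citing passage above names actor-critic learning as a popular RL framework. The following is a minimal PyTorch sketch of the idea: an actor proposes actions, a critic estimates state value, and a one-step advantage drives both updates. Network sizes, the one-step advantage estimate, and all hyperparameters are illustrative assumptions rather than any method from the cited works.

import torch
import torch.nn as nn

# Illustrative dimensions for a small discrete-action control problem.
obs_dim, n_actions = 4, 2
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=3e-4)

def update(obs, action, reward, next_obs, done, gamma=0.99):
    """One actor-critic update from a single transition."""
    value = critic(obs).squeeze(-1)
    with torch.no_grad():
        target = reward + gamma * (1.0 - done) * critic(next_obs).squeeze(-1)
    advantage = (target - value).detach()

    log_prob = torch.distributions.Categorical(logits=actor(obs)).log_prob(action)
    actor_loss = -(advantage * log_prob).mean()    # policy-gradient step for the actor
    critic_loss = (target - value).pow(2).mean()   # value-regression step for the critic

    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()

# Usage: one update on a dummy transition.
update(torch.randn(1, obs_dim), torch.tensor([0]), torch.tensor([1.0]),
       torch.randn(1, obs_dim), torch.tensor([0.0]))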