2018
DOI: 10.48550/arxiv.1802.02274
Preprint

A Critical Investigation of Deep Reinforcement Learning for Navigation

Abstract: The navigation problem is classically approached in two steps: an exploration step, where map information about the environment is gathered; and an exploitation step, where this information is used to navigate efficiently. Deep reinforcement learning (DRL) algorithms, alternatively, approach the problem of navigation in an end-to-end fashion. Inspired by the classical approach, we ask whether DRL algorithms are able to inherently explore, gather and exploit map information over the course of navigation. We buil…

Cited by 8 publications (12 citation statements)
References 9 publications
“…A big challenge with applying learning-based methods to path planning is non-generalizability; [20] highlighted the difficulty of learned methods in generalizing to the full path-planning problem, where the environment obstacles and the start and goal locations are completely randomized. Because of this limitation, hybrid methods have been developed that combine learning-based methods with conventional sampling-based methods [11], [13].…”
Section: A. Learning-based Planning
confidence: 99%
“…Goal-directed visual navigation: There has been considerable interest in using Deep Reinforcement Learning (DRL) algorithms for the goal-driven visual navigation of robots (Mirowski et al., 2016, 2017; Dhiman et al., 2018; Gupta et al., 2017; Savinov, Dosovitskiy, and Koltun, 2018). Mirowski et al. (2016) demonstrate that a DRL algorithm called Asynchronous Advantage Actor Critic (A3C) can learn to find a goal in 3D navigation simulators, using only a front-facing first-person view as input, while Mirowski et al. (2017) demonstrate goal-directed navigation in Google's Street View graph.…”
Section: Related Work
confidence: 99%
“…Moving the successes of these works from simulations to the real world is an active area of research because of the high sample complexity of model-free RL algorithms (Zhu et al., 2017; Anderson et al., 2018). Dhiman et al. (2018) empirically evaluate Mirowski et al. (2016)'s approach and show that when goal locations are dynamic, the paths chosen to reach the goal are often far from optimal. In contrast to our method, these works focus on the navigation domain and employ domain-specific auxiliary rewards and data structures, making them less generalizable to other multi-goal tasks.…”
Section: Related Work
confidence: 99%