2019
DOI: 10.1109/lra.2019.2925731

Deep Visual MPC-Policy Learning for Navigation

Abstract: Humans can routinely follow a trajectory defined by a list of images/landmarks. However, traditional robot navigation methods require accurate mapping of the environment, localization, and planning. Moreover, these methods are sensitive to subtle changes in the environment. In this paper, we propose a Deep Visual MPC-policy learning method that can perform visual navigation while avoiding collisions with unseen objects on the navigation path. Our model PoliNet takes in as input a visual trajectory and the imag…

Cited by 78 publications (78 citation statements)
References 44 publications
“…The main advantage of Gibson V1 is that it generates photo-realistic virtual images for the agent. This enabled seamless sim2real transfer [14], [38]. However, Gibson V1 cannot be used as a test bed for Interactive Navigation because neither the rendering nor the assets (hundreds of 3D photo-realistic models reconstructed from real-world environments) allow for changes in the state of the environment.…”
Section: Interactive Gibson Environment
confidence: 99%
“…These environments have the desired photo and layout realism, and provide sufficient scene complexity. They have enabled the development and benchmarking of learning-based navigation algorithms, and some have allowed relatively easy deployment of such algorithms on real robots [13], [14]. Most of these established simulators, however, fall short of providing interactivity: scans of real worlds are static, and objects cannot be manipulated.…”
Section: Introduction
confidence: 99%
“…Several navigation works [4, 5, 6, 17] have used these methods to learn policies. Some of these approaches additionally combine visual information to improve predictions, as in PoliNet [15]. Another method [9] combines Probabilistic Roadmaps with an RL-based local planner to guide the robot in long-range indoor navigation.…”
Section: Navigation Based On Deep Reinforcement Learning
confidence: 99%
“…Thanks to their wide field of view, they can capture their entire environment in a single image. These sensors are therefore used in many mobile robotics tasks, such as image-based navigation [1], monitoring [2] or simultaneous visual localization and mapping (SLAM) [3].…”
Section: Introduction
confidence: 99%