2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros45743.2020.9341511
Learning Your Way Without Map or Compass: Panoramic Target Driven Visual Navigation

Cited by 14 publications (9 citation statements)
References 20 publications
“…The community has built several indoor navigation simulators [41,57,40,27] on top of photo-realistic scans of 3D environments [27,6,47,56,55]. To test a robot's ability to perceive, navigate, and interact with the environment, the community has also introduced several tasks [57,5,45,10,52,36,3,28,48,22,21,51,16,34,33,31,32] and benchmarks. Specifically, Batra et al. [5] introduce evaluation details for the task of Object Navigation, requiring the agent to navigate to a given object class instead of a final point-goal.…”
Section: Related Work
confidence: 99%
“…To train a PointNav policy that does not require a semantic sensor, we used a faster version of Habitat (one that does not support the native semantic-segmentation sensor), the BPS simulator (Shacklett et al. 2021), which was roughly 100x faster. With a similar approach, these works (Kadian et al. 2020; Watkins-Valls et al. 2020) showed the ability to transfer an RL model from a simulated environment to real-world use.…”
Section: Related Work
confidence: 99%
“…A fourth action, called "Done", is executed whenever the agent is within 0.2 m of the goal position. Note that in [1] the agent was trained to learn this trivial task as well, but this was not the case in [11], [6]; we take the latter approach. A schematic of the overall setup is shown in Fig.…”
Section: Problem Setup
confidence: 99%
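The 0.2 m "Done" criterion quoted above reduces to a simple Euclidean-distance check. A minimal sketch (the function and variable names are illustrative assumptions, not taken from the cited work):

```python
import math

GOAL_RADIUS_M = 0.2  # "Done" threshold quoted in the excerpt above

def should_emit_done(agent_xy, goal_xy, radius=GOAL_RADIUS_M):
    """Return True when the agent is within `radius` metres of the goal."""
    dx = agent_xy[0] - goal_xy[0]
    dy = agent_xy[1] - goal_xy[1]
    return math.hypot(dx, dy) <= radius
```

In the setup described by the excerpt, an agent that learns this check implicitly (as in [1]) must also discover when to stop, whereas executing it automatically (as in [11], [6]) removes that burden from the policy.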
“…They demonstrated robust learning of both perception and policy on all three tasks, including transfer to new visual environments as well as to new embodied tasks. In [11], the authors used Imitation Learning to train a robot to navigate in the Gibson simulator with the Dijkstra algorithm and obtained high success rates. In another study, the authors used DRL for target-driven robot navigation in an indoor-scene simulator called AI2THOR [12], where only RGB images of the state and the target are used to train the navigation policy network.…”
Section: Introduction
confidence: 99%
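The target-driven formulation in the last excerpt conditions the policy on two RGB inputs: the current observation and an image of the goal. A minimal sketch of that input plumbing, with a single linear layer standing in for the deep policy network (all names and shapes are illustrative assumptions, not the cited architecture):

```python
import numpy as np

def policy_logits(state_rgb, target_rgb, weights):
    """Map (current view, target view) to action logits.

    Both inputs are H x W x 3 arrays; they are flattened and
    concatenated so the policy can compare where the agent is
    against where it should go. `weights` is a stand-in for the
    learned network parameters.
    """
    x = np.concatenate([state_rgb.ravel(), target_rgb.ravel()])
    return x @ weights
```

The key design point carried over from the excerpt is that no map or pose input appears anywhere: the goal is specified purely as an image, so the same trained policy can be pointed at a new target by swapping `target_rgb`.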