2019
DOI: 10.48550/arxiv.1904.01201
Preprint

Habitat: A Platform for Embodied AI Research

Cited by 26 publications (63 citation statements): 1 supporting, 62 mentioning, 0 contrasting.
References 0 publications.
“…We endow the dataset with the same structure as AVD by densely sampling navigable positions at 30cm intervals on the occupancy map of each scene. At each navigable position, we render RGB images, semantic segmentations and depth images from 12 different orientations at 30° intervals through Habitat-Sim [20]. The images are connected through actions based on their spatial neighborhood and orientations.…”
Section: Methods
Citation type: mentioning
Confidence: 99%
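The sampling scheme in the statement above (a 30 cm grid of navigable positions, each rendered from 12 headings at 30° intervals) can be sketched as follows. This is a minimal illustration, not the dataset's actual pipeline: the boolean occupancy map and the viewpoint enumeration are stand-ins, and the real rendering is done through Habitat-Sim.

```python
GRID_STEP_M = 0.30                       # 30 cm spacing between navigable positions
NUM_HEADINGS = 12                        # 12 orientations per position
HEADING_STEP_DEG = 360 // NUM_HEADINGS   # 30-degree intervals

def sample_viewpoints(occupancy):
    """Yield (x, y, heading_deg) for every navigable cell.

    `occupancy` is a 2D list of booleans (True = navigable), a
    placeholder for the per-scene occupancy map mentioned in the text.
    In the dataset, each yielded viewpoint would be rendered to RGB,
    semantic-segmentation, and depth images via Habitat-Sim.
    """
    for row, cells in enumerate(occupancy):
        for col, navigable in enumerate(cells):
            if not navigable:
                continue
            x, y = col * GRID_STEP_M, row * GRID_STEP_M
            for k in range(NUM_HEADINGS):
                yield (x, y, k * HEADING_STEP_DEG)

# Tiny 2x2 map with one blocked cell -> 3 positions x 12 headings = 36 views.
occupancy = [[True, True],
             [True, False]]
views = list(sample_viewpoints(occupancy))
print(len(views))   # 36
print(views[0])     # (0.0, 0.0, 0)
```

Connecting neighboring viewpoints through actions, as the statement describes, would then amount to linking grid cells 30 cm apart and headings 30° apart into a graph.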
“…Publicly available simulated environments are playing an important role in the development of RL methods, provide a common ground for comparing different approaches, and allow to track the progress of the field. Simulated environments address various general aspects of reinforcement learning research such as control [48], navigation [50], [51], [52], [53], physical interactions [49] and perception [54]. More domain-specific environments explore such fields as robotics [55], [56], [57] and autonomous driving [58].…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
“…Extensive work has demonstrated that, after learning with indirect supervision from a reward function, rich representations for their task automatically emerge (Bansal et al, 2017;Lowe et al, 2017;Jaderberg et al, 2019). Several recent works have created 3D embodiment simulation environment (Kolve et al, 2017;Brodeur et al, 2017;Savva et al, 2017;Das et al, 2018;Xia et al, 2018;Savva et al, 2019) for navigation and visual question answering tasks. To train these models, visual navigation is often framed as a reinforcement learning problem (Chen et al, 2015;Giusti et al, 2015;Oh et al, 2016;Abel et al, 2016;Bhatti et al, 2016;Daftry et al, 2016;Mirowski et al, 2016;Brahmbhatt & Hays, 2017;Zhang et al, 2017a;Zhu et al, 2017a;Kahn et al, 2018).…”
Section: Related Work
Citation type: mentioning
Confidence: 99%