2020
DOI: 10.48550/arxiv.2012.02924
Preprint

iGibson 1.0: a Simulation Environment for Interactive Tasks in Large Realistic Scenes

Abstract: We present iGibson, a novel simulation environment to develop robotic solutions for interactive tasks in large-scale realistic scenes. Our environment contains fifteen fully interactive home-sized scenes populated with rigid and articulated objects. The scenes are replicas of 3D-scanned real-world homes, aligning the distribution of objects and layout to that of the real world. iGibson integrates several key features to facilitate the study of interactive tasks: i) generation of high-quality visual virtual sens…
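As a rough illustration of the environment the abstract describes, here is a minimal usage sketch driving an iGibson scene through its gym-style environment wrapper. This is an assumption based on the iGibson documentation, not code from the paper; the config file name and the random action are placeholders.

# Hypothetical sketch (assumed API, not from the paper): stepping an
# interactive iGibson scene via the gym-style wrapper.
from igibson.envs.igibson_env import iGibsonEnv

# Config name is a placeholder; real configs ship with the iGibson repo.
env = iGibsonEnv(config_file="fetch_interactive_nav.yaml", mode="headless")
obs = env.reset()  # dict of virtual sensor signals (e.g. RGB, depth)
for _ in range(10):
    action = env.action_space.sample()  # random placeholder action
    obs, reward, done, info = env.step(action)
env.close()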

Cited by 26 publications (29 citation statements)
References 63 publications
“…4), we create five realistic multi-stage simulated household kitchen tasks and collect a large-scale multi-user demonstration dataset. We design the simulated tasks in a realistic kitchen environment using PyBullet [60] and the iGibson [61,62] framework with a Fetch [63] robot that must manipulate a bowl. Across all tasks, the robot's initial pose and the bowl location are randomized between episodes.…”
Section: Simulated Kitchen Dataset (mentioning)
confidence: 99%
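To make the randomization this snippet describes concrete, here is a minimal sketch in PyBullet (the physics engine the cited authors use) of re-sampling an object's pose at the start of each episode. The asset, sampling ranges, and loop lengths are illustrative assumptions, not the cited authors' code.

# Minimal sketch (assumptions, not the cited authors' code): randomize an
# object's pose between episodes in PyBullet.
import random
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.loadURDF("plane.urdf")
obj = p.loadURDF("duck_vhacd.urdf")  # stand-in asset for the bowl

for episode in range(5):
    # Re-sample the object's position at the start of each episode.
    x, y = random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)
    p.resetBasePositionAndOrientation(obj, [x, y, 0.1], [0, 0, 0, 1])
    for _ in range(240):  # one simulated second at 240 Hz
        p.stepSimulation()

p.disconnect()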
“…Both the teleoperator and agent are limited to on-board sensors, and are not provided any privileged information such as low-level object states or a global camera view: the imitation learning policy uses as input only RGB-D images and laser scan point clouds. The visual observations provided by the simulator, iGibson [62], are high-quality, with natural textures and materials rendered by a physics-based renderer, and background and lighting probes obtained from the real world. Moreover, the model used for our simulated robot closely aligns with the real Fetch robot, with identical sensor specifications, kinematics, and control schemes.…”
Section: A10 Sim2real Potential (mentioning)
confidence: 99%
“…Hypersim [16] and OpenRooms [17] develop simulators for indoor object detection. Robotic simulators such as AI2-THOR [18], Habitat [19,20], NVIDIA Isaac Sim [21], and iGibson [22] focus largely on embodied AI tasks. More generic tools for object detection dataset generation include BlenderProc [23], BlendTorch [24], NVISII [25], and the Unity Perception package [7].…”
Section: Related Work (mentioning)
confidence: 99%
“…In the past few years, researchers have developed many simulation environments [6,7,13,5,9] to serve as training and evaluation platforms for embodied agents. These simulation environments propel research progress in a wide range of embodied tasks, including vision-and-language task completion [10,30], rearrangement [12,7], navigation [9,13], manipulation [31,32] and human-robot collaboration [5]. Recently, AllenAct [33] integrates a set of embodied environments (such as iTHOR, RoboTHOR, Habitat [9], etc.…”
Section: Related Work (mentioning)
confidence: 99%
“…Embodied artificial intelligence (EAI) has attracted significant attention, both in advanced deep learning models and algorithms [1,2,3,4] and in the rapid development of simulated platforms [5,6,7,8,9]. Many open challenges [10,11,12,13] have been proposed to facilitate EAI research. A critical bottleneck in existing simulated platforms [10,12,8,5,14] is the limited number of indoor scenes that support vision-and-language navigation, object interaction, and complex household tasks.…”
Section: Introduction (mentioning)
confidence: 99%