2023
DOI: 10.1109/tro.2023.3235591
Close the Optical Sensing Domain Gap by Physics-Grounded Active Stereo Sensor Simulation

Cited by 13 publications (3 citation statements)
References 74 publications
“…A major obstacle is the high demand for massive amounts of interaction data, which are costly and time-consuming to collect in reality. Sim2Real addresses this problem by enabling robots to be trained in simulation and transferring the learned policies to real robots with minimal or zero real-world data [7], [8], [9]. Furthermore, access to ground-truth environment states in simulation accelerates the policy learning process.…”
Section: Zero-shot Sim2Real
confidence: 99%
“…RFUniverse supports a set of physically realistic sensors to equip the agent with the capability of perception. For visual input, we leverage ray-tracing techniques and integrate the IR-based depth rendering proposed in SAPIEN [50] into RFUniverse, which mimics the sensor noise of IR-based depth sensors such as the RealSense D415 camera. Besides, we also note the trend of vision-based tactile sensing research in the robotics community [34,8,42,39,37].…”
Section: Multi-modal Sensing
confidence: 99%
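
The depth rendering referenced above works like a real active stereo sensor: an IR pattern is projected into the scene, two IR views are rendered, and depth is recovered by stereo matching, so the simulated output inherits realistic sensor noise. Below is a minimal sketch of the matching step using OpenCV's semi-global block matcher; the focal length, baseline, matcher settings, and helper name are illustrative assumptions, not the cited implementation.

import numpy as np
import cv2

FOCAL_PX = 430.0    # focal length in pixels (assumed value)
BASELINE_M = 0.055  # stereo baseline in meters (assumed value)

def depth_from_ir_pair(ir_left: np.ndarray, ir_right: np.ndarray) -> np.ndarray:
    """Recover a metric depth map from a rectified 8-bit IR image pair."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,  # disparity search range; must be divisible by 16
        blockSize=7,         # matching window size
    )
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(ir_left, ir_right).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0  # unmatched pixels keep depth 0
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]  # z = f * b / d
    return depth

# Usage with placeholder arrays standing in for rendered IR views:
ir_l = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
ir_r = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
depth = depth_from_ir_pair(ir_l, ir_r)

Because depth comes from matching rather than from the renderer's ground-truth z-buffer, artifacts such as invalid pixels at occlusion boundaries emerge naturally, which is what narrows the sim-to-real gap for depth-based perception.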
“…Image Labels: To generate synthetic image observations (RGB and depth) as well as pose and instance segmentation labels from the precomputed GIGA scenes, we use the ray-tracing-based renderer from SAPIEN [13] and its realistic depth feature [37]. All the textures, materials, lights, and table shapes are randomized; see Fig.…”
Section: Data Generation
confidence: 99%
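
To make the randomization described above concrete, each scene can draw a fresh combination of texture, material, lighting, and table geometry before rendering. The sketch below is an illustrative sampler in plain Python; every parameter name, option list, and range here is a hypothetical stand-in rather than the cited work's actual settings.

import random

def sample_scene_randomization(rng: random.Random) -> dict:
    """Draw one randomized rendering configuration (illustrative only)."""
    return {
        "texture": rng.choice(["wood", "metal", "fabric", "checker"]),
        "base_color": [rng.uniform(0.2, 1.0) for _ in range(3)],   # RGB albedo
        "roughness": rng.uniform(0.1, 0.9),                        # PBR roughness
        "light_intensity": rng.uniform(0.5, 3.0),
        "light_direction": [rng.uniform(-1.0, 1.0) for _ in range(3)],
        "table_shape": rng.choice(["rectangular", "round", "l_shaped"]),
    }

# One configuration per scene keeps labels consistent within a scene
# while maximizing appearance diversity across the dataset.
rng = random.Random(0)
configs = [sample_scene_randomization(rng) for _ in range(4)]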