2020
DOI: 10.48550/arxiv.2008.09092
Preprint
Meta-Sim2: Unsupervised Learning of Scene Structure for Synthetic Data Generation

Abstract: Procedural models are being widely used to synthesize scenes for graphics, gaming, and to create (labeled) synthetic datasets for ML. In order to produce realistic and diverse scenes, a number of parameters governing the procedural models have to be carefully tuned by experts. These parameters control both the structure of scenes being generated (e.g. how many cars in the scene), as well as parameters which place objects in valid configurations. Meta-Sim aimed at automatically tuning parameters given a target …
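To make the abstract's notion of procedural-model parameters concrete, here is a minimal, hypothetical Python sketch (not code from the paper): a toy scene generator whose parameters control both scene structure (how many cars appear) and object placement (which lane, how much lateral jitter). All class and parameter names are illustrative assumptions only.

```python
# Illustrative sketch only; not the Meta-Sim2 implementation.
import random
from dataclasses import dataclass


@dataclass
class SceneParams:
    max_cars: int = 6                       # structural: upper bound on object count
    car_prob: float = 0.5                   # structural: chance of adding each candidate car
    lane_centers: tuple = (-3.5, 0.0, 3.5)  # placement: valid lane offsets in metres
    position_noise: float = 0.4             # placement: lateral jitter in metres


def sample_scene(params: SceneParams, road_length: float = 50.0) -> list:
    """Sample one scene (a list of car placements) from the toy procedural model."""
    cars = []
    for _ in range(params.max_cars):
        if random.random() > params.car_prob:
            continue
        lane = random.choice(params.lane_centers)
        cars.append({
            "x": round(random.uniform(0.0, road_length), 2),                 # along-road position
            "y": round(lane + random.gauss(0.0, params.position_noise), 2),  # lateral position
        })
    return cars


if __name__ == "__main__":
    print(sample_scene(SceneParams()))
```

In this toy setup, the paper's goal corresponds to learning the structural choices (e.g. how many cars, governed here by max_cars and car_prob) from target data rather than having experts tune them by hand.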

Cited by 2 publications (3 citation statements)
References 51 publications (97 reference statements)
“…Another common approach to generating environments is using a probabilistic grammar. This technique has been used to create generative models of buildings [18], traffic scenes [19], and even plants [20]. However, while these methods excel at generating inputs with rich structure, phrasing a generator as a grammar makes it difficult to specify the inherently relational constraints between objects that our domain requires.…”
Section: Related Literature
confidence: 99%
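To illustrate the grammar-based approach the quoted passage refers to, below is a small, hypothetical Python sketch of a probabilistic grammar that expands a Scene symbol into a tree of lanes and cars. The rules and probabilities are invented for illustration and are not taken from the cited works.

```python
# Illustrative sketch of a probabilistic grammar for scene structure.
import random

# Each non-terminal maps to (probability, expansion) pairs; terminals have no entry.
GRAMMAR = {
    "Scene": [(1.0, ["Road"])],
    "Road":  [(0.6, ["Lane", "Road"]),   # recursively add another lane
              (0.4, ["Lane"])],
    "Lane":  [(0.5, ["Car", "Lane"]),    # recursively add another car
              (0.5, [])],
}


def expand(symbol: str):
    """Expand a symbol into a (symbol, children) tree by sampling productions."""
    if symbol not in GRAMMAR:            # terminal symbol, e.g. "Car"
        return (symbol, [])
    r, acc = random.random(), 0.0
    for prob, rhs in GRAMMAR[symbol]:
        acc += prob
        if r <= acc:
            return (symbol, [expand(s) for s in rhs])
    return (symbol, [])                  # fallback if probabilities do not sum to 1


if __name__ == "__main__":
    print(expand("Scene"))
```

Each non-terminal is expanded by sampling one of its productions, which yields rich hierarchical structure but, as the passage notes, makes relational constraints between objects (e.g. minimum spacing between cars) awkward to express directly in the rules.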
“…The goal of data-driven simulation is to learn simulators given observations from the environment to be simulated. Meta-Sim [27,9] learns to produce scene parameters in a synthetic scene. LiDARSim [42] leverages deep learning and physics engine to produce LiDAR point clouds.…”
Section: Data-driven Simulation and Model-based RL
confidence: 99%
“…LiDARSim [42] used a catalog of annotated 3D scenes to sample layouts into which reconstructed objects obtained from a large number of recorded drives are placed, in the quest to achieve diversity for training and testing a LIDAR-based perception system. [27,9,51], on the other hand, learn to synthesize road-scene 3D layouts directly from images without supervision. These works do not model the dynamics of the environment and object behaviors.…”
Section: Introduction
confidence: 99%