2020
DOI: 10.1007/978-3-030-58542-6_16
AutoSimulate: (Quickly) Learning Synthetic Data Generation

Cited by 16 publications (20 citation statements); references 36 publications.
“…Our approach does not use scene grammar. Most similar to our work is Auto-Simulate [4], which proposed a local approximation of the bilevel optimization to solve the problem efficiently. However, since they optimized non-differentiable simulators such as Blender [10] and Arnold [21], they used a REINFORCE-based [47] gradient update.…”

Section: Related Work
confidence: 94%
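The REINFORCE-based update mentioned in the statement above treats the non-differentiable simulator as a black box and estimates gradients via the score function. The sketch below is illustrative only: the quadratic `simulator_loss` stand-in, the Gaussian parameterization of the simulator parameter, and all hyperparameters are assumptions for demonstration, not the setup used in AutoSimulate.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator_loss(x):
    # Stand-in for a non-differentiable simulator plus validation loss;
    # here a black box minimized at x = 2.0 (hypothetical target).
    return (x - 2.0) ** 2

# Score-function (REINFORCE) estimator for a Gaussian search distribution
# x ~ N(mu, sigma^2):  grad_mu E[L(x)] = E[(L(x) - b) * (x - mu) / sigma^2]
mu, sigma, lr, n_samples = 0.0, 0.5, 0.05, 64
for step in range(200):
    xs = rng.normal(mu, sigma, size=n_samples)
    losses = simulator_loss(xs)
    baseline = losses.mean()  # simple baseline for variance reduction
    grad_mu = np.mean((losses - baseline) * (xs - mu) / sigma**2)
    mu -= lr * grad_mu  # gradient step on the distribution parameter
```

Because only loss *values* are needed, the same loop works when `simulator_loss` is an actual call into Blender or Arnold; the trade-off versus differentiable renderers is the variance of the gradient estimate.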
“…Learning simulator parameters. Related works in this space focus on learning non-differentiable simulator parameters, e.g., learning-to-simulate (LTS) [42], Meta-Sim [30], Meta-Sim2 [11], Auto-Sim [4], and others [51,17,32]. Our work, in contrast, differs in two ways: (i) the renderer used (NeRF vs. traditional rendering engines), and (ii) the optimization approach.…”

Section: Related Work
confidence: 99%