2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01113
SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving

Cited by 74 publications (27 citation statements). References 18 publications.
“…Therefore, we can easily generate high-dimensional sensing scenarios. SurfelGAN is proposed in [34] to directly generate point cloud data to represent scenarios from the view of the AV. [35] can add new vehicles to collected driving videos to generate realistic video scenarios, where they also consider the motion planning of vehicles.…”
Section: Deep Generative Models (mentioning)
confidence: 99%
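
The excerpt above only gestures at the idea of using a deep generative model (here, a GAN) to synthesize sensor data for scenario generation. As a minimal illustration of the adversarial training it alludes to, the sketch below shows a generic generator/discriminator update in PyTorch; it is not the SurfelGAN model from [34], and the module names, layer sizes, and hyperparameters are illustrative assumptions.

    # Minimal, generic GAN training step (illustration only; NOT the SurfelGAN
    # architecture from [34]). All names and sizes are assumptions.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 64, 1024  # assumed dimensions, for illustration

    generator = nn.Sequential(        # maps latent noise to a synthetic sample
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, data_dim), nn.Tanh(),
    )
    discriminator = nn.Sequential(    # scores a sample as real (1) vs. synthetic (0)
        nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),
    )

    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def train_step(real_batch):
        """One adversarial update: discriminator first, then generator."""
        b = real_batch.size(0)
        fake = generator(torch.randn(b, latent_dim))

        # Discriminator update: push real samples toward 1, generated toward 0.
        d_loss = bce(discriminator(real_batch), torch.ones(b, 1)) \
               + bce(discriminator(fake.detach()), torch.zeros(b, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator update: make the discriminator output 1 for generated samples.
        g_loss = bce(discriminator(fake), torch.ones(b, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

    # Example usage with a random stand-in for a batch of real sensor data:
    # d_loss, g_loss = train_step(torch.randn(8, data_dim))
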
“…In this case, building autonomous vehicle simulators with high fidelity could help (e.g. [144], [145]). It is an important and challenging future research direction to build such a simulator that could take the predictive uncertainty from probabilistic object detectors as inputs, and evaluate probabilistic detection in a full software stack with novel metrics.…”
Section: Better Evaluation Of Probabilistic Object Detection (mentioning)
confidence: 99%
“…Similar in flavour, therefore, to recent approaches proposed for unaligned domain transfer in vision [15], [16], [35], [36], we too consider learning unaligned mappings between simulated world layouts and real radar observations. We model the forward and backward model side-by-side using adversarial and cyclical consistency losses.…”
Section: Related Work (mentioning)
confidence: 99%
“…We model the forward and backward model side-by-side using adversarial and cyclical consistency losses. Unlike in [15], [16], [35], [36], which consider a deterministic one-to-one mapping between domains, we adopt an inherently probabilistic approach. We capture a distribution over possible power returns to account for the stochastic noise processes arising throughout the radar sensing pipeline.…”
Section: Related Work (mentioning)
confidence: 99%
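
The two excerpts above describe CycleGAN-style training: forward and backward mappings between a simulated layout domain and a real radar domain, trained jointly with adversarial and cycle-consistency losses. The sketch below shows how those two loss terms are commonly combined; the module names, layer sizes, and the cycle weight are assumptions for illustration, not the cited method's actual architecture or values.

    # Minimal sketch of unaligned domain transfer with adversarial and
    # cycle-consistency losses (CycleGAN-style). Names and weights are assumptions.
    import torch
    import torch.nn as nn

    def make_mapper(dim=1024):
        # Stand-in for a domain-translation network (real models would be conv nets).
        return nn.Sequential(nn.Linear(dim, 512), nn.ReLU(), nn.Linear(512, dim))

    def make_critic(dim=1024):
        # Stand-in for a per-domain discriminator.
        return nn.Sequential(nn.Linear(dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

    G_sim2real, G_real2sim = make_mapper(), make_mapper()  # forward / backward mappings
    D_real, D_sim = make_critic(), make_critic()           # one critic per domain

    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    lambda_cyc = 10.0  # assumed weight on the cycle-consistency term

    def generator_loss(sim_batch, real_batch):
        """Adversarial + cycle-consistency objective for both mapping directions."""
        fake_real = G_sim2real(sim_batch)   # simulated layout -> real-looking observation
        fake_sim = G_real2sim(real_batch)   # real observation -> simulated-looking layout

        # Adversarial terms: each translated sample should fool its domain critic.
        logits_r, logits_s = D_real(fake_real), D_sim(fake_sim)
        adv = bce(logits_r, torch.ones_like(logits_r)) \
            + bce(logits_s, torch.ones_like(logits_s))

        # Cycle terms: mapping forward then backward should reconstruct the input.
        cyc = l1(G_real2sim(fake_real), sim_batch) + l1(G_sim2real(fake_sim), real_batch)

        return adv + lambda_cyc * cyc

    # Example usage with random stand-ins for a simulated and a real batch:
    # loss = generator_loss(torch.randn(4, 1024), torch.randn(4, 1024))

The second excerpt further replaces the deterministic one-to-one mapping with a distribution over possible radar power returns; in practice that typically means the forward mapper predicts the parameters of an output distribution rather than a single point estimate, though the exact parameterization used in the cited work is not specified here.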