2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01565

Efficient Geometry-aware 3D Generative Adversarial Networks

Cited by 537 publications (435 citation statements)
References 41 publications
“…An example of a synthetic data rendering process is to center a scan (e.g., objects from ScanNet [66], ShapeNet [67], or DeepVoxels [14]) at the origin, scale it to lie within the unit cube, and render images at sampled viewpoints. The training set can be obtained by …”
[Figure: timeline of 3D-aware generative methods by year of publication, 2019–2023, from S2-GAN [12], PrGAN [13], DeepVoxels [14], HoloGAN [16], and SRN [17] through NeRF [5], GRAF [28], GIRAFFE [32], pi-GAN [2], StyleNeRF [1], EG3D [48], GRAM [49], and StyleSDF.]
Section: Multiple-view Image Collections (mentioning)
confidence: 99%
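
The normalization-and-rendering recipe quoted above is concrete enough to sketch. Below is a minimal NumPy illustration, not code from the cited survey or from EG3D: it centers a point set at the origin, scales it into the unit cube, and samples look-at camera poses on a sphere. The actual image rendering would be delegated to a renderer (e.g., a rasterizer or a NeRF-style ray marcher) and is omitted; all function names are illustrative.

import numpy as np

def normalize_to_unit_cube(points):
    """Center an (N, 3) point set at the origin and scale it to fit in [-0.5, 0.5]^3."""
    center = (points.max(axis=0) + points.min(axis=0)) / 2.0
    points = points - center
    scale = (points.max(axis=0) - points.min(axis=0)).max()  # longest bounding-box side
    return points / scale

def look_at_pose(eye, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 camera-to-world matrix looking from `eye` toward `target`.

    Assumes `eye - target` is not parallel to `up` (the cross products degenerate otherwise).
    """
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    # OpenGL-style camera: x = right, y = up, camera looks down -z.
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = right, true_up, -forward, eye
    return pose

def sample_viewpoints(n_views, radius=2.0, seed=0):
    """Sample camera poses uniformly on a sphere around the normalized object."""
    rng = np.random.default_rng(seed)
    poses = []
    for _ in range(n_views):
        v = rng.normal(size=3)                 # isotropic Gaussian -> uniform direction
        eye = radius * v / np.linalg.norm(v)   # place the camera on a sphere
        poses.append(look_at_pose(eye))
    return poses

Each sampled pose, together with fixed camera intrinsics, defines one training image of the normalized object; repeating this over a collection of scans yields the multi-view training set described in the quote.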
“…Shape quality is evaluated mostly by calculating differences between the rendered depth map and the pseudo-ground-truth depth, e.g., using MSE [48] or a modified Chamfer distance [50]. For example, given two generated images from two sampled angles of the same scene, Shi et al [52] use rotation precision and rotation consistency to evaluate the quality of the depth maps (point clouds).…”
Section: Multi-view 3D Consistency (mentioning)
confidence: 99%
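
For concreteness, here is a minimal NumPy sketch of the two metric families this statement mentions: MSE between a rendered depth map and a pseudo-ground-truth depth map, and a symmetric Chamfer distance between point clouds. The cited works [48, 50] use modified variants; this shows only the basic forms, and the function names are illustrative.

import numpy as np

def depth_mse(rendered, pseudo_gt, mask=None):
    """Mean squared error between two (H, W) depth maps over valid pixels."""
    if mask is None:
        mask = np.isfinite(rendered) & np.isfinite(pseudo_gt)
    diff = rendered[mask] - pseudo_gt[mask]
    return float(np.mean(diff ** 2))

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between (N, 3) and (M, 3) point clouds.

    Brute-force O(N*M) pairwise distances; large clouds would use a KD-tree.
    """
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M) squared distances
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

A depth map plus known camera intrinsics can be back-projected into a point cloud, which is how a depth-based evaluation like the one described above reduces to a Chamfer-style comparison.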