2022
DOI: 10.1007/978-3-031-19769-7_29

2D GANs Meet Unsupervised Single-View 3D Reconstruction

Cited by 10 publications (5 citation statements)
References 46 publications

“…Other interesting works include: SIREN [138] and Fourier Feature Networks [142] for improving the representation ability of neural implicits by applying sine activation functions and Fourier feature mapping, respectively; InstantNGP [104] for greatly improving training speed by applying multi-resolution hash encoding; [89] for introducing a method to reconstruct 3D shapes from single-view 2D images with the help of pseudo multi-view images generated by StyleGAN [70]; MeshSDF [121] for designing a differentiable iso-surface extraction algorithm (Marching Cubes) on neural implicits; SAL (Sign Agnostic Learning) [121] for proposing a method to learn neural implicits from unoriented point clouds; NeuralUDF [27] and MeshUDF [50] for proposing neural unsigned distance fields and how to extract explicit meshes from them, respectively; GIFS [179] for proposing a neural field that represents general shapes, including non-watertight shapes and shapes with multi-layer surfaces, by predicting whether two points are separated by any surface instead of the inside-outside status of each point.…”
Section: Neural Implicit (mentioning)
confidence: 99%
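To make the representation ideas quoted above concrete, here is a minimal, hypothetical PyTorch sketch (not code from any of the cited papers) that combines a random Fourier feature mapping of input coordinates with a small sine-activated MLP; all layer sizes and the frequency scale are assumptions chosen for illustration.

# Hypothetical sketch: Fourier feature mapping + sine-activated MLP for a
# coordinate-based implicit network. Not the published SIREN/FFN code.
import math
import torch
import torch.nn as nn


class FourierFeatures(nn.Module):
    """Map 3D coordinates to [sin(2*pi*Bx), cos(2*pi*Bx)] with a fixed random B."""
    def __init__(self, in_dim=3, num_freqs=64, scale=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, num_freqs) * scale)

    def forward(self, x):                        # x: (N, 3)
        proj = 2 * math.pi * x @ self.B          # (N, num_freqs)
        return torch.cat([proj.sin(), proj.cos()], dim=-1)


class SineMLP(nn.Module):
    """Small implicit network with sine activations (SIREN-style)."""
    def __init__(self, in_dim=128, hidden=256, out_dim=1, omega=30.0):
        super().__init__()
        self.omega = omega
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, out_dim)    # e.g. an SDF or occupancy value

    def forward(self, x):
        h = torch.sin(self.omega * self.fc1(x))
        h = torch.sin(self.omega * self.fc2(h))
        return self.fc3(h)


coords = torch.rand(1024, 3)                     # query points in [0, 1]^3
features = FourierFeatures()(coords)             # (1024, 128)
values = SineMLP()(features)                     # (1024, 1) implicit values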
“…Single-View Reconstruction by Cross-Instance Consistency" (UNICORN) [103] and Table 1 in "2D GANs Meet Unsupervised Single-View 3D Reconstruction" (GANSVR) [89] are great summaries of recent works on this topic. We recommend that interested readers take a look.…”
Section: With 2D Supervision (mentioning)
confidence: 99%
“…Another recently popular method is to learn a neural network that approximates the implicit function [16][17][18][19][20][21]. In addition, methods based on generative adversarial networks (GANs) have been applied to 3D shape reconstruction or style reconstruction after making significant progress in 2D generation [22][23][24][25][26]. These methods have achieved remarkable results, but they lack interpretability or cannot provide a detailed step-by-step reconstruction process.…”
Section: Related Work (mentioning)
confidence: 99%
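As a rough illustration of the GAN-based line of work mentioned in the excerpt above, the following is a minimal, hypothetical sketch of one adversarial training step for a generator that outputs a 32^3 occupancy grid; the architectures, batch size, and the random placeholder "real" shapes are assumptions chosen for illustration, not any cited method.

# Hypothetical sketch: one GAN training step on voxelized 3D shapes.
import torch
import torch.nn as nn


class VoxelGenerator(nn.Module):
    def __init__(self, z_dim=128, res=32):
        super().__init__()
        self.res = res
        self.net = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, res ** 3), nn.Sigmoid(),  # occupancy in [0, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, self.res, self.res, self.res)


class VoxelDiscriminator(nn.Module):
    def __init__(self, res=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(res ** 3, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1),                       # real/fake logit
        )

    def forward(self, v):
        return self.net(v)


G, D = VoxelGenerator(), VoxelDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = (torch.rand(8, 1, 32, 32, 32) > 0.5).float()  # placeholder "real" shapes
z = torch.randn(8, 128)

# Discriminator step: push real shapes toward 1, generated shapes toward 0.
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to fool the discriminator.
loss_g = bce(D(G(z)), torch.ones(8, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()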
“…It introduced a mapping network that maps the sampled noise into another latent space which is more disentangled and semantically coherent, as demonstrated by its downstream use for image editing and manipulation [1,33,34,43,44]. Further, StyleGAN has been extended to generate novel views from images [27,29,45], making it possible to extract 3D information from it. These downstream advances are possible due to the impressive performance of StyleGANs on class-specific datasets (such as faces).…”
Section: Related Work (mentioning)
confidence: 99%
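For readers unfamiliar with the mapping network described above, here is a minimal, hypothetical sketch of a StyleGAN-style mapping MLP that turns sampled noise z into an intermediate latent w; the number of layers and dimensions are assumptions chosen for illustration and do not reproduce the published StyleGAN configuration.

# Hypothetical sketch: a mapping network from noise z to an intermediate latent w.
import torch
import torch.nn as nn


class MappingNetwork(nn.Module):
    def __init__(self, z_dim=512, w_dim=512, num_layers=8):
        super().__init__()
        layers, dim = [], z_dim
        for _ in range(num_layers):
            layers += [nn.Linear(dim, w_dim), nn.LeakyReLU(0.2)]
            dim = w_dim
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        # Normalize z before mapping it to the w space, which downstream layers
        # can use to modulate the synthesis network.
        z = z / (z.pow(2).mean(dim=1, keepdim=True) + 1e-8).sqrt()
        return self.net(z)


z = torch.randn(4, 512)
w = MappingNetwork()(z)   # w-space codes, assumed input to a synthesis network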