2019
DOI: 10.1007/978-981-13-9364-8_25
Generative Adversarial Networks as an Advancement in 2D to 3D Reconstruction Techniques

Cited by 3 publications (2 citation statements)
References 41 publications
“…The first is the advancement of machine learning research. The growing use of artificial neural networks (ANNs) in computational design reflects the rapid progress of research into generative models (Dhondse, 2020) and true distributions (Alcin, 2019), as well as the increasing computational power and training data sets available (Hodas & Stinis, 2018).…”
Section: Self-organizing Floor Plan at Work
confidence: 99%
“…For example, the 3D Recurrent Reconstruction Neural Network (3D-R2N2), evaluated for multi-view reconstruction on the Stanford Online Products [19] and ShapeNet [20] datasets, achieves competitive results even when few input images are available [21]; a proposed improvement uses a densely connected structure as the encoder and the Chamfer Distance as the loss function [22]. Additionally, Generative Adversarial Networks (GANs) can be used to generate 3D objects from multiple 2D views [23] or even from a single image [24]. GANs have also been shown to predict the former geometry of damaged objects [25].…”
Section: Introduction
confidence: 99%
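The Chamfer Distance mentioned in the citation above is a standard loss for comparing reconstructed and ground-truth point clouds: for each point in one cloud it takes the distance to the nearest point in the other, and averages both directions. The sketch below is a minimal NumPy illustration, not the implementation used in the cited work (some variants use squared distances or sums instead of means):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between two point clouds.

    p: (N, 3) array, q: (M, 3) array. Uses Euclidean (non-squared)
    nearest-neighbour distances, averaged in both directions.
    """
    # Pairwise distance matrix of shape (N, M) via broadcasting.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    # For each point in p, its nearest neighbour in q, and vice versa.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Identical clouds have distance 0; the measure grows as clouds diverge.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(a, b))  # → 0.0
```

Because the nearest-neighbour search is brute force, this version is O(N·M); practical training pipelines replace it with a KD-tree or GPU batched implementation.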