2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.264

A Point Set Generation Network for 3D Object Reconstruction from a Single Image

Abstract: Generation of 3D data by deep neural networks has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collections of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straightforward form of output: point cloud coordinates. …
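The paper's central technical choice is to regress an unordered set of 3D points and to train with set-level distances between predicted and ground-truth point sets (Chamfer distance and Earth Mover's distance). As a rough illustration only, below is a minimal NumPy sketch of one common Chamfer-distance formulation; the function name and the use of mean (rather than sum) aggregation are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between two unordered point sets.

    p: (N, 3) array, q: (M, 3) array. Each point in one set is matched to its
    nearest neighbour in the other set; the two directional averages of the
    squared distances are summed.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Usage: identical point sets have zero Chamfer distance.
pts = np.random.rand(128, 3)
assert np.isclose(chamfer_distance(pts, pts), 0.0)
```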

Cited by 1,813 publications (1,562 citation statements). References 23 publications (32 reference statements).
“…When Gaussian noise was used as an auxiliary input, an array of Gaussian noise was fed forward together with an MRI slice in the training process as follows: 10 different sets of Gaussian noise were first generated and only the "best" set (i.e., the set that yielded the lowest M* loss (Equation 1)) was used to update the DEP model's parameters. Note that this approach is similar to and inspired by the Min-of-N loss in 3D object reconstruction (Fan et al., 2017) and the variety loss in Social GAN (Gupta et al., 2018). In the testing process, 10 different sets of Gaussian noise were generated and the average performance was calculated.…”
Section: Experiments Setup
confidence: 99%
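The Min-of-N scheme quoted above can be sketched as a single training step: draw several Gaussian noise samples, evaluate the loss for each, and back-propagate only through the best one. The sketch below assumes PyTorch and hypothetical `model`, `loss_fn`, `mri_slice`, and `target` objects; it illustrates the scheme, not the cited implementation.

```python
import torch

def min_of_n_step(model, loss_fn, mri_slice, target, optimizer, n_samples=10):
    """One Min-of-N style update: sample several Gaussian noise inputs and
    back-propagate only through the sample that achieves the lowest loss."""
    losses = []
    for _ in range(n_samples):
        noise = torch.randn_like(mri_slice)       # auxiliary Gaussian input
        pred = model(mri_slice, noise)            # hypothetical two-input model
        losses.append(loss_fn(pred, target))
    best = int(torch.stack(losses).argmin())      # index of the "best" noise set
    optimizer.zero_grad()
    losses[best].backward()                       # update on the best sample only
    optimizer.step()
    return losses[best].item()
```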
“…Tulsiani et al. [29] introduced ray-tracing to predict multiple semantics from an image, including a 3D voxel model. However, voxel representation is known to be inefficient and computationally unfriendly [4,30]. For mesh representation, Wang et al. [30] gradually deformed an ellipsoidal mesh given an input image by using graph convolution, but mesh representation requires construction overhead, and graph convolution may result in computational redundancy because masking is needed.…”
Section: Related Work
confidence: 99%
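For context on the mesh-deformation approach mentioned above, one graph-convolution step over mesh vertices can be written as each vertex mixing its own feature with an aggregate of its neighbours' features. The NumPy sketch below is a generic illustration under that assumption, not the architecture of the cited work.

```python
import numpy as np

def graph_conv(vertex_feats, neighbors, w_self, w_neigh):
    """One graph-convolution step over mesh vertices: each vertex combines its
    own feature with the mean feature of its 1-ring neighbours."""
    out = vertex_feats @ w_self
    for v, nbrs in enumerate(neighbors):
        if nbrs:
            out[v] += vertex_feats[nbrs].mean(axis=0) @ w_neigh
    return out

# Usage: 4 vertices with 8-dim features on a tiny adjacency-list graph.
feats = np.random.rand(4, 8)
adjacency = [[1, 2], [0, 3], [0], [1]]
out = graph_conv(feats, adjacency, np.eye(8), 0.5 * np.eye(8))
```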
“…In this regard, we present a novel deep learning method to reconstruct a 3D point cloud representation of an object from a single 2D image. Even though a point cloud representation does not possess the appealing 3D geometric properties of a mesh or CAD model, it is simple and efficient when it comes to transformation and deformation, and can produce high-quality shape models [4].…”
Section: Introduction
confidence: 99%
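The claim above that point clouds are simple and efficient to transform can be made concrete: a rigid transform of an (N, 3) point array is a single matrix multiply plus a broadcast add, with no faces or connectivity to maintain. A minimal NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def transform_points(points, rotation, translation):
    """Apply a rigid transform to an (N, 3) point cloud: one matrix multiply
    plus a broadcast add; no mesh connectivity needs to be updated."""
    return points @ rotation.T + translation

# Usage: rotate a random cloud 90 degrees about the z-axis and shift it in x.
cloud = np.random.rand(1024, 3)
rot_z = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
moved = transform_points(cloud, rot_z, np.array([0.5, 0.0, 0.0]))
```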
“…However, PointNet is not a generative model of point sets; rather, it maps input point sets to outputs such as a model classification or part segmentation. In related work, a conditional generative model of unordered point sets was introduced in [FSG16], where, given an image, a collection of 3D output points is synthesized that captures the coarse shape of the objects in the image. The closest work to ours is Huang et al.…”
Section: Related Work
confidence: 99%
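The distinction drawn above, PointNet consuming point sets versus a conditional generator producing them, can be sketched as an image encoder followed by a fully connected head that regresses N x 3 coordinates. The PyTorch sketch below only illustrates that image-to-point-set mapping; the layer sizes, input resolution, and point count are assumptions, not those of [FSG16].

```python
import torch
import torch.nn as nn

class ImageToPointSet(nn.Module):
    """Illustrative conditional point-set generator: encode an RGB image and
    regress an unordered set of N 3D points (layer sizes are arbitrary)."""

    def __init__(self, num_points=1024):
        super().__init__()
        self.num_points = num_points
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                   # global image feature
        )
        self.head = nn.Linear(128, num_points * 3)     # regress all coordinates

    def forward(self, image):
        feat = self.encoder(image).flatten(1)          # (B, 128)
        return self.head(feat).view(-1, self.num_points, 3)

# Usage: a batch of two 128x128 images -> two point clouds of 1024 points each.
points = ImageToPointSet()(torch.zeros(2, 3, 128, 128))
assert points.shape == (2, 1024, 3)
```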