Abstract: In this work, we discuss enhanced full 360° 3D reconstruction of dynamic scenes containing non-rigidly deforming objects, using data acquired from commodity depth or 3D cameras. Several approaches for enhanced and full 3D reconstruction of non-rigid objects have been proposed in the literature. These approaches suffer from several limitations due to the requirement of a template, the inability to handle large local deformations and topology changes, the inability to handle highly noisy and low-resolution data, and inabili…
Reconstructing three-dimensional (3D) objects from images has attracted increasing attention due to its wide applications in computer vision and robotic tasks. Despite the promising progress of recent deep learning–based approaches, which directly reconstruct the full 3D shape without considering the conceptual knowledge of the object categories, existing models have limited usage and usually create unrealistic shapes. 3D objects have multiple forms of representation, such as 3D volume, conceptual knowledge, and so on. In this work, we show that the conceptual knowledge for a category of objects, which represents objects as prototype volumes and is structured as a graph, can enhance the 3D reconstruction pipeline. We propose a novel multimodal framework that explicitly combines graph-based conceptual knowledge with deep neural networks for 3D shape reconstruction from a single RGB image. Our approach represents the conceptual knowledge of a specific category as a structure-based knowledge graph. Specifically, conceptual knowledge acts as visual priors and spatial relationships that help the 3D reconstruction framework create realistic 3D shapes with enhanced details. Our 3D reconstruction framework takes an image as input. It first predicts the conceptual knowledge of the object in the image, then generates a 3D object based on the input image and the predicted conceptual knowledge. The generated 3D object satisfies two requirements: (1) it is consistent with the predicted graph in concept, and (2) it is consistent with the input image in geometry. Extensive experiments on public datasets (i.e., ShapeNet, Pix3D, and Pascal3D+) with 13 object categories show that (1) our method outperforms the state-of-the-art methods, (2) our prototype volume-based conceptual knowledge representation is more effective than alternative representations, and (3) our pipeline-agnostic approach can enhance the reconstruction quality of various 3D shape reconstruction pipelines.
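The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the function names (`predict_concept_graph`, `fuse_and_decode`), the toy sizes, and the random "predictions" are all placeholder assumptions; only the overall data flow (image → knowledge graph of prototype volumes and spatial relations → conditioned voxel decoding) follows the abstract.

```python
import numpy as np

N_PARTS, VOX = 4, 8  # toy sizes: object parts per category, voxel grid edge

def predict_concept_graph(image):
    """Stage 1 (placeholder): predict a structure-based knowledge graph.

    Returns per-part prototype volumes (graph nodes) and an adjacency
    matrix encoding spatial relationships between parts (graph edges).
    """
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    prototypes = rng.random((N_PARTS, VOX, VOX, VOX))    # node features
    adjacency = np.triu(np.ones((N_PARTS, N_PARTS)), 1)  # part relations
    adjacency = adjacency + adjacency.T                  # symmetric graph
    return prototypes, adjacency

def fuse_and_decode(image, prototypes, adjacency):
    """Stage 2 (placeholder): decode a voxel grid conditioned on both the
    image and the predicted conceptual knowledge."""
    # One step of graph message passing over the prototype volumes,
    # standing in for the spatial-relationship prior.
    flat = prototypes.reshape(N_PARTS, -1)
    messages = adjacency @ flat / max(adjacency.sum(), 1)
    priors = (flat + messages).reshape(N_PARTS, VOX, VOX, VOX)
    # Combine the part priors into one shape, gated by image evidence.
    shape = priors.max(axis=0) * image.mean()
    return (shape > shape.mean()).astype(np.float32)     # occupancy grid

image = np.full((32, 32, 3), 0.5)
protos, adj = predict_concept_graph(image)
volume = fuse_and_decode(image, protos, adj)
print(volume.shape)  # (8, 8, 8) binary occupancy grid
```

The point of the sketch is the separation of concerns: the graph prediction supplies category-level priors, and the decoder must satisfy both that graph and the input image.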
Recovering the geometry of an object from a single depth image is an interesting yet challenging problem. While previous learning-based approaches have demonstrated promising performance, they do not fully explore the spatial relationships of objects, which leads to unfaithful and incomplete 3D reconstruction. To address these issues, we propose a Spatial Relationship Preserving Adversarial Network (SRPAN), consisting of a 3D Capsule Attention Generative Adversarial Network (3DCAGAN) and a 2D Generative Adversarial Network (2DGAN), for coarse-to-fine 3D reconstruction from a single depth view of an object. First, 3DCAGAN predicts the coarse geometry using an encoder-decoder-based generator and a discriminator. The generator encodes the input as latent capsules, represented as stacked activity vectors with local-to-global relationships (i.e., the contribution of components to the whole shape), and then decodes the capsules by modeling local-to-local relationships (i.e., the relationships among components) with an attention mechanism. Afterwards, 2DGAN refines the local geometry slice by slice, using a generator that learns a global structure prior as guidance and stacked discriminators that enforce local geometric constraints. Experimental results show that SRPAN not only outperforms several state-of-the-art methods by a large margin on both synthetic and real-world datasets, but also reconstructs unseen object categories with higher accuracy.
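The coarse-to-fine refinement stage can be illustrated with a small sketch. This is a hedged toy under stated assumptions: the blending weights, the thresholding, and the `refine_slice_by_slice` helper are illustrative stand-ins for the learned 2DGAN generator and its stacked discriminators; only the structure (a global prior guiding per-slice local refinement) follows the abstract.

```python
import numpy as np

def refine_slice_by_slice(coarse_volume):
    """Refine a coarse voxel grid one 2D slice at a time, guided by a
    global structure prior computed over the whole shape."""
    # Global prior: average occupancy pattern across all slices, a toy
    # stand-in for the generator's learned global structure guidance.
    global_prior = coarse_volume.mean(axis=0)
    refined = np.empty_like(coarse_volume)
    for z, slice2d in enumerate(coarse_volume):
        # Local refinement: blend the slice with the global prior, then
        # re-binarize (standing in for the stacked local discriminators
        # that enforce per-slice geometric constraints).
        blended = 0.7 * slice2d + 0.3 * global_prior
        refined[z] = (blended > 0.5).astype(coarse_volume.dtype)
    return refined

rng = np.random.default_rng(0)
coarse = (rng.random((16, 16, 16)) > 0.4).astype(np.float32)
fine = refine_slice_by_slice(coarse)
print(fine.shape)  # (16, 16, 16)
```

Treating the volume as a stack of 2D slices is what lets the second stage apply cheap 2D refinement while the shared global prior keeps the slices mutually consistent.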