2020
DOI: 10.48550/arxiv.2007.08504
Preprint
Implicit Mesh Reconstruction from Unannotated Image Collections

Abstract: https://shubhtuls.github.io/imr/ Figure 1: Given a single input image, we can infer the shape, texture, and camera viewpoint of the underlying object. In rows 1 and 2, we show the input image, the inferred 3D shape and texture rendered from the predicted viewpoint, and three novel viewpoints. Because we can learn 3D inference using only in-the-wild image collections with approximate instance segmentations, our approach can be easily applied across a diverse set of categories. Rows 3 and 4 show sample predictions across a broad set…

Cited by 15 publications (40 citation statements)
References 28 publications
“…With the advances in differentiable rendering (Kato et al., 2018; Laine et al., 2020), these have also been leveraged in learning-based frameworks for shape prediction (Kanazawa et al., 2018; Gkioxari et al., 2019; Goel et al., 2020) and view synthesis (Riegler and Koltun, 2020). Whereas these approaches use an explicit discrete mesh, some recent methods have proposed using a continuous neural surface parametrization like ours to represent shape (Groueix et al., 2018) and texture (Tulsiani et al., 2020; Bhattad et al., 2021).…”
Section: Related Work
confidence: 99%
“…Inspired by Groueix et al. (2018) and Tulsiani et al. (2020), we address these challenges by adopting a continuous surface representation via a neural network. We illustrate this representation in Fig.…”
Section: Camera Surface
confidence: 99%
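The excerpt above refers to a continuous surface representation: rather than storing a discrete mesh, a neural network maps points on a canonical domain (typically the unit sphere) to points on the object surface. The following is a minimal numpy sketch of that idea — the network shape, initialization, and identity skip connection are illustrative assumptions, not the architecture used in any of the cited papers.

```python
import numpy as np

def sample_sphere(n, seed=0):
    # Uniformly sample n points on the unit sphere S^2 by
    # normalizing Gaussian samples.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

class SurfaceMLP:
    # Tiny MLP f_theta: S^2 -> R^3 that maps each sphere point to a
    # point on the predicted surface (a continuous parametrization:
    # querying more sphere points yields a denser surface sampling).
    def __init__(self, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(3, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.1, size=(hidden, 3))
        self.b2 = np.zeros(3)

    def __call__(self, u):
        h = np.tanh(u @ self.w1 + self.b1)
        # Predict a per-point offset; the identity skip keeps the
        # untrained output close to the sphere template.
        return u + h @ self.w2 + self.b2

u = sample_sphere(1024)      # (1024, 3) canonical sphere points
f = SurfaceMLP()
x = f(u)                     # (1024, 3) predicted surface points
```

In the cited works such a network is trained end-to-end (e.g. through a differentiable renderer), so that the same sphere coordinate corresponds to the same semantic part across instances.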
“…3D Deformable Mesh Representations. Our algorithm is also related to methods [13,14,15,5] that represent instance shapes as mesh deformations of a primitive, i.e., a sphere template.…”
Section: Related Work
confidence: 99%
“…In this work, we introduce a novel canonical space from which dense (i.e., point-level) correspondences for all the shapes of a category can be explicitly obtained. Inspired by 3D mesh representations [13,14,15,5] in which shapes from one category are represented as deformations on top of a shape primitive, we set the canonical space as a 3D UV sphere. Our goal is to learn a "point cloud-to-sphere mapping" such that corresponding parts from different instances overlap when mapped onto the canonical sphere.…”
Section: Introduction
confidence: 99%
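The "point cloud-to-sphere mapping" described above sends each input point to a coordinate on a canonical unit sphere. As a geometric baseline (purely illustrative — the cited work learns this mapping with a network so that semantic parts align across instances), one can center a cloud and project each point radially onto the sphere:

```python
import numpy as np

def to_canonical_sphere(points):
    # Center the cloud and project each point radially onto the unit
    # sphere. A learned mapping would replace this with a neural
    # network; this is only the naive geometric baseline.
    c = points.mean(axis=0)
    d = points - c
    r = np.linalg.norm(d, axis=1, keepdims=True)
    r = np.maximum(r, 1e-8)   # guard against points at the centroid
    return d / r

# Anisotropic toy cloud standing in for a shape instance.
pts = np.random.default_rng(1).normal(size=(500, 3)) * [2.0, 1.0, 0.5]
uv = to_canonical_sphere(pts)   # (500, 3) canonical sphere coordinates
```

This baseline already gives every instance a shared spherical domain; the learning problem in the cited paper is to make the mapping consistent, so that (for example) all wingtips land on the same region of the sphere.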
“…Early methods employ inflation techniques to extract shapes from silhouettes [36,49]. More recent methods learn end-to-end predictors, such as CMR [25], U-CMR [17] and IMR [48], to produce a textured 3D animal mesh from a single image. So far, the outputs of these methods have limited realism and cannot be re-animated.…”
Section: Related Work
confidence: 99%