2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00328

Deep Optimized Priors for 3D Shape Modeling and Reconstruction


Cited by 24 publications (17 citation statements). References 20 publications.
“…Note that the most recent implicit methods [79], [80] allow the priors of model parameters to be further optimized when fitting to the observed points, i.e., optimizing over both the latent code and model parameters in Eq. (11) or Eq.…”
Section: Learning-based Priors (mentioning)
confidence: 99%
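The joint optimization described in this excerpt, fitting both the latent code and the pretrained network weights to the observed points rather than freezing the prior, can be sketched as follows. This is a minimal sketch assuming a DeepSDF-style auto-decoder; the decoder interface, tensor shapes, loss, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: jointly optimize a latent code AND the pretrained decoder
# weights when fitting to observed points (the decoder's parameters act as a
# learned prior that is further refined per shape).
# Assumptions: `decoder` is a pretrained module mapping (latent, xyz) -> SDF,
# `points` are observed query points, `sdf_targets` their target SDF values.
import torch

def fit_shape(decoder, points, sdf_targets, steps=500, lr=1e-3, reg=1e-4):
    latent = torch.zeros(1, 256, requires_grad=True)  # per-shape latent code z
    # Optimize over both the latent code and the model parameters,
    # instead of keeping the pretrained decoder frozen.
    opt = torch.optim.Adam([latent] + list(decoder.parameters()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = decoder(latent.expand(points.shape[0], -1), points)
        loss = torch.nn.functional.l1_loss(pred.squeeze(-1), sdf_targets)
        loss = loss + reg * latent.pow(2).sum()  # Gaussian prior on z
        loss.backward()
        opt.step()
    return latent, decoder
```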
“…This work focuses on soft robot shape representation through a triangle-based mesh to ensure compatibility with modern graphics engines. In the broader domain of shape reconstruction, certain computational techniques have been applied to reconstruct meshes from differing data sources, including 3D point clouds (Yang et al., 2021) or 2D image inputs (Kolotouros et al., 2019; Nguyen et al., 2022). Our method relies on skeletal animation to visualize the deformed soft robot shape from the sensor data in real-time.…”
Section: Related Work (mentioning)
confidence: 99%
“…Deep Shape Prior. Besides the priors reviewed above, shape priors can also be captured by parameters in neural networks in shape reconstruction [3,16,20,21,25,28,34,65,74,82], segmentation [45,60,61], and completion [32,33,76,80,84]. Deep manifold prior [17] was introduced to reconstruct 3D shapes starting from random initializations.…”
Section: Related Work (mentioning)
confidence: 99%
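The idea of a prior captured purely by network parameters fitted from a random initialization can be illustrated with the bare-bones sketch below. This is an assumed formulation (an on-surface SDF term plus a simple off-surface penalty), not the deep manifold prior method of [17]; in practice additional regularizers such as an eikonal term and careful sampling are used.

```python
# Sketch: a randomly initialized MLP is fitted directly to observed surface
# points, so the network architecture itself acts as the only shape prior.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(3, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
points = torch.rand(2048, 3)  # placeholder observed surface points in the unit cube

for _ in range(500):
    opt.zero_grad()
    off = torch.rand(2048, 3)  # random off-surface samples
    # Surface points should lie on the zero level set; off-surface samples
    # are pushed away from zero to avoid the degenerate all-zero solution.
    loss = net(points).abs().mean() + 0.1 * torch.exp(-100.0 * net(off).abs()).mean()
    loss.backward()
    opt.step()
```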