2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00703
DeepDeform: Learning Non-Rigid RGB-D Reconstruction With Semi-Supervised Data

Cited by 52 publications (27 citation statements)
References 42 publications
“…In order to unambiguously and reliably represent and describe these objects, we need to compute invariant features of the objects under these non-rigid deformations. Therefore, non-rigid tracking and reconstruction (Bozic et al., 2020) are tasks that can directly profit from methods that robustly handle correspondences of surfaces under deformation. Some existing keypoint description approaches approximate geometric intrinsic cues solely from image intensity, as done in the DaLI descriptor (Moreno-Noguer, 2011); however, they suffer from high computational cost and a loss of distinctiveness in exchange for robustness to deformations.…”
Section: arXiv:2203.12016v1 [cs.CV] 22 Mar 2022
confidence: 99%
“…Volumetric fusion based methods [32,56,59,61,66] allow free-form dynamic reconstruction in a template-free, single-view, real-time way, by fusing depth into a canonical model and performing non-rigid deformation. A series of works has been proposed to make volumetric fusion more robust with SIFT features [17], a human articulated skeleton prior [59,61], extra IMU sensors [66], data-driven priors [43], learned correspondences [3], or a neural deformation graph [2]. Since these single-view setups suffer from tracking error in occluded parts, multi-view setups have been introduced to mitigate this problem with improved fusion methods.…”
Section: Related Work
confidence: 99%
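The non-rigid deformation step mentioned in the passage above (warping a canonical model toward the observed depth) is commonly realized with an embedded-deformation-style graph, where each surface point is warped by blending rigid transforms attached to nearby graph nodes. The following is a minimal sketch of that blend; the function name, uniform weighting, and array layout are illustrative assumptions, not taken from any cited implementation.

```python
import numpy as np

def warp_point(v, nodes, rotations, translations, weights):
    """Warp canonical-space point v by blending per-node rigid transforms,
    in the style of embedded deformation used by volumetric fusion methods.

    v            : (3,)   point in the canonical model
    nodes        : (K, 3) deformation-graph node positions g_k
    rotations    : (K, 3, 3) per-node rotation matrices R_k
    translations : (K, 3) per-node translations t_k
    weights      : (K,)   blend weights w_k (assumed to sum to 1)
    """
    warped = np.zeros(3)
    for g, R, t, w in zip(nodes, rotations, translations, weights):
        # Each node contributes a locally rigid motion of v around g_k.
        warped += w * (R @ (v - g) + g + t)
    return warped
```

With identity rotations and zero translations the warp is the identity; a shared translation on all nodes shifts the point rigidly, which is a quick sanity check for such a blend.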
“…Priors like a skeleton [Yu et al. 2017], a parametric body shape, or inertial measurement units are used to facilitate the fusion. [Bozic et al. 2020] apply data-driven approaches to non-rigid 3D reconstruction. Rather than using a strict photometric consistency criterion, [Lombardi et al. 2019] learn a generative model that tries to best match the input images without assuming that objects in the scene are compositions of flat surfaces.…”
Section: Related Work
confidence: 99%
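The "strict photometric consistency criterion" the passage above contrasts with generative models is, at its simplest, a per-pixel intensity error between a rendered prediction and the observed image. A minimal sketch, assuming grayscale images and a boolean validity mask (both names are illustrative):

```python
import numpy as np

def photometric_loss(rendered, observed, valid):
    """Mean squared intensity difference over valid pixels: the basic
    photometric consistency term minimized by classical tracking,
    as opposed to fitting a learned generative image model.

    rendered, observed : (H, W) float intensity images
    valid              : (H, W) bool mask of pixels covered by the render
    """
    diff = (rendered - observed)[valid]
    return float(np.mean(diff ** 2))
```

A perfect render gives zero loss; any mismatch on a valid pixel increases it quadratically, which is why such a criterion is brittle under non-Lambertian effects that generative approaches try to model instead.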