“…In order to unambiguously and reliably represent and describe these objects, we need to compute invariant features of the object under these non-rigid deformations. Therefore, non-rigid tracking and reconstruction (Bozic et al, 2020) are tasks that can directly profit from methods that robustly handle correspondences between surfaces under deformation. Some existing keypoint description approaches approximate intrinsic geometric cues solely from image intensity, as done in the DaLI descriptor (Moreno-Noguer, 2011); however, they suffer from high computational cost and a loss of distinctiveness in exchange for their robustness to deformations.…”
Section: arXiv:2203.12016v1 [cs.CV] 22 Mar 2022
“…Volumetric-fusion-based methods [32,56,59,61,66] allow free-form dynamic reconstruction in a template-free, single-view, real-time manner by fusing depth into a canonical model and applying a non-rigid deformation. A series of works has been proposed to make volumetric fusion more robust using SIFT features [17], a human articulated-skeleton prior [59,61], extra IMU sensors [66], a data-driven prior [43], learned correspondences [3], or a neural deformation graph [2]. Since these single-view setups suffer from tracking errors in occluded regions, multi-view setups with improved fusion methods have been introduced to mitigate this problem.…”
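The core depth-update step described in the snippet above (fusing a live depth frame into a canonical model through a non-rigid warp) can be sketched as a minimal truncated signed distance function (TSDF) update. The function names, the running-average weighting, and the `warp_fn` interface below are illustrative assumptions for exposition, not the implementation of any of the cited systems.

```python
import numpy as np

def fuse_depth(tsdf, weights, voxel_centers, warp_fn, depth, K, trunc=0.05):
    """Update a canonical TSDF grid from one depth frame.

    tsdf, weights  : (N,) arrays over N canonical voxels
    voxel_centers  : (N, 3) canonical voxel positions
    warp_fn        : maps canonical points into the live camera frame
                     (this is the non-rigid deformation)
    depth          : (H, W) depth image in meters
    K              : 3x3 camera intrinsics
    """
    pts = warp_fn(voxel_centers)                # canonical -> live frame
    z = pts[:, 2]
    uv = (K @ pts.T).T                          # project into the image
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    H, W = depth.shape
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(ok, depth[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)], 0.0)
    ok &= d > 0
    sdf = d - z                                 # signed distance along the ray
    ok &= sdf > -trunc                          # discard voxels far behind the surface
    tsdf_obs = np.clip(sdf / trunc, -1.0, 1.0)
    w_new = weights + ok                        # running-average weight update
    tsdf[:] = np.where(ok, (tsdf * weights + tsdf_obs) / np.maximum(w_new, 1), tsdf)
    weights[:] = w_new
    return tsdf, weights
```

Each new frame only touches voxels that project into valid depth, which is why occluded parts drift in single-view setups: their weights never grow, and tracking errors there go uncorrected until another view observes them.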
4D modeling of human-object interactions is critical for numerous applications. However, efficient volumetric capture and rendering of complex interaction scenarios, especially from sparse inputs, remain challenging. In this paper, we propose NeuralFusion, a neural approach for volumetric human-object capture and rendering using sparse consumer RGBD sensors. It marries traditional non-rigid fusion with recent advances in neural implicit modeling and blending, where the captured humans and objects are disentangled layer-wise. For geometry modeling, we propose a neural implicit inference scheme with non-rigid key-volume fusion, as well as a template-aided robust object tracking pipeline. Our scheme enables detailed and complete geometry generation under complex interactions and occlusions. Moreover, we introduce a layer-wise human-object texture rendering scheme, which combines volumetric and image-based rendering in both the spatial and temporal domains to obtain photo-realistic results. Extensive experiments demonstrate the effectiveness and efficiency of our approach in synthesizing photo-realistic free-view results under complex human-object interactions.
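The layer-wise rendering idea in the abstract (human and object rendered as disentangled layers, then combined) can be illustrated with a simple front-to-back "over" compositing of per-layer color, alpha, and depth. This is a hedged sketch: the function name and the plain over operator are assumptions for exposition, not the paper's actual blending scheme.

```python
import numpy as np

def composite_layers(layers):
    """Front-to-back alpha compositing of disentangled layers.

    layers: list of (rgb, alpha, depth) tuples, with rgb of shape (H, W, 3)
            and alpha, depth of shape (H, W). Per pixel, layers are sorted
            by depth so the nearest surface is composited first.
    """
    H, W, _ = layers[0][0].shape
    depths = np.stack([d for _, _, d in layers])          # (L, H, W)
    order = np.argsort(depths, axis=0)                    # per-pixel near-to-far order
    rgbs = np.stack([c for c, _, _ in layers])            # (L, H, W, 3)
    alphas = np.stack([a for _, a, _ in layers])          # (L, H, W)
    out = np.zeros((H, W, 3))
    trans = np.ones((H, W))                               # accumulated transmittance
    for i in range(len(layers)):
        a = np.take_along_axis(alphas, order[i:i + 1], axis=0)[0]
        c = np.take_along_axis(rgbs, order[i:i + 1, ..., None], axis=0)[0]
        out += trans[..., None] * a[..., None] * c        # "over" accumulation
        trans *= (1 - a)                                  # occlusion by nearer layers
    return out
```

Keeping the layers separate until this final composite is what makes per-layer texturing and occlusion handling tractable: each layer can be rendered with its own method (volumetric or image-based) before blending.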
“…Priors like a skeleton [Yu et al 2017], a parametric body shape, or inertial measurement units are used to facilitate the fusion. [Bozic et al 2020] apply data-driven approaches to non-rigid 3D reconstruction. Rather than using a strict photometric consistency criterion, [Lombardi et al 2019] learn a generative model that tries to best match the input images without assuming that objects in the scene are compositions of flat surfaces.…”
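The "strict photometric consistency criterion" mentioned above can be made concrete as a per-pixel residual between a rendered and an observed image; it is this kind of hard per-pixel agreement that [Lombardi et al 2019] relax in favor of a learned generative model. The L2 form and the optional mask here are common illustrative choices, not the cited paper's loss.

```python
import numpy as np

def photometric_residual(rendered, observed, mask=None):
    """Mean squared color difference between a rendered and an observed image.

    rendered, observed : (H, W, 3) float images
    mask               : optional (H, W) boolean array restricting the
                         comparison to valid (e.g. foreground) pixels
    """
    diff = (rendered - observed) ** 2
    if mask is not None:
        diff = diff[mask]
    return diff.mean()
```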