We present a 3D scanning system for deformable objects that uses only a single Kinect sensor. Our method allows a considerable amount of nonrigid deformation during scanning and achieves high-quality results without heavily constraining user or camera motion. We do not rely on any prior shape knowledge, enabling general object scanning with freeform deformations. To deal with drift when nonrigidly aligning the input sequence, we automatically detect loop closures, distribute the alignment error over the loop, and finally use a bundle adjustment algorithm to optimize the latent 3D shape and the nonrigid deformation parameters simultaneously. We demonstrate high-quality scanning results on several challenging sequences, comparing against state-of-the-art nonrigid techniques as well as ground-truth data.
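The drift-distribution step can be sketched concretely. Below is a minimal illustration, not the authors' implementation: it assumes the per-frame alignment error can be summarized by 4x4 rigid poses (the actual system also distributes error over nonrigid deformation parameters), and it spreads a detected loop-closure residual evenly over the loop using a first-order interpolation of the correction. The function name distribute_loop_error and the pose representation are assumptions for illustration.

```python
# Minimal sketch (assumed, not the authors' code): distribute a detected
# loop-closure residual evenly over a loop of accumulated rigid poses
# before the final bundle adjustment refines shape and deformations.
import numpy as np
from scipy.spatial.transform import Rotation as R

def distribute_loop_error(poses):
    """poses: list of 4x4 camera-to-world poses around a detected loop;
    ideally poses[-1] == poses[0], but drift leaves a residual.
    Returns corrected poses with the residual spread over the loop."""
    n = len(poses) - 1
    # Residual transform that should have been the identity.
    err = poses[0] @ np.linalg.inv(poses[-1])
    rotvec = R.from_matrix(err[:3, :3]).as_rotvec()
    tvec = err[:3, 3]
    corrected = []
    for i, P in enumerate(poses):
        # Apply the fraction i/n of the correction to pose i. The rotation
        # fraction follows the geodesic exactly; the translation is
        # interpolated linearly (a first-order choice, fine for small drift).
        frac = np.eye(4)
        frac[:3, :3] = R.from_rotvec(rotvec * (i / n)).as_matrix()
        frac[:3, 3] = tvec * (i / n)
        corrected.append(frac @ P)
    return corrected
```

At i = n the full correction is applied, so the loop closes exactly; whatever nonrigid residual remains is then handled by the bundle adjustment described in the abstract.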
Figure 1: We present a new method for real-time, high-quality 4D (i.e., spatio-temporally coherent) performance capture, allowing for incremental nonrigid reconstruction from noisy input from multiple RGBD cameras. Our system demonstrates unprecedented reconstructions of challenging nonrigid sequences at real-time rates, including robust handling of large frame-to-frame motions and topology changes.

(This is a previous version of the article published in ACM Transactions on Graphics, 2016, 35(4).)

…izing the nonrigid scene motion. Our approach is highly robust to both large frame-to-frame motion and topology changes, allowing us to reconstruct extremely challenging scenes. We demonstrate advantages over related real-time techniques that either deform an online-generated template or continually fuse depth data nonrigidly … a person removing a worn jacket or interlocked hands separating. … live in full 3D, or even the ability to communicate in real time with remotely captured people using immersive AR/VR displays. However, despite remarkable progress in offline performance capture over the years (see [Theobalt et al. 2010; Ye et al. 2013] …), … systems find correspondences by assuming small frame-to-frame motions, which makes the nonrigid estimation brittle in the presence of large movements.

We contribute Fusion4D, a new pipeline for live multi-view performance capture, generating temporally coherent, high-quality reconstructions in real time, with several unique capabilities over this prior work: (1) we make no prior assumptions about the captured scene, operating without a skeleton or template model, allowing reconstruction of arbitrary scenes; (2) we are highly robust to both large frame-to-frame motion and topology changes, allowing reconstruction of extremely challenging scenes; (3) we scale to multi-view capture from multiple RGBD cameras, allowing for performance capture at qualities never before seen in real-time systems.

This is conceptually similar to the keyframe or anchor frame used in nonrigid tracking [Guo et al. 2015; Collet et al. 2015; Beeler et al. 2011], but the concept is extended here to online nonrigid volumetric reconstruction. We take multiple RGBD frames as input and first estimate a segmen…

Raw Depth Acquisition and Preprocessing. In terms of acquisition, our setup is similar to [Collet et al. 2015], but with a reduced number of cameras and no green screen. (A triangulation is also extracted, which we use for rendering.) Our …

Nonrigid Motion Field Estimation. In each frame we observe N depthmaps {D_n} … neighboring ED nodes after the uniform sampling. We then represent the local deformation around each ED node g_k using an affine transformation A_k ∈ R^{3×3} and a translation t_k ∈ R^3. In addition, a global rotation R ∈ SO(3) and translation T ∈ R^3 parameterize the deformation, which warps any point v ∈ R^3 to

  v̂ = R ( Σ_k w_k(v) [ A_k (v − g_k) + g_k + t_k ] ) + T,

where w_k(v) are the skinning weights of v with respect to its neighboring ED nodes. Similarly, a normal n is transformed to

  n̂ = R ( Σ_k w_k(v) A_k^{−T} n ),

and normalization is applied afterward.
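As a concrete reading of the warp above, here is a minimal NumPy sketch, not the paper's code; the function names warp_point and warp_normal and the dense weight vector w are assumptions for illustration:

```python
# Minimal sketch (assumed names, not the paper's implementation) of the
# embedded-deformation warp: each ED node g_k carries an affine A_k and a
# translation t_k; a point is skinned over its neighboring nodes, then
# moved by the global rotation R and translation T.
import numpy as np

def warp_point(v, g, A, t, w, R_glob, T_glob):
    """v: (3,) point; g: (K,3) ED node positions g_k; A: (K,3,3) affines;
    t: (K,3) translations; w: (K,) skinning weights w_k(v), nonzero only
    for the neighboring nodes of v; R_glob: (3,3); T_glob: (3,)."""
    local = np.zeros(3)
    for k in range(len(g)):
        # Each node deforms its neighborhood affinely about g_k.
        local += w[k] * (A[k] @ (v - g[k]) + g[k] + t[k])
    return R_glob @ local + T_glob

def warp_normal(n, A, w, R_glob):
    """Normals transform by the inverse transpose of each A_k and are
    renormalized to unit length afterward."""
    m = np.zeros(3)
    for k in range(len(A)):
        m += w[k] * (np.linalg.inv(A[k]).T @ n)
    m = R_glob @ m
    return m / np.linalg.norm(m)
```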