Figure 1: We present a new method for real-time, high-quality 4D (i.e. spatio-temporally coherent) performance capture, allowing for incremental nonrigid reconstruction from noisy input from multiple RGBD cameras. Our system demonstrates unprecedented reconstructions of challenging nonrigid sequences at real-time rates, including robust handling of large frame-to-frame motions and topology changes.

…izing the nonrigid scene motion. Our approach is highly robust to both large frame-to-frame motion and topology changes, allowing us to reconstruct extremely challenging scenes. We demonstrate advantages over related real-time techniques that either deform an online generated template or continually fuse depth data nonrigidly … person removing a worn jacket or interlocked hands separating apart.

… live in full 3D, or even the ability to communicate in real-time with remotely captured people using immersive AR/VR displays. However, despite remarkable progress in offline performance capture over the years (see [Theobalt et al. 2010; Ye et al. 2013 …]), … systems find correspondences by assuming small frame-to-frame motions, which makes the nonrigid estimation brittle in the presence of large movements.

We contribute Fusion4D, a new pipeline for live multi-view performance capture, generating temporally coherent high-quality reconstructions in real-time, with several unique capabilities over this prior work: (1) we make no prior assumption regarding the captured scene, operating without a skeleton or template model, allowing reconstruction of arbitrary scenes; (2) we are highly robust to both large frame-to-frame motion and topology changes, allowing reconstruction of extremely challenging scenes; (3) we scale to multi-view capture from multiple RGBD cameras, allowing for performance capture at qualities never before seen in real-time systems.

1 This is a previous version of the article published in ACM Transactions on Graphics, 2016, 35(4).

This is conceptually similar to the concept of a keyframe or anchor frame used in nonrigid tracking [Guo et al. 2015; Collet et al. 2015; Beeler et al. 2011], but here this concept is extended to online nonrigid volumetric reconstruction.

We take multiple RGBD frames as input and first estimate a segmen…

Raw Depth Acquisition and Preprocessing

In terms of acquisition, our setup is similar to [Collet et al. 2015], but with a reduced number of cameras and no green screen.1 …

Nonrigid Motion Field Estimation

In each frame we observe N depthmaps, {D_n}. … neighboring ED nodes after the uniform sampling. We then represent the local deformation around each ED node g_k using an affine transformation A_k ∈ R^{3×3} and a translation t_k ∈ R^3. In addition, a global rotation R ∈ SO(3) and a translation T ∈ R^3 parameterize the deformation that warps any point v ∈ R^3 to

ṽ = R ( Σ_k w_k(v) [ A_k (v − g_k) + g_k + t_k ] ) + T,

where w_k(v) is the blending weight of ED node k at v. Equally, a normal n will be transformed to

ñ = R Σ_k w_k(v) A_k^{−T} n,

and normalization is applied.

1 A triangulation is also extracted, which we use for rendering.
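The embedded-deformation (ED) warp described above (per-node affine A_k and translation t_k, blended by weights w_k(v), followed by a global rigid motion R, T) can be sketched in a few lines of NumPy. This is a minimal illustrative implementation, not the paper's GPU solver; the function names and the assumption that the weights are precomputed and sum to 1 are ours:

```python
import numpy as np

def ed_warp(v, nodes, A, t, weights, R, T):
    """Warp a point v with an embedded-deformation graph.

    v:       (3,) point to warp
    nodes:   (K, 3) ED node positions g_k
    A:       (K, 3, 3) per-node affine transforms A_k
    t:       (K, 3) per-node translations t_k
    weights: (K,) blending weights w_k(v), assumed to sum to 1
    R, T:    (3, 3) global rotation and (3,) global translation
    """
    # Local deformation per node: A_k (v - g_k) + g_k + t_k
    local = np.einsum('kij,kj->ki', A, v - nodes) + nodes + t
    # Blend by w_k(v), then apply the global rigid motion
    blended = (weights[:, None] * local).sum(axis=0)
    return R @ blended + T

def ed_warp_normal(n, A, weights, R):
    """Transform a normal: R * sum_k w_k(v) A_k^{-T} n, then renormalize."""
    inv_T = np.linalg.inv(A).transpose(0, 2, 1)   # per-node A_k^{-T}
    blended = (weights[:, None] * np.einsum('kij,j->ki', inv_T, n)).sum(axis=0)
    out = R @ blended
    return out / np.linalg.norm(out)
```

With identity per-node transforms and zero translations, the warp reduces to the global rigid motion alone, which is a quick sanity check for an implementation.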
We propose a novel system for portrait relighting and background replacement, which maintains high-frequency boundary details and accurately synthesizes the subject's appearance as lit by novel illumination, thereby producing realistic composite images for any desired scene. Our technique includes foreground estimation via alpha matting, relighting, and compositing. We demonstrate that each of these stages can be tackled in a sequential pipeline without the use of priors (e.g. known background or known illumination) and with no specialized acquisition techniques, using only a single RGB portrait image and a novel, target HDR lighting environment as inputs. We train our model using relit portraits of subjects captured in a light stage computational illumination system, which records multiple lighting conditions, high quality geometry, and accurate alpha mattes. To perform realistic relighting for compositing, we introduce a novel per-pixel lighting representation in a deep learning framework, which explicitly models the diffuse and the specular components of appearance, producing relit portraits with convincingly rendered non-Lambertian effects like specular highlights. Multiple experiments and comparisons show the effectiveness of the proposed approach when applied to in-the-wild images.
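The final stages of the pipeline (per-pixel diffuse and specular shading of the foreground, then alpha compositing over the new background) can be illustrated with a deliberately simplified sketch. The Blinn-Phong specular lobe, the single point light, and all function names below are illustrative stand-ins for the paper's learned per-pixel light-map representation, not its actual model:

```python
import numpy as np

def relight_pixel(albedo, normal, view_dir, light_dir, light_rgb,
                  spec_strength=0.04, shininess=32.0):
    """Shade one pixel as diffuse + specular under a single light.

    Stands in for the learned light maps: the Lambertian term plays the
    role of the diffuse light map, a Blinn-Phong lobe the specular one.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = light_rgb * max(n @ l, 0.0)              # diffuse light term
    h = (l + v) / np.linalg.norm(l + v)                # half vector
    specular = light_rgb * max(n @ h, 0.0) ** shininess
    return albedo * diffuse + spec_strength * specular

def composite(relit_fg, background, alpha):
    """Alpha-composite the relit foreground over the target background."""
    return alpha * relit_fg + (1.0 - alpha) * background
```

Modeling the diffuse and specular terms separately, as the abstract describes, is what lets non-Lambertian effects such as specular highlights survive the relighting, which a purely albedo-times-shading model would miss.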
Object grasping in domestic environments using social robots has enormous potential to help dependent people with a certain degree of disability. In this chapter, the authors use the well-known Pepper social robot to carry out such a task, providing an integrated ROS-based solution to recognize and grasp simple objects. The system was deployed on an accelerator platform (Jetson TX1) so that object recognition could run in real time using RGB-D sensors attached to the robot. Using this system, the authors show that the Pepper robot has great potential for such domestic assistance tasks.