This paper presents a method for photo-realistic animation that can be applied to any face shown in a single image or a video. The technique does not require example data of the person's mouth movements, and the image to be animated is not restricted in pose or illumination. Video reanimation allows for head rotations and speech in the original sequence, but neither of these motions is required. In order to animate novel faces, the system transfers mouth movements and expressions across individuals, based on a common representation of different faces and facial expressions in a vector space of 3D shapes and textures. This space is computed from 3D scans of neutral faces, and scans of facial expressions. The 3D model's versatility with respect to pose and illumination is conveyed to photo-realistic image and video processing by a framework of analysis and synthesis algorithms: The system automatically estimates 3D shape and all relevant rendering parameters, such as pose, from single images. In video, head pose and mouth movements are tracked automatically. Reanimated with new mouth movements, the 3D face is rendered into the original images.
Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Animation
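The core of such a vector-space representation is that any face is expressed as a mean shape plus a linear combination of basis directions learned from 3D scans, so an expression offset computed on one identity can be added to the coefficients of another. A minimal sketch of this idea, with random placeholder data standing in for the learned mean and basis (the dimensions and variable names here are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

# Hypothetical dimensions: n vertices, k principal components.
n_vertices, k = 5000, 40
rng = np.random.default_rng(0)

# Mean shape and basis directions (placeholders for values that would
# be learned from 3D scans of neutral and expressive faces).
mean_shape = rng.standard_normal(3 * n_vertices)
shape_basis = rng.standard_normal((3 * n_vertices, k))

def synthesize_shape(alpha):
    """Linear model: face = mean + sum_i alpha_i * basis_i."""
    return mean_shape + shape_basis @ alpha

# Expression transfer: the coefficient offset between an expressive and
# a neutral scan of one person is added to any other identity's coefficients.
alpha_identity = 0.1 * rng.standard_normal(k)
delta_expression = 0.05 * rng.standard_normal(k)
animated = synthesize_shape(alpha_identity + delta_expression)
print(animated.shape)  # one 3D position (x, y, z) per vertex, flattened
```

Because the combination is linear, the same offset vector animates any face in the span of the model, which is what makes transfer across individuals possible without person-specific example data.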
DCE-MRI represents a promising method for the assessment of disease activity in JIA, especially in patients with wrist arthritis. As far as we know, this study is the first to demonstrate the feasibility, reliability and construct validity of DCE-MRI in JIA. These results should be confirmed in large-scale longitudinal studies in view of the method's further application in therapeutic decision making and in clinical trials.
Purpose The quantitative analysis of contrast-enhanced Computed Tomography Angiography (CTA) is essential to assess aortic anatomy, identify pathologies, and perform preoperative planning in vascular surgery. To overcome the limitations of manual and semi-automatic segmentation tools, we apply a deep learning-based pipeline to automatically segment the aortic lumen in CTA scans, from the ascending aorta to the iliac arteries, accounting for 3D spatial coherence. Methods A first convolutional neural network (CNN) is used to coarsely segment and locate the aorta in the whole sub-sampled CTA volume; then three single-view CNNs are used to effectively segment the aortic lumen from axial, sagittal, and coronal planes at higher resolution. Finally, the predictions of the three orthogonal networks are integrated to obtain a segmentation with spatial coherence. Results The coarse segmentation performed to identify the aortic lumen achieved a Dice coefficient (DSC) of 0.92 ± 0.01. Single-view axial, sagittal, and coronal CNNs provided a DSC of 0.92 ± 0.02, 0.92 ± 0.04, and 0.91 ± 0.02, respectively. Multi-view integration provided a DSC of 0.93 ± 0.02 and an average surface distance of 0.80 ± 0.26 mm on a test set of 10 CTA scans. The generation of the ground truth dataset took about 150 h and the overall training process took 18 h. In the prediction phase, the adopted pipeline takes around 25 ± 1 s to produce the final segmentation. Conclusion The achieved results show that the proposed pipeline can effectively localize and segment the aortic lumen in subjects with aneurysm.
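The abstract does not specify how the three orthogonal predictions are integrated; one common and plausible rule is per-voxel majority voting, sketched below with random toy masks standing in for the axial, sagittal, and coronal CNN outputs (the voting rule and all names here are assumptions for illustration, not the paper's actual method):

```python
import numpy as np

# Toy 3D volume; each single-view CNN yields one binary mask per voxel
# (random placeholders stand in for real network predictions).
rng = np.random.default_rng(42)
shape = (8, 8, 8)
axial = rng.integers(0, 2, shape)
sagittal = rng.integers(0, 2, shape)
coronal = rng.integers(0, 2, shape)

# Majority vote: a voxel is labeled aortic lumen if at least two of the
# three orthogonal networks agree, which suppresses single-view errors.
fused = (axial + sagittal + coronal) >= 2

def dice(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

print(fused.shape, round(dice(fused, axial), 2))
```

Fusing views this way is one simple route to the "spatial coherence" the pipeline targets: a voxel misclassified in one plane is outvoted by the two planes that see it correctly.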
The registration of 3D scans of faces is a key step for many applications, in particular for building 3D Morphable Models. Although a number of algorithms are already available for registering data with neutral expression, the registration of scans with arbitrary expressions is typically performed under the assumption of a known, fixed identity. We present a novel algorithm which breaks this restriction, making it possible to register 3D scans of faces with arbitrary identity and expression. Furthermore, our algorithm can process incomplete data, yielding results that are both continuous and have low reconstruction error. Even in the case of complete, expressionless data, our method can yield better results than previous algorithms, thanks to an adaptive smoothing which regularizes the resulting surface only where the estimated correspondence is unreliable.
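The idea of smoothing only where correspondence is unreliable can be illustrated on a toy 1D "surface": each vertex is pulled toward the average of its neighbours in proportion to how little its correspondence is trusted. This is a simplified stand-in with made-up positions and reliability weights, not the paper's actual regularizer:

```python
import numpy as np

# Toy 1D surface: vertex positions along a chain, plus a per-vertex
# reliability in [0, 1] (1 = correspondence trusted, 0 = unreliable).
x = np.array([0.0, 1.0, 2.5, 3.0, 4.0, 5.0])
reliability = np.array([1.0, 1.0, 0.2, 0.2, 1.0, 1.0])

# Adaptive Laplacian-style smoothing: blend each interior vertex with
# the average of its ORIGINAL neighbours (a Jacobi-style update), with
# a blend weight that grows as reliability drops.
smoothed = x.copy()
for i in range(1, len(x) - 1):
    neighbour_avg = 0.5 * (x[i - 1] + x[i + 1])
    w = 1.0 - reliability[i]          # smooth more where unreliable
    smoothed[i] = (1.0 - w) * x[i] + w * neighbour_avg

print(smoothed)
```

Vertices with reliability 1.0 are left untouched, so trusted correspondences keep their detail, while the unreliable middle vertices are regularized toward their neighbourhood.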