We present a process for rendering a realistic facial performance with control of viewpoint and illumination. The performance is based on one or more high-quality geometry and reflectance scans of an actor in static poses, driven by one or more video streams of a performance. We compute optical flow correspondences between neighboring video frames, and a sparse set of correspondences between static scans and video frames. The latter are made possible by leveraging the relightability of the static 3D scans to match the viewpoint(s) and appearance of the actor in videos taken in arbitrary environments. As optical flow tends to produce accurate correspondences in some areas but not others, we also compute a smoothed, per-pixel confidence map for every computed flow, based on normalized cross-correlation. These flows and their confidences yield a set of weighted triangulation constraints among the static poses and the frames of a performance. Given a single artist-prepared face mesh for one static pose, we optimally combine the weighted triangulation constraints, along with a shape regularization term, into a consistent 3D geometry solution over the entire performance that is drift-free by construction. In contrast to previous work, even partial correspondences contribute to drift minimization, for example, where a successful match is found in the eye region but not the mouth. Our shape regularization employs a differential shape term based on a spatially varying blend of the differential shapes of the static poses and neighboring dynamic poses, weighted by the associated flow confidences. These weights also permit dynamic reflectance maps to be produced for the performance by blending the static scan maps. Finally, as the geometry and maps are represented on a consistent artist-friendly mesh, we render the resulting high-quality animated face geometry and animated reflectance maps using standard rendering tools.
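The per-pixel confidence map described above can be illustrated with a minimal sketch: compare each pixel's neighborhood in one frame against the flow-warped neighbor frame using normalized cross-correlation (NCC), clamp negative correlations to zero, and smooth the result. The patch size, the box-filter smoothing, and the clamping are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def ncc_confidence(img_a, img_b_warped, patch=7, eps=1e-8):
    """Per-pixel flow confidence from normalized cross-correlation.

    img_a:        grayscale frame (H x W float array)
    img_b_warped: neighboring frame warped by the computed flow into
                  img_a's coordinates (same shape)
    Returns a smoothed confidence map in [0, 1]. Patch size and the
    3x3 box smoothing are illustrative assumptions.
    """
    half = patch // 2
    h, w = img_a.shape
    conf = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            a = img_a[y - half:y + half + 1, x - half:x + half + 1].ravel()
            b = img_b_warped[y - half:y + half + 1, x - half:x + half + 1].ravel()
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
            # clamp negative correlation: anti-correlated patches get zero weight
            conf[y, x] = max(0.0, float((a * b).sum() / denom))
    # smooth the raw confidence with a 3x3 box filter (edge-padded)
    padded = np.pad(conf, 1, mode="edge")
    smoothed = np.zeros_like(conf)
    for y in range(h):
        for x in range(w):
            smoothed[y, x] = padded[y:y + 3, x:x + 3].mean()
    return smoothed
```

Identical patches yield confidence near 1, while independently varying regions (e.g. occlusions or flow failures) fall toward 0, so the map can directly serve as a weight on each triangulation constraint.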
The performance of indoor localization methods is highly dependent on the situations in which they are used. Various competitions on indoor localization have been held to fairly compare existing indoor localization methods in shared, controlled testing environments. However, the existing competitions make it difficult to evaluate practical performance in industrial scenarios. This paper introduces two indoor localization competitions, the "PDR Challenge in Warehouse Picking 2017" and the "xDR Challenge for Warehouse Operations 2018," for tracking workers and vehicles in a warehouse scenario. The PDR Challenge in Warehouse Picking 2017 was a unique competition based on data measured during actual picking operations in an actual warehouse. We refer to dead-reckoning of a vehicle as vehicle dead-reckoning (VDR); the term "xDR" is derived from pedestrian dead-reckoning (PDR) plus VDR. As a sequel to the PDR Challenge in Warehouse Picking 2017, the xDR Challenge for Warehouse Operations 2018 was the world's first competition to address tracking forklifts by smartphone-based VDR. In this paper, we first briefly summarize the existing competitions and clarify the characteristics of ours by comparing them with the others. Our competitions are uniquely able to evaluate practical performance in a warehouse by using actual measured data as the test data and by applying multi-faceted evaluation metrics. The competitions were organized successfully, attracting many participants from many countries. We conclude by summarizing the findings of the competitions.
Every year, for ten years now, the IPIN competition has aimed at evaluating real-world indoor localisation systems by testing them in a realistic environment, with realistic movement, using the EvAAL framework. The competition has provided a unique overview of the state of the art in systems, technologies, and methods for indoor positioning and navigation. Through fair comparison of the performance achieved by each system, the competition has been able to identify the most promising approaches and to pinpoint the most critical working conditions. In 2020, the competition included five diverse off-site Tracks, each resembling real use cases and challenges for indoor positioning. The results, in terms of both participation and the accuracy of the proposed systems, have been encouraging. The best-performing competitors obtained a third quartile of error of 1 m for the Smartphone Track and 0.5 m for the Foot-mounted IMU Track. Although the systems were evaluated only as algorithms rather than on physical platforms, these results represent impressive achievements.