Registration is an important step when processing three-dimensional (3-D) point clouds. Applications for registration range from object modeling and tracking to simultaneous localization and mapping (SLAM). This article presents the open-source Point Cloud Library (PCL) and the tools available for point cloud registration. The PCL incorporates methods for the initial alignment of point clouds using a variety of local shape feature descriptors, as well as methods for refining initial alignments using different variants of the well-known iterative closest point (ICP) algorithm. This article provides an overview of registration algorithms, usage examples of their PCL implementations, and tips for their application. Since the choice and parameterization of the right algorithm for a particular type of data is one of the biggest problems in 3-D point cloud registration, we present three complete examples of data sets (and applications) together with the respective registration pipelines in the PCL. These examples include dense red-green-blue-depth (RGB-D) point clouds acquired by consumer color and depth cameras, high-resolution laser scans from commercial 3-D scanners, and low-resolution sparse point clouds captured by a custom lightweight 3-D scanner on a micro aerial vehicle (MAV).
Registration of 3-D Point Clouds

The problem of consistently aligning two or more point clouds, i.e., sets of 3-D points, is at the core of 3-D registration. Often, the point clouds are acquired by 3-D sensors from different viewpoints. Registration finds the relative pose (position and orientation) between the views in a global coordinate frame such that the overlapping areas between the point clouds match as well as possible; for two examples of registration, see Figure 1. The overall objective of registration is to align the individual point clouds and fuse them into a single point cloud so that subsequent processing steps can be applied to the fused model.
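To make this pairwise alignment step concrete, the following minimal sketch (our illustration, not a listing from the article) aligns two clouds with PCL's pcl::IterativeClosestPoint. The file names source.pcd and target.pcd and the parameter values, e.g., the 5-cm correspondence distance, are placeholders that must be adapted to the data at hand.

#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

int main ()
{
  using PointT = pcl::PointXYZ;
  pcl::PointCloud<PointT>::Ptr source (new pcl::PointCloud<PointT>);
  pcl::PointCloud<PointT>::Ptr target (new pcl::PointCloud<PointT>);

  // Hypothetical input files; any two overlapping scans will do.
  if (pcl::io::loadPCDFile<PointT> ("source.pcd", *source) < 0 ||
      pcl::io::loadPCDFile<PointT> ("target.pcd", *target) < 0)
    return -1;

  pcl::IterativeClosestPoint<PointT, PointT> icp;
  icp.setInputSource (source);
  icp.setInputTarget (target);
  icp.setMaxCorrespondenceDistance (0.05); // reject pairs farther than 5 cm (data dependent)
  icp.setMaximumIterations (50);           // upper bound on ICP iterations
  icp.setTransformationEpsilon (1e-8);     // convergence criterion on the transform change

  // Run the alignment; an initial guess (e.g., from feature-based
  // initial alignment) can be passed as a second argument to align ().
  pcl::PointCloud<PointT> aligned;
  icp.align (aligned);

  if (icp.hasConverged ())
    std::cout << "ICP converged, fitness score: " << icp.getFitnessScore () << "\n"
              << icp.getFinalTransformation () << std::endl;
  return 0;
}

When a rough initial guess is available, e.g., from a feature-based initial alignment as mentioned above, passing it to align () typically speeds up convergence and reduces the risk of getting trapped in a local minimum.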