Automated 3D modeling of building interiors is useful in applications such as virtual reality and entertainment. Using a human-operated backpack system equipped with 2D laser scanners and inertial measurement units (IMUs), we develop scan-matching-based algorithms to localize the backpack in complex indoor environments such as a T-shaped corridor intersection, a staircase, and two indoor hallways on separate floors connected by a staircase. When building 3D textured models, we find that the localization resulting from scan matching is not pixel-accurate, which causes misalignment between successive images used for texturing. To address this, we propose an image-based pose estimation algorithm that refines the results of our scan-matching-based localization. Finally, we use the localization results within an image-based renderer to enable virtual walkthroughs of indoor environments using imagery from cameras on the same backpack. Our renderer uses a three-step process to determine which image to display, and a RANSAC framework to estimate homographies that mosaic neighboring images sharing common SIFT features. In addition, our renderer uses plane-fitted models of the 3D point cloud resulting from the laser scans to detect occlusions. We characterize the performance of our image-based renderer on an unstructured set of 2709 images obtained during a five-minute backpack data acquisition for a T-shaped corridor intersection.
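The RANSAC step described above can be sketched as follows. This is a minimal illustration only: a direct-linear-transform (DLT) solver, the iteration count, and the reprojection threshold are assumptions, and synthetic point correspondences stand in for actual matched SIFT keypoints.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_homography(src, dst):
    """Direct linear transform: solve for H mapping src -> dst (needs N >= 4)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the last right singular vector.
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def reproj_error(H, src, dst):
    """Per-correspondence distance between H-projected src and dst."""
    src_h = np.hstack([src, np.ones((len(src), 1))])
    proj = (H @ src_h.T).T
    proj = proj[:, :2] / proj[:, 2:]
    return np.linalg.norm(proj - dst, axis=1)

def ransac_homography(src, dst, n_iter=500, thresh=3.0):
    """RANSAC: fit minimal 4-point samples, keep the largest consensus set."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        try:
            H = fit_homography(src[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue  # degenerate sample
        inliers = reproj_error(H, src, dst) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the full consensus set for the final estimate.
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers

# Synthetic correspondences standing in for SIFT matches between two
# overlapping images: 65 exact matches under a known warp plus 15 outliers.
H_true = np.array([[1.02, 0.01, 5.0], [0.0, 0.98, -3.0], [1e-5, 2e-5, 1.0]])
src = rng.uniform(0, 640, size=(80, 2))
src_h = np.hstack([src, np.ones((80, 1))])
dst = (H_true @ src_h.T).T
dst = dst[:, :2] / dst[:, 2:]
dst[:15] += rng.uniform(20, 120, size=(15, 2))  # corrupt 15 matches

H_est, inliers = ransac_homography(src, dst)
print(inliers.sum(), "inliers out of", len(src))
```

Once the homography is estimated from the consensus set, the neighboring image can be warped into the reference frame to form the mosaic; outlier matches from repeated corridor texture are rejected automatically.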
The classification of urban landscape in aerial LiDAR point clouds is useful for 3D modeling and object recognition in urban environments. In this paper, we introduce a multi-category classification system for identifying water, ground, roof, and trees in airborne LiDAR data. The system is organized as a cascade of binary classifiers, each of which performs unsupervised region growing followed by supervised, segment-wise classification. Categories with the most discriminating features, such as water and ground, are identified first and are used as context for identifying more complex categories, such as trees. We use 3D shape analysis and region growing to identify "planar" and "scatter" regions that likely correspond to ground/roof and trees, respectively. We demonstrate results on two urban datasets, the larger of which contains 200 million LiDAR returns over 7 km². We show that our ground, roof, and tree classifiers, when trained on one dataset, perform well on the other.