Abstract: This paper presents a method for pairwise 3D alignment that solves data association by matching scan segments across scans. Generating accurate segment associations makes it possible to run a modified version of the Iterative Closest Point (ICP) algorithm in which the search for point-to-point correspondences is constrained to associated segments. The novelty of the proposed approach lies in the segment matching process, which takes into account the proximity of segments, their shape, and the consistency of their relative locations in each scan. Scan segmentation is here assumed to be given (recent studies provide various alternatives [10], [19]). The method is tested on seven sequences of Velodyne scans acquired in urban environments. Unlike various standard versions of ICP, which fail to recover the correct alignment when the displacement between scans increases, the proposed method is shown to be robust to displacements of several meters. In addition, it is shown to reduce computation time, which is potentially critical in real-time applications.
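The core idea above, restricting the ICP correspondence search to pairs of associated segments, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the brute-force distance computation, and the `seg_matches` dictionary (mapping source segment IDs to target segment IDs) are all assumptions for the sake of the example.

```python
import numpy as np

def segment_constrained_correspondences(src_pts, src_labels,
                                        tgt_pts, tgt_labels, seg_matches):
    """For each source point, find the nearest target point *within the
    matched segment only* (hypothetical helper; a real implementation
    would use a k-d tree per segment rather than brute force)."""
    pairs = []
    for s_id, t_id in seg_matches.items():
        src = src_pts[src_labels == s_id]
        tgt = tgt_pts[tgt_labels == t_id]
        if len(src) == 0 or len(tgt) == 0:
            continue
        # Pairwise distances restricted to the associated segment pair.
        d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        pairs.extend(zip(src.tolist(), tgt[nearest].tolist()))
    return pairs
```

Because each source point is only compared against one target segment instead of the whole scan, the search space shrinks substantially, which is consistent with the computation-time savings reported in the abstract.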
This paper presents algorithms for fast segmentation of 3D point clouds and subsequent classification of the obtained 3D segments. The method jointly determines the ground surface and segments individual objects in 3D, including overhanging structures. Compared to six other terrain modelling techniques, this approach achieves the smallest error between the sensed data and the representation, and is fast (processing a Velodyne scan in approximately 2 seconds). Applications include improved alignment of successive scans by enabling operations in sections (Velodyne scans are aligned 7% more accurately compared to an approach using raw points) and more informed decision-making (paths move around overhangs). The use of segmentation to aid classification through 3D features, such as the Spin Image or the Spherical Harmonic Descriptor, is discussed and experimentally compared. Moreover, the segmentation facilitates a novel approach to 3D classification that bypasses feature extraction and directly compares 3D shapes via the ICP algorithm. This technique is shown to achieve accuracy on par with the best feature-based classifier (92.1%) while being significantly faster and allowing a clearer understanding of the classifier's behaviour.
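The feature-free classification idea above, labelling a segment by directly comparing its shape against labelled template shapes, can be sketched with a simplified matching score. This is a rough stand-in under stated assumptions: the abstract's method uses ICP for the comparison, whereas here only a centroid alignment followed by a nearest-neighbour residual is shown; the function names and template structure are hypothetical.

```python
import numpy as np

def shape_match_score(query, template):
    """Mean nearest-neighbour residual after centroid alignment --
    a crude stand-in for the ICP fitness score implied by the abstract
    (no iterative rotation refinement is performed here)."""
    q = query - query.mean(axis=0)
    t = template - template.mean(axis=0)
    d = np.linalg.norm(q[:, None, :] - t[None, :, :], axis=2)
    return d.min(axis=1).mean()

def classify_segment(query, labelled_templates):
    """Assign the label of the template whose shape fits best
    (labelled_templates is a list of (label, points) pairs)."""
    label, _ = min(labelled_templates,
                   key=lambda lt: shape_match_score(query, lt[1]))
    return label
```

The appeal of this scheme, as the abstract notes, is interpretability: a misclassification can be inspected directly as a bad shape match rather than traced through an opaque feature vector.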
Abstract: This paper presents an algorithm for segmenting 3D point clouds. It extends terrain elevation models by incorporating two types of representations: (1) ground representations based on averaging the heights in the point cloud, and (2) object models based on a voxelisation of the point cloud. The approach is deployed on Riegl data (dense 3D laser data) acquired in a campus-type environment and compared against six other terrain models. Amongst elevation models, it is shown to provide the best fit to the data, as well as being unique in that it jointly performs ground extraction, overhang representation, and 3D segmentation. We experimentally demonstrate that the resulting model is also applicable to path planning.
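The voxel-based object modelling mentioned above can be illustrated with a minimal connected-components pass over occupied voxels. This sketch assumes ground points have already been removed and uses 26-connectivity flood fill; the function name, voxel size, and the absence of the paper's ground/overhang handling are simplifications for illustration only.

```python
import numpy as np
from collections import deque

def voxel_segments(points, voxel=0.2):
    """Group non-ground points into objects by flood-filling occupied
    voxels that touch (26-connectivity). Hypothetical sketch of
    voxelisation-based 3D segmentation; not the paper's algorithm."""
    keys = np.floor(points / voxel).astype(int)
    occupied = {}                       # voxel key -> point indices
    for i, k in enumerate(map(tuple, keys)):
        occupied.setdefault(k, []).append(i)
    labels = np.full(len(points), -1)
    seg = 0
    for seed in occupied:
        if labels[occupied[seed][0]] != -1:
            continue                    # voxel already assigned
        queue = deque([seed])
        while queue:
            v = queue.popleft()
            if labels[occupied[v][0]] != -1:
                continue
            for i in occupied[v]:
                labels[i] = seg
            # Enqueue the 26 neighbouring voxels that are occupied.
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        n = (v[0] + dx, v[1] + dy, v[2] + dz)
                        if n in occupied and labels[occupied[n][0]] == -1:
                            queue.append(n)
        seg += 1
    return labels
```

Because objects are grouped in full 3D rather than on a 2D elevation grid, overhanging structures (e.g. tree canopies) stay attached to the object they belong to, which is the property the abstract highlights.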
In this paper we address the problem of classifying objects in urban environments based on laser and vision data. We propose a framework based on Conditional Random Fields (CRFs), a flexible modeling tool that allows spatial and temporal correlations between laser returns to be represented. Visual features extracted from color imagery as well as shape features extracted from 2D laser scans are integrated into the estimation process. The paper contains the following novel developments: (1) a probabilistic formulation for the problem of exploiting spatial and temporal dependencies to improve classification; (2) three methods for classification in 2D semantic maps; (3) a novel semi-supervised learning algorithm to train CRFs from partially labeled data; (4) the combination of local classifiers with CRFs to perform feature selection on high-dimensional feature vectors. The system is extensively evaluated on two datasets acquired in two different cities with different sensors. An accuracy of 91% is achieved on a seven-class problem. The classifier is also applied to the generation of a 3 km-long semantic map.
Generating rich representations of environments can significantly improve the autonomy of mobile robots. In this paper we introduce a novel approach to building object-type maps of outdoor environments. Our approach uses conditional random fields (CRFs) to jointly classify laser returns in a 2D scan map into seven object types (car, wall, tree trunk, foliage, person, grass, and other). The spatial connectivity of the CRF model is determined via Delaunay triangulation of the laser map. Our model incorporates laser shape features, visual appearance features, structural information extracted from clusters of laser returns, and visual object detectors trained on image datasets available on the internet. The parameters of the CRF are trained from partially labeled laser and camera data collected by a car moving through an urban environment. Our approach achieves 91% accuracy in classifying objects observed along a 3-kilometer trajectory.
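The Delaunay-based connectivity described above, where CRF pairwise links are the edges of a triangulation of the 2D laser map, can be sketched directly with `scipy.spatial.Delaunay`. The function name is an assumption; node potentials and feature functions, which the actual model requires, are omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

def crf_edges(points_2d):
    """Derive the pairwise links of a CRF from a Delaunay triangulation
    of 2D laser returns (sketch only: features and potentials omitted).
    Returns a sorted list of (i, j) index pairs with i < j."""
    tri = Delaunay(points_2d)
    edges = set()
    for a, b, c in tri.simplices:
        for u, v in ((a, b), (b, c), (a, c)):
            edges.add((min(u, v), max(u, v)))
    return sorted(edges)
```

Delaunay triangulation is a natural choice here: it connects each return to its spatial neighbours without a hand-tuned distance threshold, so nearby returns, which are likely to belong to the same object, share a CRF edge.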