Standardization of evaluation techniques for building extraction is an unresolved issue in the fields of remote sensing, photogrammetry, and computer vision. In this paper, we propose a metric, with the working title 'PoLiS metric', to compare two polygons. The PoLiS metric is a positive definite and symmetric function that satisfies the triangle inequality. It accounts for shape and accuracy differences between the polygons, is straightforward to apply, and requires no thresholds. We show through an example that the PoLiS metric between two polygons changes approximately linearly with respect to small translation, rotation, and scale changes. Furthermore, we compare building polygons extracted from a digital surface model to reference building polygons by computing the PoLiS, Hausdorff, and Chamfer distances. The results show that the quantification of dissimilarity between polygons by the PoLiS distance is consistent with visual perception. Moreover, the Hausdorff and Chamfer distances overrate the dissimilarity when one polygon has more vertices than the other. We propose an approach towards standardizing building extraction evaluation, which may also have broader applications in the field of shape similarity.
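As an illustration of the metric described above: the PoLiS distance averages, for each vertex of one polygon, the distance to the closest point on the other polygon's boundary, and adds the symmetric term. A minimal Python sketch (not the authors' implementation; polygons are assumed to be lists of 2D vertices of implicitly closed rings):

```python
import math

def _point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment from a to b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def _point_boundary_dist(p, poly):
    """Minimum distance from p to the closed boundary of polygon poly."""
    n = len(poly)
    return min(_point_segment_dist(p, poly[i], poly[(i + 1) % n]) for i in range(n))

def polis(A, B):
    """PoLiS distance: symmetrized average vertex-to-boundary distance."""
    da = sum(_point_boundary_dist(a, B) for a in A) / (2 * len(A))
    db = sum(_point_boundary_dist(b, A) for b in B) / (2 * len(B))
    return da + db
```

Note that, consistent with the abstract, the sketch involves no thresholds, and the value is zero only for polygons with coincident boundaries and vertices on each other's boundary.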
ABSTRACT: Hyperspectral imaging sensors exhibit high spectral resolution but typically low spatial resolution. As a consequence, a single pixel may cover several object types, and its spectral signature is then a mixture of their contributions. Such pixels are called mixed pixels. Spectral unmixing methods can be employed to estimate the fractions of light reflected from the different objects within the pixel area. However, spectral unmixing does not provide any spatial information about the sources, so additional information is needed to precisely locate them. In order to restore the spatial information of hyperspectral images, we propose a hyperspectral and multispectral image fusion method based on spectral unmixing. The algorithm is tested with HyMAP image data consisting of 125 spectral bands and a simulated multispectral image consisting of 8 bands.
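The linear mixing model underlying spectral unmixing can be sketched for the simplest two-endmember case, where a pixel spectrum is modelled as x = f·e1 + (1 − f)·e2 and the fraction f is recovered by least squares. This is an illustrative simplification, not the paper's method: real unmixing handles many endmembers and non-negativity constraints.

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Estimate the fraction f of endmember e1 in a mixed pixel under the
    linear mixing model x = f*e1 + (1-f)*e2, with the sum-to-one constraint
    built in.  Closed-form least-squares solution for the 2-endmember case."""
    d = [a - b for a, b in zip(e1, e2)]                      # e1 - e2
    num = sum((x - b) * di for x, b, di in zip(pixel, e2, d))
    den = sum(di * di for di in d)
    f = num / den
    return min(1.0, max(0.0, f))  # clip to the physically valid range [0, 1]
```

A fusion scheme such as the one described would then spatially redistribute the unmixed fractions using the higher-resolution multispectral bands.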
ABSTRACT: This paper presents a novel workflow for data-driven building reconstruction from Light Detection and Ranging (LiDAR) point clouds. The method comprises building extraction, a detailed roof segmentation using region growing with adaptive thresholds, segment boundary creation, and a structural 3D building reconstruction approach using adaptive 2.5D Dual Contouring. First, a 2D grid is overlain on the segmented point cloud. Second, in each grid cell, 3D vertices of the building model are estimated from the corresponding LiDAR points. Then, the number of 3D vertices is reduced in a quad-tree collapsing procedure, and the remaining vertices are connected according to their adjacency in the grid. Roof segments are represented by a Triangular Irregular Network (TIN) and are connected to each other by common vertices or, at height discrepancies, by vertical walls. The resulting 3D building models show a very high accuracy and level of detail, including roof superstructures such as dormers. The workflow is tested and evaluated on two data sets, using the evaluation method and test data of the "ISPRS Test Project on Urban Classification and 3D Building Reconstruction" (Rottensteiner et al., 2012). The results show that the proposed method is comparable to state-of-the-art approaches and outperforms them regarding undersegmentation and completeness of the scene reconstruction.
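The per-cell vertex estimation step can be sketched as follows. For brevity, the sketch estimates each vertex as the centroid of the points in its grid cell, whereas adaptive 2.5D Dual Contouring estimates vertices by minimizing a quadratic error function; the function name is illustrative only.

```python
from collections import defaultdict

def grid_vertices(points, cell_size):
    """Overlay a 2D grid on a 3D point cloud (iterable of (x, y, z) tuples)
    and estimate one 3D vertex per occupied cell.  Here the vertex is the
    centroid of the cell's points -- a stand-in for the QEF minimization
    used by 2.5D Dual Contouring."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell_size), int(y // cell_size))].append((x, y, z))
    # One representative vertex per occupied cell.
    return {c: tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for c, pts in cells.items()}
```

The subsequent quad-tree collapsing would merge neighbouring cells whose vertices are redundant, before connecting the survivors by their grid adjacency.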
This paper proposes to use compression-based similarity measures to cluster spectral signatures on the basis of their similarities. Such universal distances estimate the information shared between two objects by comparing their compression factors, which can be obtained with any standard compressor. Experiments on rock categorization show that these methods may outperform traditional choices of spectral distances based on vector processing.
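A common instance of such compression-based measures is the normalized compression distance (NCD), which compares the compressed size of the concatenation of two objects with their individual compressed sizes. A minimal sketch using zlib as the standard compressor (illustrative; the paper does not prescribe a specific compressor):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: similar inputs compress together
    almost as well as alone, so their NCD is close to 0."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Applied to spectral signatures, the byte serializations of two spectra would be compared this way and the resulting distances fed to a clustering algorithm.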
An automatic quality assessment of buildings extracted from remote sensing imagery is needed to evaluate extraction algorithms or to support change detection. In this paper, four commonly used measures are compared to the newly proposed metric for the comparison of polygons and line segments (PoLiS). The extracted polygons are compared to the reference polygons, and the quality measures are computed for each pair. The symmetric measures, i.e. the quality rate and PoLiS, estimate the overall dissimilarity between polygons, whereas the root mean square error (RMSE) of the distances between the polygon vertices, completeness, and correctness are not symmetric and should therefore be used for applications such as change detection. The variability of the measures is assessed according to the area of the reference buildings; it is higher for the category of larger buildings, where the building polygons are more complex.
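The area-based measures mentioned above can be sketched from the areas of the extracted polygon, the reference polygon, and their overlap; the quality rate is symmetric in the two polygons, while completeness and correctness are not. This is an illustrative sketch with a hypothetical function name, assuming the areas have already been computed.

```python
def quality_measures(area_extracted, area_reference, area_overlap):
    """Area-based evaluation measures for one extracted/reference polygon pair."""
    tp = area_overlap                    # correctly extracted area
    fp = area_extracted - area_overlap   # extracted but not in the reference
    fn = area_reference - area_overlap   # reference area missed by extraction
    completeness = tp / (tp + fn)        # how much of the reference was found
    correctness = tp / (tp + fp)         # how much of the extraction is right
    quality_rate = tp / (tp + fp + fn)   # |A ∩ B| / |A ∪ B|, symmetric
    return completeness, correctness, quality_rate
```

Swapping the roles of the extracted and reference polygons swaps completeness and correctness but leaves the quality rate unchanged, which is what makes the latter (like PoLiS) a symmetric measure.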
Data fusion techniques require a good registration of all the datasets used. In remote sensing, images are usually geo-referenced using GPS and IMU data. However, if a more precise registration is required, image processing techniques can be employed. We propose a method for multi-modal image co-registration between hyperspectral images (HSI) and digital surface models (DSM). The method is divided into three parts: detection of the same objects and lines in the HSI and DSM, line matching, and determination of the transformation parameters. Homogeneous coordinates are used to implement the matching and the adjustment of the transformation parameters. The common objects in the HSI and DSM are building boundaries: they exhibit apparent changes in height and material, which can be detected in the DSM and HSI, respectively. Thus, before matching and computing the transformation parameters, building outlines are detected and adjusted in the HSI and DSM. We test the method on one HSI and two DSMs, using extracted building outlines and, for comparison, also lines extracted with a line detector. The results show that the estimated building boundaries provide more line assignments than the lines from the line detector.
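The convenience of homogeneous coordinates for such line matching can be sketched in two primitives: the line through two 2D points and the intersection of two lines are both obtained with the same cross product. This is an illustrative sketch, not the authors' implementation.

```python
def cross(u, v):
    """Cross product of two homogeneous 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def line_through(p, q):
    """Homogeneous line l with l·x = 0 through points p, q given as (x, y, 1)."""
    return cross(p, q)

def intersect(l1, l2):
    """Intersection point of two homogeneous lines, normalized to (x, y, 1).
    (Parallel lines give w = 0, an ideal point, and are not handled here.)"""
    x, y, w = cross(l1, l2)
    return (x / w, y / w, 1.0)
```

With lines represented this way, matched line pairs from the HSI and DSM can be fed directly into a linear adjustment of the transformation parameters.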