This paper uses GeoEye-1 imagery and airborne lidar (Light Detection and Ranging) data to map buildings and rubble in Port-au-Prince following the Haiti earthquake of 12 January 2010. This is achieved by performing an object-based, one-class-at-a-time land cover classification of the image and lidar data using spectral, textural, and height information. Classification accuracy is about 87 percent overall, and approximately 80 percent for buildings and rubble. Comparison against 200 manually selected damaged buildings within a two sq. km area of the city center shows an accuracy of over 90 percent for building and rubble mapping. 3D building models were generated for approximately 55,000 buildings covering an area of 30 sq. km over Port-au-Prince. Most of the damage is to concrete and masonry structures in the well-planned areas of the city, with very little damage to shelters and temporary houses with metal sheet roofs. The study demonstrates that fusing optical imagery and lidar data can effectively map the nature, severity, extent, and damage patterns caused by earthquakes in densely populated urban areas such as Port-au-Prince.
ABSTRACT: Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and on features in proximity to roads as input for evaluating and prioritizing new or improvement road projects. The information needed for a robust evaluation of road projects includes the road centerline, width, and extent, together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting road features at various levels and scopes. Yet even with many remote sensing data sources and methods available for road extraction, transportation operations require more than centerlines. Acquiring information that is spatially coherent at the operational level for an entire road system is challenging and requires the integration of multiple data sources. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and an existing, spatially inaccurate ancillary road network. We were able to extract 90.25% of a total of 23.6 miles of road networks, together with the estimated road width, average grade along the road, and cross sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.
Abstract. Systematic errors may result from the adoption of an incomplete functional model that cannot properly incorporate all the effects involved in the image formation process. These errors are very likely to appear as systematic residual patterns in image observations and produce deformations of the photogrammetric model in object space. The Brown/Beyer self-calibration model is often adopted in underwater photogrammetry, although it does not account for the refraction introduced by the passage of the optical ray through different media, i.e. air and water. This reduces the potential accuracy of photogrammetry underwater. In this work, we investigate through simulations the depth-dependent systematic errors introduced by unmodelled refraction effects when both flat and dome ports are used. The importance of camera geometry in reducing the deformation in object space is analyzed, and mitigation measures to reduce the systematic patterns in image observations are investigated. It is shown how, for flat ports, a stochastic approach consisting of radial weighting of image observations improves the accuracy in object space by up to 50%. For dome ports, iterative look-up table corrections are instead adopted to reduce the evident systematic residual patterns.
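The simulation setup of the paper is not reproduced here, but the basic geometry of flat-port refraction can be sketched with Snell's law. In the sketch below, all numeric values (refractive indices, focal length, field angles) are illustrative assumptions, not the paper's parameters; the glass thickness of the port is neglected:

```python
import math

N_WATER = 1.34   # approximate refractive index of sea water (assumed)
N_AIR = 1.0
F = 1000.0       # focal length in pixels (illustrative)

def image_radius_flat_port(theta_water):
    """Image radius for a ray arriving at field angle theta_water (rad),
    refracted at a thin flat port from water into air."""
    sin_air = N_WATER / N_AIR * math.sin(theta_water)
    theta_air = math.asin(sin_air)  # valid while sin_air < 1
    return F * math.tan(theta_air)

def image_radius_in_air(theta_water):
    """What an uncorrected in-air pinhole model predicts for the same ray."""
    return F * math.tan(theta_water)

for deg in (5, 15, 25):
    t = math.radians(deg)
    err = image_radius_flat_port(t) - image_radius_in_air(t)
    print(f"{deg:2d} deg: radial error ~ {err:.1f} px")
```

The unmodelled error grows strongly with field angle, which is the intuition behind down-weighting peripheral image observations in the radial weighting scheme described above.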
ABSTRACT: Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating points with similar features into 3-D segments that comply with the nature of the actual objects is affected by the neighborhood, scale, features, and noise, among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential, since point features are calculated within each point's local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing, where it is used especially in pixel labeling problems, and establish it for unstructured 3-D point clouds. The edges connecting points with each other hold the smoothness costs in the spatial domain, while the edges connecting points to nodes representing feature clusters hold the data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem and allows an approximate solution of this NP-hard minimization problem by min-cuts in low-order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm RMSE. We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm, as well as its sensitivity to the parameters of the smoothness and data cost functions.
We find that a smoothness cost considering only a simple distance parameter does not strongly conform to the natural structure of the points. Including shape information in the energy function, by assigning costs based on local properties, may help achieve a better representation for segmentation.
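The graph construction described above can be illustrated with a toy binary example. The sketch below uses the `networkx` library and an s-t min-cut on a tiny 1-D "point cloud"; the feature values, cluster representatives, and smoothness weight are invented for illustration and do not come from the paper:

```python
import networkx as nx

# Toy 1-D "point cloud": one feature value per point (e.g. height)
points = [0.1, 0.2, 0.15, 2.0, 2.1, 1.9]

SRC, SNK = "clusterA", "clusterB"   # two feature clusters (binary case)
C_A, C_B = 0.0, 2.0                 # representative feature values (assumed)
LAMBDA = 0.5                        # smoothness weight (assumed)

G = nx.DiGraph()
for i, f in enumerate(points):
    # data costs: if i lands on the source side (label A), the cut pays
    # the edge i->SNK, so that edge carries the cost of labeling i as A
    G.add_edge(SRC, i, capacity=abs(f - C_B))
    G.add_edge(i, SNK, capacity=abs(f - C_A))
for i in range(len(points) - 1):
    # smoothness cost between neighboring points (symmetric)
    G.add_edge(i, i + 1, capacity=LAMBDA)
    G.add_edge(i + 1, i, capacity=LAMBDA)

cut_value, (side_a, side_b) = nx.minimum_cut(G, SRC, SNK)
labels = ["A" if i in side_a else "B" for i in range(len(points))]
print(labels)   # points near 0 labeled A, points near 2 labeled B
```

The data costs pull each point toward its nearest cluster representative, while the smoothness edges make it expensive to cut between similar neighbors, which is what yields spatially coherent segments.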
ABSTRACT: LiDAR (Light Detection and Ranging) is a routinely employed 3-D data collection technique for topographic mapping. Conventional workflows for analyzing LiDAR data require the ground to be determined before other features of interest are extracted. Filtering the terrain points is one of the fundamental processes for acquiring higher-level information from unstructured LiDAR point data. Many ground-filtering algorithms exist in the literature, spanning several broad categories of strategies. Most of the earlier algorithms examine only local characteristics of the points or grids, such as slope and elevation discontinuities. Since considering only local properties restricts filtering performance due to the complexity of the terrain and the features, some recent methods utilize global properties of the terrain as well. This paper presents a new ground filtering method, Min-cut Based Filtering (MBF), which takes both local and global properties of the points into account. MBF treats ground filtering as a labeling task. First, an energy function is designed on a graph, where the LiDAR points are nodes connected to each other as well as to two auxiliary nodes representing the ground and off-ground labels. The graph is constructed such that data costs are assigned to the edges connecting points to the auxiliary nodes, and smoothness costs to the edges between points. The data and smoothness terms of the energy function are formulated using point elevations and approximate ground information. The data term conveys the likelihood of a point being ground or off-ground, while the smoothness term enforces spatial coherence between neighboring points. The energy function is optimized by finding the minimum cut on the graph via the alpha-expansion algorithm. The resulting graph cut labels the point cloud as ground and off-ground points.
Evaluation of the proposed method on the ISPRS ground filtering test dataset demonstrates that its results are comparable with those of existing methods. The overall average filtering accuracy across the 15 ISPRS test areas is 91.3%.
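The abstract does not give MBF's exact cost functions; the sketch below only illustrates the general shape such terms might take when built from point elevations and an approximate ground surface. The formulas and parameters (`sigma`, `lam`, `tau`) are invented for illustration:

```python
import math

def data_costs(z, z_ground_approx, sigma=0.5):
    """Likelihood-style costs for labeling a point ground vs. off-ground,
    from its height above an approximate ground surface (illustrative)."""
    dz = max(z - z_ground_approx, 0.0)
    cost_ground = dz / sigma                 # high cost if far above ground
    cost_offground = math.exp(-dz / sigma)   # high cost if near the ground
    return cost_ground, cost_offground

def smoothness_cost(z_i, z_j, lam=1.0, tau=0.3):
    """Penalty for giving neighboring points different labels; a small
    elevation difference (likely the same surface) is penalized more."""
    return lam * math.exp(-abs(z_i - z_j) / tau)

g, o = data_costs(z=3.2, z_ground_approx=0.1)
print(g > o)  # a point 3.1 m above the approximate ground favors off-ground
```

In a graph-cut formulation such as MBF's, the two data costs would sit on the edges to the ground and off-ground auxiliary nodes, and the smoothness cost on the edges between neighboring points.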
Abstract. Classification and segmentation of buildings from airborne lidar point clouds commonly involve point features calculated within a local neighborhood. The relative change of the features in the immediate surroundings of each point, as well as the spatial relationships between neighboring points, also need to be examined to account for spatial coherence. In this study, we formulate the point labeling problem as a global graph-cut optimization. We construct the energy function through a graph representing a Markov Random Field (MRF), and obtain the solution to the labeling problem by finding the minimum cut on this graph. We have employed this framework for three different labeling tasks on airborne lidar point clouds: ground filtering, building classification, and roof-plane segmentation. As a follow-up to our previous ground filtering work, this paper examines our building extraction approach on two airborne lidar datasets with different point densities, containing approximately 930K points in one dataset and 750K points in the other. Test results for building vs. non-building point labeling show a 97.9% overall accuracy with a kappa value of 0.91 for the dataset with 1.18 pts/m² average point density, and a 96.8% accuracy with a kappa value of 0.90 for the dataset with 8.83 pts/m² average point density. We achieve 91.2% overall average accuracy in roof-plane segmentation with respect to the reference segmentation of 20 building roofs comprising 74 individual roof planes. In summary, the presented framework can successfully label points in airborne lidar point clouds with different characteristics for all three labeling problems introduced. It is robust to noise in the calculated features owing to the use of global optimization, and it achieves these results with a small training sample size.
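For reference, the overall accuracy and Cohen's kappa figures reported above are computed from a confusion matrix; for the binary building vs. non-building case the computation is as follows (the counts in the usage line are made up for illustration, not the paper's):

```python
def accuracy_and_kappa(tp, fp, fn, tn):
    """Overall accuracy and Cohen's kappa from binary confusion counts
    (tp/fp/fn/tn: true/false positives and negatives)."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement (overall accuracy)
    # expected chance agreement from the row/column marginals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return po, (po - pe) / (1 - pe)

acc, kappa = accuracy_and_kappa(tp=90, fp=10, fn=10, tn=90)
print(acc, kappa)  # overall accuracy 0.9, kappa 0.8
```

Kappa discounts the agreement expected by chance, which is why it is a stricter measure than overall accuracy when the two classes are imbalanced.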