The automatic generation of three-dimensional (3D) building models from geospatial data is now a standard procedure. An abundance of literature covers the last two decades, and several solutions are now available. However, urban areas are very complex environments. Inevitably, practitioners still have to visually assess, at city scale, the correctness of these models and detect frequent reconstruction errors. Such a process relies on experts and is highly time-consuming, at approximately two hours/km² per expert. This work proposes an approach for automatically evaluating the quality of 3D building models. Potential errors are compiled in a novel hierarchical and versatile taxonomy, which makes it possible, for the first time, to disentangle fidelity and modeling errors, whatever the level of detail of the modeled buildings. The quality of models is predicted using the geometric properties of buildings and, when available, Very High Resolution images and Digital Surface Models. A baseline of handcrafted, yet generic, features is fed into a Random Forest classifier. Both multiclass and multilabel cases are considered: owing to the interdependence between classes of errors, all errors can be retrieved at once while simply predicting correct and erroneous buildings. The proposed framework was tested on three distinct urban areas in France, comprising more than 3,000 buildings. F-scores of 80%–99% are attained for the most frequent errors. For scalability purposes, the impact of the urban area composition on error prediction was also studied, in terms of transferability, generalization, and representativeness of the classifiers. This showed that multimodal remote sensing data and training samples mixed from various cities are necessary to ensure stable detection ratios, even with very limited training set sizes.
The automatic modeling of urban scenes in 3D from geospatial data has been studied for more than thirty years. However, the output models still have to undergo a tedious correction process at city scale. In this work, we propose an approach for automatically evaluating the quality of 3D building models. A taxonomy of potential errors is first proposed. Handcrafted features are then computed, based on the geometric properties of buildings and, when available, Very High Resolution images and depth data. They are fed into a Random Forest classifier to predict the quality of the models. We tested our framework on three distinct urban areas in France and satisfactorily detect, on average, 96% of the most frequent errors.
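The multilabel error-prediction setup described above can be sketched as follows. This is a minimal illustration, not the papers' exact pipeline: the feature matrix and the four error classes are synthetic placeholders standing in for the handcrafted building features and the error taxonomy. scikit-learn's Random Forest handles multilabel targets natively.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the handcrafted building features
# (geometric, image-based, and height-based statistics).
X = rng.normal(size=(3000, 20))

# Multilabel targets: one binary column per error class.
# These four placeholder classes are NOT the papers' actual taxonomy.
Y = (rng.random(size=(3000, 4)) < 0.2).astype(int)

# A 2D binary target matrix is enough for multilabel training.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:2400], Y[:2400])

# Predict all error labels at once for the held-out buildings.
pred = clf.predict(X[2400:])
print("micro F-score:", f1_score(Y[2400:], pred, average="micro", zero_division=0))
```

On real data, each row would hold one building's features and each column one taxonomy error, so a single forest predicts every error simultaneously, as the abstracts describe.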
Abstract. Detecting planar structures in point clouds is a central step of the point cloud processing pipeline, as many Lidar scans, in particular in anthropic environments, exhibit such planar structures. Many improvements have been proposed to RANSAC and the Hough transform, the two major families of plane detection methods. An important limitation, however, is that these methods detect planes running across the whole scene instead of more localized planar patches. Moreover, they do not exploit the sensor information that often comes with Lidar point clouds (sensor topology and optical center position in particular). In this paper we address both issues: we aim at detecting planar polygons of limited spatial extent, and we exploit sensor topology. The latter is used to enhance a RANSAC framework in two respects: it makes seed point selection more local, and it defines more compact sets of inliers through region growing in sensor space.
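The sensor-space region growing mentioned above can be sketched as follows. This is an illustrative assumption of how such a step might look, not the paper's implementation: inliers are grown over the 2D sensor grid (4-connectivity) rather than by thresholding the whole cloud, which keeps the detected patch spatially compact. The plane parameterization (n·p + d = 0) and the distance threshold are placeholders.

```python
import numpy as np
from collections import deque

def grow_patch(points, seed, normal, d, dist_thresh=0.05):
    """Grow a set of plane inliers from `seed` (row, col) over the sensor grid.

    points: (H, W, 3) array of 3D points organized by the scan's sensor topology.
    normal, d: candidate plane parameters (n . p + d = 0), e.g. from a local seed fit.
    """
    h, w, _ = points.shape
    visited = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    visited[seed] = True
    inliers = []
    while queue:
        r, c = queue.popleft()
        p = points[r, c]
        if abs(np.dot(normal, p) + d) > dist_thresh:
            continue  # point too far from the plane: stop growing through it
        inliers.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected grid
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                visited[nr, nc] = True
                queue.append((nr, nc))
    return inliers

# Toy check: a 10x10 organized scan lying entirely in the z = 0 plane.
pts = np.zeros((10, 10, 3))
ys, xs = np.mgrid[0:10, 0:10]
pts[..., 0], pts[..., 1] = xs, ys
patch = grow_patch(pts, (5, 5), np.array([0.0, 0.0, 1.0]), 0.0)
print(len(patch))  # every grid cell is reached
```

Because growth stops at cells that violate the distance threshold, a plane present in two disconnected parts of the scene yields two separate patches instead of one scene-spanning plane.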
The generation of 3D building models from Very High Resolution geospatial data is now an automated procedure. However, urban areas are very complex, and practitioners still have to visually assess the correctness of these models and detect reconstruction errors. We previously proposed an approach for automatically evaluating the quality of 3D building models, cast as a supervised classification task based on a hierarchical error taxonomy and multimodal handcrafted features (building geometry, optical images, height data). In this paper, we evaluate how the composition of the urban area impacts prediction transferability and the scalability of our framework to unseen scenes. This allows us to define minimal feature and training sets for a problem for which no benchmark data has been released so far.
Abstract. City modeling consists of building a semantic, generalized model of the surface of urban objects, which can be seen as a special case of Boundary representation surfaces. Most modeling methods focus on 3D buildings reconstructed from Very High Resolution overhead data (images and/or 3D point clouds). The literature abundantly addresses 3D mesh processing but frequently ignores the analysis of such models, which requires an efficient representation of 3D buildings. In particular, for them to be used in supervised learning tasks, such a representation should be scalable and transferable to various environments, as only a few reference training instances would be available. In this paper, we propose two solutions that take into account the specificity of 3D urban models, based on graph kernels and on a Scattering Network. They are evaluated in the challenging framework of quality evaluation of building models, formulated as a supervised multilabel classification problem where error labels are predicted at building level. The experiments show strong and complementary results for both feature extraction strategies (F-score > 74% for most labels). Transferability of the classification is also examined in order to assess the scalability of the evaluation process, yielding very encouraging scores (F-score > 86% for most labels).
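To make the graph-kernel idea above concrete, here is a toy sketch: a building surface can be encoded as a facet-adjacency graph, and two buildings compared through a kernel on those graphs. The specific kernel used here (a normalized degree-histogram comparison) is a simplifying assumption for illustration only, not the kernel the paper employs.

```python
import numpy as np

def degree_histogram_kernel(A1, A2, max_degree=8):
    """Compare two graphs, given as adjacency matrices, via their degree histograms."""
    def hist(A):
        deg = A.sum(axis=1).astype(int)          # degree of each facet node
        h = np.bincount(deg, minlength=max_degree + 1)[: max_degree + 1]
        return h / max(h.sum(), 1)               # normalized histogram
    return float(np.dot(hist(A1), hist(A2)))

# Two toy facet-adjacency graphs: a triangle of facets and a chain of facets.
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
chain = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])

print(degree_histogram_kernel(tri, tri))
print(degree_histogram_kernel(tri, chain))
```

The kernel value is higher for structurally similar graphs, so the resulting Gram matrix can feed a kernel classifier for the multilabel error-prediction task described in the abstract.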