Three-dimensional digital models play a pivotal role in city planning, monitoring, and sustainable management of smart and Digital Twin Cities (DTCs). In this context, semantic segmentation of airborne 3D point clouds is crucial for modeling, simulating, and understanding large-scale urban environments. Previous studies have demonstrated that the performance of 3D semantic segmentation can be improved by fusing 3D point clouds with other data sources. In this paper, a new prior-level fusion approach is proposed for semantic segmentation of large-scale urban areas using optical images and point clouds. The proposed approach uses the image classification obtained by the Maximum Likelihood Classifier as prior knowledge for 3D semantic segmentation. The raster values from the classified images are then assigned to the Lidar point clouds in the data preparation step. Finally, an advanced Deep Learning model (RandLaNet) is adopted to perform the 3D semantic segmentation. The results show that the proposed approach performs well in terms of both evaluation metrics and visual examination, achieving a higher Intersection over Union (96%) on the created dataset, compared with 92% for the non-fusion approach.
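The data preparation step described above, transferring per-pixel class labels from the classified image onto the Lidar points, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function and parameter names are assumptions, and it assumes the raster and point cloud share a coordinate system:

```python
import numpy as np

def assign_raster_classes(points_xy, class_raster, origin, pixel_size):
    """Assign each Lidar point the class label of the raster pixel it falls in.

    points_xy    : (N, 2) array of point X/Y coordinates
    class_raster : (rows, cols) array of per-pixel class labels
    origin       : (x_min, y_max) of the raster's upper-left corner
    pixel_size   : ground sampling distance, in the same units as the points
    """
    cols = ((points_xy[:, 0] - origin[0]) / pixel_size).astype(int)
    rows = ((origin[1] - points_xy[:, 1]) / pixel_size).astype(int)
    # Clamp points that fall marginally outside the raster footprint
    rows = np.clip(rows, 0, class_raster.shape[0] - 1)
    cols = np.clip(cols, 0, class_raster.shape[1] - 1)
    return class_raster[rows, cols]
```

The returned labels would then be appended as an extra per-point attribute before the cloud is fed to the segmentation network.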
Change detection is an important step in characterizing object dynamics at the earth's surface. In multi-temporal point clouds, the main challenge is to detect true changes at different granularities in a scene subject to significant noise and occlusion. To better understand new research perspectives in this field, a deep review of recent advances in 3D change detection methods is needed. To this end, we present a comprehensive review of the state of the art of 3D change detection approaches, mainly those using 3D point clouds. We review standard methods and recent advances in the use of machine and deep learning for change detection. In addition, the paper presents a summary of 3D point cloud benchmark datasets from different sensors (aerial, mobile, and static), together with associated information. We also investigate representative evaluation metrics for this task. Finally, we present open questions and research perspectives. By reviewing the relevant papers in the field, we highlight the potential of bi- and multi-temporal point clouds for improved monitoring analysis across various applications.
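Among the standard methods the review covers, the simplest baseline is a cloud-to-cloud (C2C) comparison: a point in the later epoch is flagged as changed when its nearest neighbour in the earlier epoch is farther than a distance threshold. A minimal sketch (illustrative only; function names and the fixed threshold are assumptions, and real pipelines must also handle noise and occlusion, as noted above):

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_change_mask(epoch1, epoch2, threshold):
    """Flag points of epoch2 whose nearest neighbour in epoch1 lies
    farther than `threshold` (a basic cloud-to-cloud comparison).

    epoch1, epoch2 : (N, 3) and (M, 3) arrays of XYZ coordinates
    threshold      : distance above which a point counts as changed
    """
    tree = cKDTree(epoch1)
    dist, _ = tree.query(epoch2, k=1)
    return dist > threshold
```

Such a fixed global threshold is exactly what struggles with the varying granularities of change the abstract mentions, which motivates the learning-based methods surveyed in the paper.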
Semantic segmentation in a large-scale urban environment is crucial for a deep and rigorous understanding of urban environments. The development of Lidar tools in terms of resolution and precision offers a good opportunity to meet the need for 3D city models. In this context, deep learning has revolutionized the field of computer vision and demonstrates strong performance in semantic segmentation. To achieve this objective, we propose to design a scientific methodology involving a deep learning method that integrates several data sources (Lidar data, aerial images, etc.) to recognize objects semantically and automatically. We aim to automatically extract the maximum amount of semantic information in an urban environment with high accuracy and performance.
Semantic segmentation of Lidar data using Deep Learning (DL) is a fundamental step for a deep and rigorous understanding of large-scale urban areas. Indeed, the increasing development of Lidar technology in terms of accuracy and spatial resolution offers a great opportunity for delivering reliable semantic segmentation in large-scale urban environments. Significant progress has been reported in this direction. However, the literature lacks a deep comparison of the existing methods and algorithms in terms of strengths and weaknesses. The aim of the present paper is therefore to propose an objective review of these methods, highlighting their strengths and limitations. We then propose a new approach based on the combination of Lidar data and other sources in conjunction with a Deep Learning technique, whose objective is to automatically extract semantic information from airborne Lidar point clouds while enhancing both accuracy and semantic precision compared to existing methods. We finally present the first results of our approach.
Digital Twin Cities (DTCs) play a fundamental role in city planning and management. They allow three-dimensional modeling and simulation of cities. 3D semantic segmentation is the foundation for automatically creating enriched DTCs, as well as for their updates. Past studies indicate that prior-level fusion approaches achieve more promising precision in 3D semantic segmentation than the point-level, feature-level, and decision-level fusion families. In order to improve enriched point cloud semantic segmentation outcomes, this article proposes a new approach for 3D point cloud semantic segmentation by developing and benchmarking three prior-level fusion scenarios. A reference approach based on point clouds and aerial images was proposed for comparison with the developed scenarios. In each scenario, we inject a specific prior knowledge (geometric features, classified images, etc.) and aerial images as attributes of the point clouds into the neural network's learning pipeline. The objective is to find the scenario that integrates the most significant prior knowledge and enhances the neural network's knowledge most profoundly, which we have named the "smart fusion approach". The advanced Deep Learning algorithm "RandLaNet" was adopted to implement the different proposed scenarios and the reference approach, due to its excellent performance demonstrated in the literature. The introduction of significant features associated with the label classes facilitated the learning process and improved the semantic segmentation results beyond what the same neural network can achieve alone. Overall, our contribution provides a promising solution to several challenges, in particular the more accurate extraction of semantically rich objects from the urban fabric. An assessment of the semantic segmentation results obtained by the different scenarios is performed based on metrics computation and visual investigations. Finally, the smart fusion approach was derived from the obtained qualitative and quantitative results.
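The geometric features mentioned as one kind of injectable prior knowledge are commonly derived from the eigenvalues of each point's local covariance matrix. A sketch of how such per-point attributes could be computed before being appended to the network's input (illustrative only; the abstract does not specify which features or neighbourhood size were used, so the feature set, `k`, and function names here are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def covariance_features(points, k=10):
    """Per-point linearity and planarity from the sorted eigenvalues
    of the local covariance matrix over the k nearest neighbours.

    points : (N, 3) array of XYZ coordinates
    returns: (N, 2) array of [linearity, planarity] per point
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 2))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # λ1 ≥ λ2 ≥ λ3
        lam = np.maximum(lam, 1e-12)                  # guard degenerate cases
        feats[i, 0] = (lam[0] - lam[1]) / lam[0]      # linearity
        feats[i, 1] = (lam[1] - lam[2]) / lam[0]      # planarity
    return feats
```

Stacking such columns alongside XYZ and image-derived attributes is one concrete way the per-point prior knowledge described in the scenarios can enter the learning pipeline.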