In the context of rapid urbanization, monitoring the evolution of cities is crucial. 3D change detection and characterization is essential to this task since, unlike 2D images, 3D data contain the vertical information needed to monitor city evolution, which occurs along both horizontal and vertical axes. Urban 3D change detection has thus received growing attention, and various methods have been published on the topic. Nevertheless, no quantitative comparison on a public dataset has been reported yet. This study presents an experimental comparison of six methods: three traditional (difference of DSMs, C2C, and M3C2), one machine learning method with hand-crafted features (a random forest model with a stability feature), and two deep learning methods (feed-forward and Siamese architectures). To compare these methods, we prepared five sub-datasets containing simulated pairs of 3D annotated point clouds with different characteristics: from high to low resolution, with various levels of noise. The methods were tested on each sub-dataset for binary and multi-class segmentation. For supervised methods, we also assessed the transfer learning capacity and the influence of the training set size. The methods we used provide various kinds of results (2D pixels, 2D patches, or 3D points), and each of them is affected by the resolution of the point clouds. However, while the performance of deep learning methods depends heavily on the size of the training set, they seem less affected by training on datasets with different characteristics. Conversely, conventional machine learning methods exhibit stable results, even with smaller training sets, but show limited transfer learning capacity. While the main changes in our datasets were usually identified, there were still numerous instances of false detection, especially in dense urban areas, thereby calling for further development in this field.
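The simplest of the compared baselines, the difference of DSMs, can be sketched in a few lines: rasterize each point cloud into a digital surface model and threshold the height difference. This is a minimal illustration, not the paper's implementation; the function names (`rasterize_dsm`, `dsm_change_mask`), the 1 m cell size, and the 2 m change threshold are hypothetical choices.

```python
import numpy as np

def rasterize_dsm(points, grid_size, cell=1.0):
    """Rasterize an (N, 3) point cloud into a DSM, keeping the max z per cell."""
    dsm = np.full((grid_size, grid_size), np.nan)
    ix = np.clip((points[:, 0] / cell).astype(int), 0, grid_size - 1)
    iy = np.clip((points[:, 1] / cell).astype(int), 0, grid_size - 1)
    for x, y, z in zip(ix, iy, points[:, 2]):
        if np.isnan(dsm[y, x]) or z > dsm[y, x]:
            dsm[y, x] = z
    return dsm

def dsm_change_mask(pc_t1, pc_t2, grid_size, threshold=2.0):
    """Binary change mask: cells where |DSM_t2 - DSM_t1| exceeds the threshold."""
    d1 = rasterize_dsm(pc_t1, grid_size)
    d2 = rasterize_dsm(pc_t2, grid_size)
    return np.abs(d2 - d1) > threshold

# Toy example: flat ground at t1, a 10 m structure appears in one cell at t2.
ground = np.array([[x + 0.5, y + 0.5, 0.0] for x in range(4) for y in range(4)])
after = np.vstack([ground, [[0.5, 0.5, 10.0]]])
mask = dsm_change_mask(ground, after, grid_size=4)
```

The threshold value controls the trade-off between sensitivity to small vertical changes and robustness to point-cloud noise, which is why the abstract's low-resolution and noisy sub-datasets are challenging for this family of methods.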
To assist such developments, we provide a public dataset composed of pairs of point clouds with different qualities together with their change-related annotations. This dataset was built with an original simulation tool which allows one to generate bi-temporal urban point clouds under various conditions.
A microwave emissivity retrieval is applied to five years of Global Precipitation Measurement (GPM) Microwave Imager (GMI) observations over land and sea ice. The emissivities are co-located with GPM's Dual-frequency Precipitation Radar (DPR) surface backscatter measurements in clear-sky conditions. The emissivity-backscatter database is used to characterize surfaces within the GPM orbit for precipitation retrieval algorithms and other applications. The full 10-166 GHz emissivity vector is retrieved using optimal estimation. Since GMI includes water vapor sounding channels, retrieval of the atmospheric and surface state is performed simultaneously. Using the MERRA-2 reanalysis as the a priori atmospheric state and with proper characterization of its error, we are able to effectively screen for cloud- and precipitation-affected emissivities. Comparisons with co-located CloudSat data show that this GMI-based screen is able to detect precipitation that DPR alone does not; however, about 10% of precipitation occurrence from CloudSat is still undetected by GMI. The unsupervised Kohonen classification technique is then applied to multi-year monthly 0.25° gridded mean retrieved emissivities and backscatter, separately for snow-free, snow-covered, and sea ice surfaces, in order to classify surfaces based on both active and passive microwave characteristics. The classes correspond to vegetation coverage and type, inundation zones, soil composition, and terrain roughness. Snow and sea ice surfaces show clear seasonal cycles representing the increase in snow and ice spatial extent and its reduction in the spring. Applications toward GPM precipitation retrieval algorithms and sensitivity to accumulated rain and snowfall are also explored.
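The Kohonen classification mentioned above is a self-organizing map (SOM): input vectors are mapped to a small grid of nodes whose weights are iteratively pulled toward the data, and each node becomes a surface class. The NumPy sketch below is a minimal illustration of the technique, not the study's implementation; the function names, grid size, learning-rate schedule, and synthetic data (standing in for gridded emissivity/backscatter vectors) are all hypothetical.

```python
import numpy as np

def train_som(data, grid=(2, 2), epochs=20, lr0=0.5, sigma0=1.0, seed=0):
    """Train a minimal Kohonen SOM on (N, D) data; returns node weights (gx*gy, D)."""
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    w = rng.normal(size=(n_nodes, data.shape[1]))
    # Grid coordinates of each node, used for the neighborhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)    # pull BMU and neighbors toward x
    return w

def classify(data, w):
    """Assign each sample to its best-matching SOM node (its surface class)."""
    return np.argmin(((data[:, None, :] - w[None, :, :]) ** 2).sum(axis=2), axis=1)
```

In a real application the input vectors would be the monthly-mean retrieved emissivities and backscatter per grid cell, and the resulting node assignments yield the surface classes (vegetation, inundation, soil, roughness) described in the abstract.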