Since the 1970s, land subsidence has developed rapidly on the Beijing Plain, and a systematic study of its evolutionary mechanism is of great significance for the sustainable development of the regional economy. On the basis of Interferometric Synthetic Aperture Radar (InSAR) results, this study employed the Mann–Kendall method for the first time to detect abrupt changes (mutations) in land subsidence on the Beijing Plain from 2004 to 2015. By combining hydrogeological conditions, data on the South-to-North Water Diversion ("southern water") Project, and other information, we attempted to analyse the causes of these land subsidence mutations. First, on the basis of ENVISAT ASAR and RADARSAT-2 data, land subsidence on the Beijing Plain was determined using small baseline subset interferometry (SBAS-InSAR) and Persistent Scatterer Interferometry (PSI). Second, on a Geographic Information System (GIS) platform, vector displacement data were obtained at different scales. Through a series of tests, a scale of 960 metres was selected as the research unit, and the displacement rate from 2004 to 2015 was obtained. Finally, a trend analysis of land subsidence was carried out on the basis of the Mann–Kendall mutation test. The results showed that single-year mutations were mainly distributed in the middle and lower parts of the Yongding River and Chaobai River alluvial fans. The greatest numbers of mutations occurred in 2015 and 2005, at 1344 and 915, respectively. The upper and middle Chaobai River alluvial fan, the vicinity of the emergency water sources, and the edge of the groundwater funnel underwent several mutations. Combining hydrogeological data for the study area with the impact of the South-to-North Water Diversion Project, we analysed the causes of these mutations.
The experimental results quantitatively verify the mutation information of land subsidence in conjunction with the time series, further elucidating the spatio-temporal variation characteristics of land subsidence in the study area.
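The sequential Mann–Kendall (UF/UB) statistics that underlie this kind of mutation test can be sketched as follows. This is a generic illustration of the standard statistic, not the paper's implementation; the function name and the 95% confidence threshold mentioned in the comments are our own choices.

```python
import math

def sequential_mann_kendall(x):
    """Sequential Mann-Kendall (UF/UB) statistics for abrupt-change detection.

    Returns (uf, ub); an intersection of the two curves inside the
    confidence band (e.g. +/-1.96 for 95%) marks a candidate change point.
    """
    n = len(x)

    def uf_curve(series):
        uf = [0.0]
        s = 0
        for k in range(1, n):
            # count how many earlier values the new sample exceeds
            s += sum(1 for j in range(k) if series[k] > series[j])
            k1 = k + 1  # number of samples seen so far
            mean = k1 * (k1 - 1) / 4.0
            var = k1 * (k1 - 1) * (2 * k1 + 5) / 72.0
            uf.append((s - mean) / math.sqrt(var))
        return uf

    uf = uf_curve(x)
    # UB is the negated, reversed statistic of the time-reversed series
    ub = [-u for u in reversed(uf_curve(list(reversed(x))))]
    return uf, ub
```

For a series with a genuine trend change, the point where the UF and UB curves cross inside the confidence band is reported as the mutation year.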
In close-range and unmanned aerial vehicle (UAV) photogrammetry, Schneider concentric circular coded targets (SCTs), whose design is public, are widely used for image matching and as ground control points. GSI point-distributed coded targets (GCTs), which are mainly applied in the video-simultaneous triangulation and resection system (V-STARS), are non-public and rarely applied in UAV photogrammetry. In this paper, we present a detailed, innovative solution for identifying GCTs. First, we analyze the structure of a GCT. Then, a special 2D P2-invariant of five coplanar points, derived from cross ratios, is adopted for template point registration and identification. Finally, an affine transformation is used for decoding. Experiments were carried out indoors (viewing angles ranging from 0° to 80° with 6 mm-diameter GCTs, smaller 3 mm-diameter GCTs, and mixed sizes) and outdoors in challenging scenes. Compared with V-STARS, the indoor results show that the proposed method preserves robustness and achieves a high identification accuracy when the viewing angle is no larger than 65°, and on the whole it achieves effectiveness approximating, or slightly weaker than, that of V-STARS. We also carried out a preliminary experiment extending the designed GCTs to UAV photogrammetry. This paper demonstrates that GCTs can be designed, printed, and identified easily with our method. The proposed method may prove helpful for image matching, camera calibration, camera orientation, and 3D measurement, or for serving as control points in UAV photogrammetry in scenarios with complex structures.
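The projective invariance that makes five coplanar template points usable for registration can be illustrated with a minimal sketch. The particular ratio of point-triple determinants below is one standard P2-invariant of five coplanar points (no three collinear); the point configuration and function names are illustrative assumptions, not GSI's actual encoding.

```python
def det3(p, q, r):
    """Twice the signed area of triangle (p, q, r), i.e. the determinant
    of the three points in homogeneous form (x, y, 1)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])

def p2_invariant(p1, p2, p3, p4, p5):
    """A projective invariant of five coplanar points.

    Each point appears the same number of times in the numerator and the
    denominator, so the transformation factors cancel and the value is
    unchanged under projective (hence also affine) mappings.
    """
    return (det3(p1, p2, p3) * det3(p1, p4, p5)) / \
           (det3(p1, p2, p4) * det3(p1, p3, p5))
```

Because the value survives the camera's perspective mapping, it can be matched against a precomputed template value to identify which coded point group is being observed.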
Airborne laser scanning (ALS) point cloud classification is challenging owing to complex scene structure, varying point densities, surface morphology, and the number of ground objects. This paper presents a point cloud classification method based on content-sensitive multilevel objects (point clusters) that takes the density distribution of ground objects into account. A space projection method first converts the three-dimensional point cloud into a two-dimensional (2D) image. The image is then mapped to a 2D manifold space, and a restricted centroidal Voronoi tessellation is built for the initial segmentation of content-sensitive point clusters. The segmentation results thus take entity content (density distribution) into account, and the initial classification unit is adapted to the density of ground objects. A normalized cut is then used to segment the initial point clusters into content-sensitive multilevel point clusters. Following this, the point-based hierarchical features of each point cluster are extracted, and the multilevel point-cluster feature is constructed with sparse coding and latent Dirichlet allocation models. Finally, a hierarchical classification framework is created based on the multilevel point-cluster features, and the AdaBoost classifiers at each level are trained. During testing, the recognition results of the different levels are combined to effectively improve the classification accuracy of the ALS point cloud. The method is evaluated experimentally on two scenes and compared with three other state-of-the-art techniques.
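The first step, projecting the 3D cloud into a 2D image, can be sketched minimally as a planar density grid. The gridding scheme, cell size, and function name below are our own assumptions for illustration; the paper's actual space projection and manifold mapping are more involved.

```python
def project_to_density_image(points, cell_size):
    """Project a 3D point cloud onto the x-y plane as a 2D density image.

    points: iterable of (x, y, z) tuples.  Returns (grid, origin), where
    grid[row][col] counts the points falling in each square cell and
    origin is the (min_x, min_y) corner of the grid.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    cols = int((max(xs) - min_x) / cell_size) + 1
    rows = int((max(ys) - min_y) / cell_size) + 1
    grid = [[0] * cols for _ in range(rows)]
    for x, y, _z in points:
        # z is discarded: only planimetric density reaches the 2D image
        grid[int((y - min_y) / cell_size)][int((x - min_x) / cell_size)] += 1
    return grid, (min_x, min_y)
```

A density image like this is what makes a content-sensitive tessellation possible: cells with many points attract smaller, denser segmentation units.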
Large-scale 3D point clouds are rich in geometric shape and scale information, but they are also scattered, disordered, and unevenly distributed. These characteristics make learning point cloud semantic segmentation difficult. Although many works perform well on this task, most lack research on spatial information, which limits their ability to learn and understand the complex geometric structure of point cloud scenes. To this end, we propose the multispatial information and dual adaptive (MSIDA) module, which consists of a multispatial information encoding (MSI) block and dual adaptive (DA) blocks. The MSI block transforms the relative position of each centre point and its neighbouring points into cylindrical and spherical coordinate systems, so that the spatial information among the points can be re-represented and encoded. The DA blocks comprise a Coordinate System Attention Pooling Fusion (CSAPF) block and a Local Aggregated Feature Attention (LAFA) block. The CSAPF block weights and fuses the local features in the three coordinate systems to further learn local features, while the LAFA block weights the local aggregated features in the three coordinate systems to better understand the scene in the local region. To test the performance of the proposed method, we conducted experiments on the S3DIS, Semantic3D, and SemanticKITTI datasets and compared the method with other networks. It achieved mean Intersection over Union (mIoU) scores of 73%, 77.8%, and 59.8% on S3DIS, Semantic3D, and SemanticKITTI, respectively.
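The coordinate re-representation performed by the MSI block can be illustrated with a minimal sketch of the underlying transforms. The function name and output layout are our own; the actual MSI block additionally encodes these values with learned layers rather than returning raw coordinates.

```python
import math

def encode_relative_position(center, neighbor):
    """Re-represent a neighbour's offset from a centre point in Cartesian,
    cylindrical (rho, phi, z), and spherical (r, theta, phi) coordinates."""
    dx = neighbor[0] - center[0]
    dy = neighbor[1] - center[1]
    dz = neighbor[2] - center[2]
    rho = math.hypot(dx, dy)                  # cylindrical radius in the x-y plane
    phi = math.atan2(dy, dx)                  # azimuth, shared by both systems
    r = math.sqrt(dx * dx + dy * dy + dz * dz)  # spherical radius
    theta = math.atan2(rho, dz)               # polar angle from the +z axis
    return {
        "cartesian": (dx, dy, dz),
        "cylindrical": (rho, phi, dz),
        "spherical": (r, theta, phi),
    }
```

Expressing the same offset in three coordinate systems is what gives the downstream attention blocks three complementary views of local geometry to weight and fuse.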