New and constantly developing technologies for acquiring spatial data, such as LiDAR (light detection and ranging), are a source of very large volumes of data. However, such an amount of data is not always needed for developing the most popular LiDAR products: the digital terrain model (DTM) or the digital surface model (DSM). Therefore, in many cases, the number of points is reduced in the pre-processing stage. The degree of reduction is determined by the algorithm used, which should enable the user to obtain a dataset appropriate and optimal for the planned purpose. The aim of this article is to propose a new Optimum Dataset method (OptD method) for the processing of LiDAR point clouds. The OptD method can reduce the number of points in a dataset for specified optimization criteria concerning the characteristics of the generated DTM. The OptD method can be used in two variants: OptD-single (one optimization criterion) and OptD-multi (two or more optimization criteria). The OptD-single method has been thoroughly tested and presented by Błaszczak-Bąk (Acta Geodyn. Geomater. 13/4, 379–86). In this paper the authors discuss the OptD-multi method.
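The abstract above describes OptD-multi only at a high level. As a rough illustration of a multi-criteria reduction loop, the Python sketch below uses a simple grid decimation as the reduction knob and a weighted sum of two assumed criteria (fraction of points retained and a nearest-neighbour height-error proxy). The helpers `grid_reduce` and `height_rmse`, the criteria, and the weights are all illustrative assumptions, not the authors' implementation, which is based on cartographic generalization.

```python
import numpy as np
from scipy.spatial import cKDTree

def grid_reduce(points, cell):
    """Keep the lowest point per square XY cell of side `cell` (metres).
    A simple stand-in for OptD's generalization-based reduction step."""
    cells = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        if key not in cells or p[2] < cells[key][2]:
            cells[key] = p
    return np.array(list(cells.values()))

def height_rmse(full, reduced):
    """Nearest-neighbour height RMSE of the full cloud against the reduced
    set -- a crude proxy for the accuracy of a DTM built from it."""
    _, idx = cKDTree(reduced[:, :2]).query(full[:, :2])
    return float(np.sqrt(np.mean((full[:, 2] - reduced[idx, 2]) ** 2)))

def optd_multi_sketch(points, cells, weights=(0.5, 0.5)):
    """Scan candidate cell sizes and keep the reduction minimizing a
    weighted sum of two smaller-is-better criteria: fraction of points
    retained and the height-error proxy."""
    best, best_score = points, float("inf")
    for cell in cells:
        reduced = grid_reduce(points, cell)
        score = (weights[0] * len(reduced) / len(points)
                 + weights[1] * height_rmse(points, reduced))
        if score < best_score:
            best, best_score = reduced, score
    return best
```

In practice the two criteria would need normalization to comparable scales; the weighted sum shown here is only one possible scalarization of a multi-criteria choice.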
Until now, the optimization of a large dataset acquired by means of laser scanning technology has been understood as reducing the amount of data while finding a satisfactory solution. Generating a Digital Terrain Model on the basis of a reduced dataset does not always lead to the desired results or previously planned goals. Therefore, it is important that the algorithm which reduces a large dataset can find the optimal solution for creating the model. The objective of this paper is to develop and test a new OptD (Optimum Dataset) method for the processing of Airborne Laser Scanning point clouds. The algorithm of this method can reduce the dataset in terms of the number of measuring points for a given criterion, such as the mean error of the Digital Terrain Model.
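For the single-criterion variant described here, one can imagine tuning the degree of reduction until a target DTM mean error is just met. The sketch below, which reuses `grid_reduce` and `height_rmse` from the previous sketch, bisects on the grid cell size; this control loop is an assumption for illustration, not the paper's algorithm.

```python
def optd_single_sketch(points, target_error, lo=0.1, hi=50.0, iters=20):
    """Bisect on the cell size: find the coarsest grid (strongest
    reduction) whose height-error proxy still satisfies `target_error`
    (metres). Reuses grid_reduce() and height_rmse() from above."""
    best = points
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        reduced = grid_reduce(points, mid)
        if height_rmse(points, reduced) <= target_error:
            best, lo = reduced, mid   # criterion met: try reducing harder
        else:
            hi = mid                  # criterion violated: back off
    return best
```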
Cooperative positioning (CP) utilises information sharing among multiple nodes to enable positioning in Global Navigation Satellite System (GNSS)-denied environments. This paper reports the performance of a CP system for pedestrians using Ultra-Wide Band (UWB) technology in GNSS-denied environments. The data set was collected as part of a benchmarking measurement campaign carried out at the Ohio State University in October 2017. Pedestrians were equipped with a variety of sensors, including two different UWB systems, on a specially designed helmet serving as a mobile multi-sensor platform for CP. Different users walked in stop-and-go mode along trajectories with predefined checkpoints and under various challenging environments. In the developed CP network, both Peer-to-Infrastructure (P2I) and Peer-to-Peer (P2P) measurements are used for positioning of the pedestrians. The proposed system achieves decimetre-level accuracies (on average, around 20 cm) in the complete absence of GNSS signals, provided that measurements from infrastructure nodes are available and the network geometry is good. When these conditions are not met, the results show that the average accuracy degrades to the metre level. Further, it is experimentally demonstrated that the inclusion of P2P cooperative range observations further enhances the positioning accuracy and, in extreme cases when only one infrastructure measurement is available, P2P CP may reduce positioning errors by up to 95%. The complete test setup, the methodology for development, and data collection are discussed in this paper. In the next version of this system, additional observations such as Wi-Fi, camera, and other signals of opportunity will be included.
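The abstract does not spell out the estimator, but as a generic illustration of how P2I and P2P ranges enter a position fix, the sketch below runs a Gauss-Newton least-squares adjustment over all available ranges, treating a peer's previously estimated position as just another anchor. The function name, coordinates, and noise-free ranges are assumptions for demonstration only.

```python
import numpy as np

def range_fix(nodes, ranges, x0, iters=10):
    """Gauss-Newton least squares for a 2-D position from UWB ranges.
    `nodes`: (n, 2) known positions of infrastructure anchors and/or
    peers; `ranges`: (n,) measured distances; `x0`: initial guess."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        diff = x - nodes                       # (n, 2) vectors to nodes
        pred = np.linalg.norm(diff, axis=1)    # predicted ranges
        J = diff / pred[:, None]               # Jacobian d(range)/dx
        dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-6:
            break
    return x

# P2I only: three infrastructure anchors, true position near (10, 10)
anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0]])
ranges = np.array([14.14, 22.36, 22.36])
print(range_fix(anchors, ranges, x0=[5.0, 5.0]))

# P2P augmentation: a peer's estimated position and the peer-to-peer
# range simply enter as one more observation row
peer = np.array([[12.0, 20.0]])
print(range_fix(np.vstack([anchors, peer]),
                np.append(ranges, 10.20), x0=[5.0, 5.0]))
```

This also makes the geometry argument concrete: each P2P row adds another direction to the Jacobian, which is what stabilizes the fix when only one infrastructure range is available.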
Autonomous navigation is an important task for unmanned vehicles operating both on the surface and underwater. A sophisticated solution for autonomous navigation without global navigation satellite systems is comparative (terrain-referenced) navigation. We present a method for the fast processing of 3D multibeam sonar data that makes measured depth areas comparable with depth areas from bathymetric electronic navigational charts used as source maps during comparative navigation. Recording the bottom of a channel, river, or lake with a 3D multibeam sonar produces a large number of measuring points. The big dataset from the 3D multibeam sonar is reduced in steps in almost real time. Usually, the whole dataset resulting from a multibeam echo sounder survey is processed. In this work, a new methodology for processing 3D multibeam sonar big data is proposed. This new method is based on the stepwise processing of the dataset with the generation of 3D models and isoline maps. For faster product generation we used the Optimum Dataset method, which has been modified for the purposes of bathymetric data processing. The approach enables detailed examination of the bottom of bodies of water and makes it possible to capture major changes. In addition, the method can detect objects on the bottom, which should be eliminated during the construction of the 3D model. We create and combine partial 3D models based on reduced sets to inspect the bottom of water reservoirs in detail. Analyses were conducted for original and reduced datasets. For both cases, 3D models were generated in variants with and without overlap between them. Tests show that models generated from the reduced dataset are more useful, because significant elements of the measured area become much more visible, and these models can be used in comparative navigation. In fragmentary processing of the data, the presence or absence of overlap between the generated models did not significantly influence their height accuracy; however, model generation was faster for the variants without overlap.
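As a rough sketch of the stepwise idea (slice the swath, reduce each slice, build a partial model, with or without overlap), the code below assumes along-track slicing on the X coordinate, reuses `grid_reduce` from the earlier OptD sketch as the reduction step, and uses SciPy's Delaunay triangulation as the partial-model step. The slicing scheme and parameter values are illustrative, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial import Delaunay

def stepwise_models(points, step, overlap=0.0, cell=0.5):
    """Process a sonar point cloud in along-track slices of width `step`
    (metres): reduce each slice, then triangulate it into a partial TIN.
    `overlap` widens each slice on both sides; overlap=0.0 reproduces
    the no-overlap variant, which the tests found faster to generate."""
    x = points[:, 0]
    lo, stop = float(x.min()), float(x.max())
    models = []
    while lo < stop:
        hi = lo + step
        chunk = points[(x >= lo - overlap) & (x < hi + overlap)]
        if len(chunk):
            reduced = grid_reduce(chunk, cell)       # OptD-style reduction
            if len(reduced) >= 3:                    # Delaunay needs >= 3 pts
                models.append(Delaunay(reduced[:, :2]))
        lo = hi
    return models
```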
ALS point cloud filtering involves the separation of observations representing the physical terrain surface from those representing terrain details. A digital terrain model (DTM) is created from a subset of points representing the ground surface. The accuracy of the generated DTM is influenced by several factors, including the survey method used, the accuracy of the source data, the applied DTM generation algorithm, and the survey conditions. This article proposes the use of a new estimation method in the filtering of point clouds obtained from airborne laser scanning (ALS), called Msplit estimation. The application of Msplit estimation in ALS data filtering requires the determination of an appropriate functional model for the surface, which is used in the filtering of the set of points. A polynomial terrain surface model was selected for this purpose. Two methods of filtering using the Msplit method are presented. The first is based on the estimated parameters of the polynomial describing the surface (called the 'quality' approach in this article). The second method (provisionally called the 'quantity' method) is carried out in two stages. The first stage is point cloud filtering, which results in two subsets: one of points intended for DTM creation, and one containing the remaining points. The second stage is the creation of a DTM from the first subset. Since the Msplit method has an analytical character, the ATIN method was selected to verify its correct operation. The ATIN method is based on computational geometry and uses repeated Delaunay triangulation and statistical evaluation of the geometric parameters. Comparing Msplit with a method based on different principles mitigates errors arising from similarly functioning methods belonging to the same group of filters. The choice of the ATIN method was also dictated by its established position among filtering algorithms: the method is well known, documented, and verified, which ensures that filtering by this method provides a reliable result that can serve as a reference for comparison with the proposed new filtering method. The theoretical discussion presented in this article was verified with two practical examples. The results obtained from computation by the Msplit method with appropriate terrain models encourage more detailed theoretical and empirical tests of this method for the filtering and segmentation of ALS datasets.
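Msplit estimation splits one observation set between two competing functional models. The real estimator minimizes a dedicated split objective function; as a loose analogue of the 'quantity' approach, the sketch below instead alternates between assigning each point to the closer of two polynomial surfaces and refitting each surface by ordinary least squares. The initialization, polynomial degree, and hard assignment are simplifying assumptions, not the estimator from the paper.

```python
import numpy as np

def poly_design(xy, deg=2):
    """Bivariate polynomial design matrix up to total degree `deg`."""
    x, y = xy[:, 0], xy[:, 1]
    cols = [x**i * y**j for i in range(deg + 1) for j in range(deg + 1 - i)]
    return np.column_stack(cols)

def split_fit(points, deg=2, iters=20):
    """Alternating two-surface fit: assign points to the closer of two
    polynomial surfaces (candidate ground vs. off-ground), refit, repeat.
    Returns a boolean mask of candidate ground points."""
    A = poly_design(points[:, :2], deg)
    z = points[:, 2]
    labels = (z > np.median(z)).astype(int)   # crude start: lower half = ground
    params = [np.zeros(A.shape[1]), np.zeros(A.shape[1])]
    for _ in range(iters):
        for k in (0, 1):
            mask = labels == k
            if mask.sum() >= A.shape[1]:      # keep old fit if class too small
                params[k], *_ = np.linalg.lstsq(A[mask], z[mask], rcond=None)
        res = np.abs(z[:, None] - A @ np.column_stack(params))  # (n, 2)
        new = res.argmin(axis=1)              # reassign to nearer surface
        if np.array_equal(new, labels):
            break
        labels = new
    return labels == 0
```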