This paper proposes two robust statistical techniques for outlier detection and for robust estimation of saliency features, such as surface normals and curvature, in laser scanning 3D point cloud data. One is based on a robust z-score and the other uses a Mahalanobis-type robust distance. The methods couple the ideas of point-to-plane orthogonal distance and local surface point consistency to achieve Maximum Consistency with Minimum Distance (MCMD). The methods estimate the best-fit plane based on the most probable outlier-free, and most consistent, set of points in a local neighbourhood. The normal and curvature from this best-fit plane are then highly robust to noise and outliers. Experiments are performed to show the performance of the algorithms compared to several existing well-known methods (from computer vision, data mining, machine learning and statistics) using synthetic and real laser scanning datasets of complex (planar and non-planar) objects. Results for plane fitting, denoising, sharp feature preserving and segmentation are significantly improved. The algorithms are demonstrated to be significantly faster, more accurate and robust. Quantitatively, for a sample size of 50 with 20% outliers, the proposed MCMD_Z is approximately 5, 15 and 98 times faster than the existing methods uLSIF, RANSAC and RPCA, respectively. The proposed MCMD_MD method can tolerate 75% clustered outliers, whereas RPCA and RANSAC can only tolerate 47% and 64% outliers, respectively. In terms of outlier detection, for the same dataset, MCMD_Z has an accuracy of 99.72%, a 0.4% false positive rate and a 0% false negative rate; for RPCA, RANSAC and uLSIF, the accuracies are 97.05%, 47.06% and 94.54%, respectively, and they have misclassification rates higher than the proposed methods. The new methods have potential for local surface reconstruction, fitting, and other point cloud processing tasks.
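The robust z-score idea underlying MCMD_Z can be illustrated with a minimal sketch (not the authors' full algorithm): given point-to-plane orthogonal distances for a local neighbourhood, a median/MAD-based z-score flags gross outliers. The function name and cutoff value here are illustrative.

```python
import numpy as np

def robust_z_outliers(distances, cutoff=2.5):
    """Flag outliers via a robust z-score based on median and MAD.

    `distances` are point-to-plane orthogonal distances for a local
    neighbourhood; the 1.4826 factor makes the MAD consistent with
    the standard deviation under Gaussian noise.
    """
    d = np.asarray(distances, dtype=float)
    med = np.median(d)
    mad = 1.4826 * np.median(np.abs(d - med))
    if mad == 0:                      # degenerate: all points coplanar
        return np.zeros(d.shape, dtype=bool)
    z = (d - med) / mad
    return np.abs(z) > cutoff

# Example: millimetre-scale noise plus two gross outliers
d = np.array([0.01, -0.02, 0.00, 0.015, -0.01, 0.9, -1.2])
print(robust_z_outliers(d))  # flags only the last two distances
```

Unlike a classical z-score, the median and MAD are themselves insensitive to the outliers, so the score does not break down when the contaminated points inflate the mean and standard deviation.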
Segmentation is one of the most important intermediate steps in point cloud data processing and understanding. Covariance-statistics-based local saliency features from Principal Component Analysis (PCA) are frequently used for point cloud segmentation. However, it is well known that PCA is sensitive to outliers, so segmentation results can be erroneous and unreliable. This paper investigates the problems of surface segmentation in laser scanning point cloud data. We propose a region-growing-based, statistically robust segmentation algorithm that uses a recently introduced fast Minimum Covariance Determinant (MCD) based robust PCA approach. Experiments on several real laser scanning datasets show that PCA gives unreliable and non-robust results, whereas the proposed robust PCA based method gives more accurate and robust results for planar and non-planar smooth surface segmentation.
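For context, the classical covariance-based saliency features that the MCD approach makes robust can be sketched as follows. The normal is the eigenvector of the smallest covariance eigenvalue and the curvature proxy is λ₀/(λ₀+λ₁+λ₂); the function name is illustrative, and swapping `np.cov` for an MCD-based covariance estimate is what robustifies it.

```python
import numpy as np

def pca_saliency(points):
    """Surface normal and curvature from the covariance of a local
    neighbourhood (classical, non-robust PCA baseline).

    The normal is the eigenvector of the smallest eigenvalue; the
    curvature proxy is lambda_0 / (lambda_0 + lambda_1 + lambda_2).
    """
    P = np.asarray(points, dtype=float)
    C = np.cov(P, rowvar=False)
    evals, evecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    normal = evecs[:, 0]
    curvature = evals[0] / evals.sum()
    return normal, curvature

# Points on the plane z = 0 with tiny jitter -> normal close to (0, 0, +/-1)
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 50),
                       rng.uniform(-1, 1, 50),
                       rng.normal(0, 1e-3, 50)])
n, k = pca_saliency(pts)
```

A single gross outlier in `pts` can tilt `n` by tens of degrees, which is exactly the sensitivity the abstract describes.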
Waikato samples were pre-treated following standard AMS protocols (UCI KCCAMS, 2011a, b). Following pre-treatment, charcoal samples (∼2 mm fragments) were converted to CO2 in sealed quartz tubes by oxidation at 800°C, using pre-baked CuO in the presence of silver wire to absorb any SOx and NOx produced. Shell fragments (< 3 mm, 35-45 mg) were etched in 0.1M HCl at 80°C to remove ∼45% of the surface. Cleaned shells were then tested for recrystallization by Feigl staining (Friedman, 1959) to ensure that either aragonite, or a natural aragonite/calcite distribution, was present in the shell (e.g. Nerita sp.). CO2 was collected from shells by reaction with 85% H3PO4. Cryogenically separated CO2 was then reduced to graphite with H2 at 550°C using an iron catalyst. δ13C was measured either on an LGR isotope analyser CCIA-46EP or a Thermo Scientific MAT252 IRMS. Pressed graphite was analysed at the Keck Radiocarbon Dating Laboratory, University of California, on a NEC 0.5MV 1.5SDH-2 AMS system (Southon et al., 2004). At ANSTO, after visual inspection for the presence of any powdery, potentially extraneous, calcite deposition, shell surfaces were physically cleaned by abrading 10-25% of their thickness with a Dremel® tool, followed by chemical etching of another 10% with 0.5M HCl for 1-5 minutes under sonication at room temperature (Hua et al., 2001). Feigl
This paper proposes robust methods for local planar surface fitting in 3D laser scan data. A search of the literature revealed that many authors use Least Squares (LS) and Principal Component Analysis (PCA) for point cloud processing without any treatment of outliers. It is known that LS and PCA are sensitive to outliers and can give inconsistent and misleading estimates. RANdom SAmple Consensus (RANSAC) is one of the most well-known robust methods used for model fitting when noise and outliers are present. We concentrate on the recently introduced Deterministic Minimum Covariance Determinant estimator and robust PCA, and propose two variants of statistically robust algorithms for fitting planar surfaces to 3D laser scanning point cloud data. The performance of the proposed robust methods is demonstrated by qualitative and quantitative analysis of several synthetic and mobile laser scanning 3D datasets for different applications. Using simulated data, and comparisons with LS, PCA, RANSAC, variants of RANSAC and other robust statistical methods, we demonstrate that the new algorithms are significantly more efficient, faster, and produce more accurate fits and robust local statistics (e.g., surface normals), necessary for many point cloud processing tasks. In one example dataset consisting of 100 points with 20% outliers representing a plane, the proposed methods, called DetRD-PCA and DetRPCA, produce bias angles (the angle between the planes fitted with and without outliers) of 0.20° and 0.24°, respectively, whereas LS, PCA and RANSAC produce worse bias angles of 52.49°, 39.55° and 0.79°, respectively. In terms of speed, DetRD-PCA takes 0.033 s on average to fit a plane, which is approximately 6.5, 25.4 and 25.8 times faster than RANSAC and two other robust statistical methods, respectively.
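The bias-angle metric quoted above can be reproduced with a short sketch on synthetic data. The SVD-based fit below stands in for the plain LS/PCA baseline (not the proposed DetRD-PCA/DetRPCA estimators), and the data and function names are illustrative.

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane normal via SVD of the centred points."""
    P = np.asarray(points, dtype=float)
    P = P - P.mean(axis=0)
    _, _, vt = np.linalg.svd(P, full_matrices=False)
    return vt[-1]          # direction of smallest variance

def bias_angle_deg(n1, n2):
    """Angle (degrees) between two plane normals, sign-invariant."""
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

rng = np.random.default_rng(1)
clean = np.column_stack([rng.uniform(-1, 1, 80),
                         rng.uniform(-1, 1, 80),
                         rng.normal(0, 0.01, 80)])
outliers = rng.uniform(2, 3, (20, 3))            # 20% clustered outliers
n_clean = fit_plane_normal(clean)
n_contam = fit_plane_normal(np.vstack([clean, outliers]))
print(bias_angle_deg(n_clean, n_contam))         # large for plain LS/PCA
```

With 20% clustered outliers the non-robust fit tilts by tens of degrees, mirroring the 39-52° bias angles the abstract reports for LS and PCA.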
The estimated robust surface normals and curvatures from the new methods have been used for plane fitting, sharp feature preservation and segmentation in 3D point clouds obtained from laser scanners. The results are significantly better, and more efficiently computed, than for existing methods.
The last decade has seen an exponential increase in the application of unmanned aerial vehicles (UAVs) to ecological monitoring research, though with little standardisation or comparability in methodological approaches and research aims. We reviewed the international peer-reviewed literature in order to explore the potential limits on the feasibility of UAV use in the monitoring of ecological restoration, and examined how they might be mitigated to maximise the quality, reliability and comparability of UAV-generated data. We found little evidence of translational research applying UAV-based approaches to ecological restoration, with less than 7% of 2133 published UAV monitoring studies centred on ecological restoration. Of these 48 studies, more than 65% had been published in the three years preceding this study. Where studies utilised UAVs for rehabilitation or restoration applications, there was a strong propensity for single-sensor monitoring using commercially available RPAs fitted with modest-resolution RGB sensors. There was a strong positive correlation between the use of complex and expensive sensors (e.g., LiDAR, thermal cameras, hyperspectral sensors) and the complexity of the chosen image classification techniques (e.g., machine learning), suggesting that cost remains a primary constraint on the wide application of multiple or complex sensors in UAV-based research.
We propose that if UAV-acquired data are to represent the future of ecological monitoring, research requires a) consistency in the proven application of different platforms and sensors to the monitoring of target landforms, organisms and ecosystems, underpinned by clearly articulated monitoring goals and outcomes; b) optimization of data analysis techniques and of the manner in which data are reported, undertaken in cross-disciplinary partnership with fields such as bioinformatics and machine learning; and c) the development of sound, reasonable and multilaterally homogeneous regulatory and policy frameworks supporting the application of UAVs to the large-scale and potentially trans-disciplinary ecological applications of the future.
While laser scanning has traditionally been used in surveying and photogrammetry, it is increasingly being applied to a wider range of more general applications. In addition to the issues typically associated with processing point data, such applications raise a number of new complications, such as the complexity of the scanned scenes and the sheer volume of data. Consequently, automated procedures are required for processing and analysing such data. This paper introduces a method for modelling multi-modal, geometrically complex objects in terrestrial laser scanning point data; specifically, the modelling of trees. The method combines a number of geometric features with a multi-modal machine learning technique. The model can then be used for contextually dependent region growing, separating the tree into its component parts at the point level. Subsequently, object analysis can be performed; for example, volumetric analysis of a tree after removing points associated with leaves. The workflow is as follows: isolate individual trees within the scanned scene, train a Gaussian mixture model (GMM), separate clusters within the mixture model according to exemplar points determined by the GMM, grow the structure of the tree, and then perform volumetric analysis on the structure.
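The cluster-separation step in the workflow above can be sketched with a minimal two-component 1-D EM fit (the paper's model is multi-modal and multivariate over several geometric features; the single feature, initialisation and names here are illustrative assumptions):

```python
import numpy as np

def gmm_1d_two_component(x, iters=100):
    """Minimal EM for a two-component 1-D Gaussian mixture.

    Each value is hard-assigned to the component with the higher
    responsibility, e.g. to split 'wood-like' from 'leaf-like'
    points by a geometric feature value.
    """
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])              # crude initialisation
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        d = (x[:, None] - mu) / sigma
        p = pi * np.exp(-0.5 * d**2) / (sigma * np.sqrt(2 * np.pi))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu)**2).sum(axis=0) / nk) + 1e-9
    return r.argmax(axis=1)                        # hard assignment

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.1, 0.03, 100),    # 'planar' feature values
                    rng.normal(0.8, 0.05, 100)])   # 'scattered' feature values
labels = gmm_1d_two_component(x)
```

In practice a library implementation (e.g. scikit-learn's `GaussianMixture`) over the full multivariate feature set would replace this toy version.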
This paper introduces robust algorithms for extracting ground points from laser scanning 3D point cloud data. Global polynomial functions have been used in filtering algorithms for point cloud data; however, they are not always suitable, as they may bias the filtering and cause misclassification errors when many different objects are present. In this paper, robust statistical approaches are coupled with locally weighted 2D regression, which fits without any predefined global function for the variables of interest. The algorithms are performed iteratively on 2D profiles: x-z and y-z. The z (elevation) values are robustly down-weighted based on the residuals of the fitted points. The new set of down-weighted z values, along with the corresponding x (or y) values, is used to obtain a new fit for the lower surface level. The process of fitting and down-weighting continues until the difference between two consecutive fits is insignificant. The final fit is the required ground level, and the ground surface points are those whose z values fall between this ground level and the level obtained by adding a threshold to it. Experimental results are compared with recently proposed segmentation methods using simulated and real mobile laser scanning point clouds from urban areas that include many objects found in road scenes, such as short walls, large buildings, electric poles, sign posts and cars. Results show the proposed robust method efficiently extracts ground surface points with better than 97% accuracy.
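The iterative fit-and-down-weight loop on an x-z profile can be sketched as follows. A weighted global polynomial stands in for the paper's locally weighted regression, and the asymmetric weight function and 0.1 threshold are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def robust_ground_fit(x, z, deg=2, iters=20, tol=1e-6):
    """Iteratively down-weight points above the fit so successive
    weighted fits settle on the lower surface (ground) of an x-z
    profile. Simplified sketch of the fit/down-weight idea.
    """
    w = np.ones_like(z, dtype=float)
    prev = None
    for _ in range(iters):
        coef = np.polyfit(x, z, deg, w=w)
        fit = np.polyval(coef, x)
        r = z - fit
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-9
        # asymmetric weights: penalise points far above the current fit
        w = np.where(r > 0, 1.0 / (1.0 + (r / s)**2), 1.0)
        if prev is not None and np.max(np.abs(fit - prev)) < tol:
            break
        prev = fit
    return fit

x = np.linspace(0, 10, 200)
ground = 0.05 * x**2                       # true ground profile
z = ground + np.random.default_rng(3).normal(0, 0.02, x.size)
z[50:70] += 3.0                            # a wall standing on the ground
fit = robust_ground_fit(x, z)
# classify ground points with a threshold above the fitted level
is_ground = z < fit + 0.1
```

Points on the wall keep large positive residuals and are driven toward zero weight, so the converged fit tracks the ground rather than the average of ground and objects.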