ABSTRACT: Geological planar facets (stratification, faults, joints…) are key features for unravelling the tectonic history of a rock outcrop or assessing the stability of a hazardous rock cliff. Measuring their spatial attitude (dip and strike) is generally performed by hand with a compass/clinometer, which is time consuming, requires some degree of censoring (i.e. declining to measure features judged unimportant at the time), is not always possible for fractures high up on the outcrop, and is somewhat hazardous. 3D virtual geological outcrops hold the potential to alleviate these issues, but a convenient software environment for efficiently segmenting massive 3D point clouds into individual planar facets was lacking. FACETS is a dedicated plugin within CloudCompare v2.6.2 (http://cloudcompare.org/) implemented to extract planar facets, calculate their dip and dip direction (i.e. the azimuth of steepest descent), and report the extracted data in interactive stereograms. Two algorithms perform the segmentation: Kd-Tree and Fast Marching. Both divide the point cloud into sub-cells, then compute elementary planar objects and progressively aggregate them into polygons according to a planarity threshold. The polygon boundaries are adjusted around the segmented points with a tension parameter, and the facet polygons can be exported as 3D polygon shapefiles to third-party GIS software, or simply as ASCII comma-separated files. One of the great features of FACETS is the ability to explore not only planar objects but also 3D points with normals using the stereogram tool. Poles can be readily displayed, queried and manually segmented interactively. The plugin blends seamlessly into CloudCompare, leveraging all of its other 3D point cloud manipulation features. A demonstration of the tool illustrates these different features.
While designed for geological applications, FACETS could be applied more widely to any planar objects. For further details: http://www.cloudcompare.org/doc/wiki/index.php?title=Facets_%28plugin%29
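The dip and dip direction reported by such a tool follow directly from the plane normal of each fitted facet. The sketch below shows the standard geometric conversion, assuming an east/north/up coordinate frame; the helper name is hypothetical and this is not the plugin's actual code.

```python
import math

def dip_and_dip_direction(normal):
    """Convert a plane normal to dip (degrees from horizontal) and
    dip direction (azimuth of steepest descent, degrees clockwise
    from north). Assumes x = east, y = north, z = up.
    Hypothetical helper, not FACETS' implementation."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / length, ny / length, nz / length
    if nz < 0.0:
        # Use the upward-pointing normal so dip is in [0, 90].
        nx, ny, nz = -nx, -ny, -nz
    dip = math.degrees(math.acos(nz))
    # The horizontal projection of the upward normal points
    # toward the dip direction (azimuth of steepest descent).
    dip_direction = math.degrees(math.atan2(nx, ny)) % 360.0
    return dip, dip_direction
```

For example, a plane dipping 45° toward north has an upward normal proportional to (0, 1, 1), giving a dip of 45° and a dip direction of 0°.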
<p>Three-dimensional data have become increasingly present in earth observation over the last decades, and more so recently with the development of accessible 3D sensing technologies. However, many 3D surveys remain underexploited due to the lack of accessible and explainable automatic classification methods. In this work, we introduce 3DMASC, a new workflow for explainable machine-learning classification of 3D data using Multiple Attributes, Scales, and Clouds. It handles multiple point clouds at once, with or without spectral and multiple-return attributes. Through 3DMASC, we use classical multi-scale 3D data descriptors as well as new ones based on the spatial variations of geometrical, spectral and height-based features of the local point cloud. We also introduce dual-cloud features, encoding local spectral and geometrical ratios and differences, which improve the interpretation of multi-cloud surveys. 3DMASC thus offers new possibilities for point cloud classification, namely for the interpretation of bi-spectral lidar data. Here, we experiment on topo-bathymetric lidar data, which are acquired with two lasers at infrared and green wavelengths and feature two irregular point clouds characterized by different samplings of vegetated and flooded areas, which 3DMASC can harvest. By exploring the contributions of 88 features and 30 scales – including two types of neighborhoods – we identify a core set of features and scales particularly relevant for describing coastal and riverine scenes, and give indications on how to build an optimal predictor vector to train 3D data classifiers. 
Our findings highlight the predominance of lidar return-based attributes over classical features based on dimensionality or eigenvalues, and the significant contribution of spectral information to the detection of more than a dozen land and sea covers – artificial/vegetated/rocky/bare ground, rocky/sandy seabed, intermediate/high vegetation, buildings, vehicles, and power lines. The experimental results show that 3DMASC competes with state-of-the-art methods in classification performance while demanding lower complexity, and thus remains accessible to non-specialist users. Relying on a random forest algorithm, it generalizes and applies quickly to large datasets, and offers the possibility of filtering out misclassified points according to their prediction confidence. Classification accuracies between 91% for complex scene classification and 98% for lower-level processing are observed, with average prediction confidences above 90% and models relying on fewer than 2000 samples per class and at most 30 descriptors – counting both features and scales. Although dual-cloud features systematically outperform their single-cloud equivalents, 3DMASC also performs well on single-cloud lidar data or structure-from-motion point clouds. Our contributions are made available through a self-contained plugin in CloudCompare that allows non-specialist users to create and apply a classifier, and through an open-source labelled dataset of topo-bathymetric data.</p>
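The "classical features based on dimensionality or eigenvalues" mentioned above are typically derived from the covariance of a point's neighborhood at a given scale. A minimal sketch of the commonly used definitions (linearity, planarity, sphericity) follows; the function name is hypothetical and this is not necessarily 3DMASC's exact formulation.

```python
import numpy as np

def dimensionality_features(points):
    """Eigenvalue-based descriptors for one neighborhood at one scale.
    `points` is an (N, 3) array of a point's neighbors. Uses the
    common covariance-eigenvalue formulation; a sketch only, not
    3DMASC's implementation."""
    cov = np.cov(np.asarray(points, dtype=float).T)
    # Eigenvalues of the 3x3 covariance, sorted descending:
    # l1 >= l2 >= l3 >= 0.
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return {
        "linearity": (l1 - l2) / l1,   # ~1 for line-like neighborhoods
        "planarity": (l2 - l3) / l1,   # ~1 for plane-like neighborhoods
        "sphericity": l3 / l1,         # ~1 for volumetric neighborhoods
    }
```

Computing such descriptors at several neighborhood radii, together with spectral and height-based attributes, yields the multi-scale predictor vector fed to the random forest classifier.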