Abstract: Point cloud local feature extraction plays an important part in point cloud deep learning networks. Accurate extraction of point cloud features remains a challenge for deep learning networks. Oversampling and the loss of point cloud model features are important problems affecting the accuracy of point cloud deep learning networks. In this paper, we propose an adaptive clustering method for point cloud feature extraction, adaptive optimal means clustering (AOMC), and apply it to point cloud deep learning networ…
“…Finally, a minimum spanning tree of the feature points is established to construct the set of feature points. Zhang and Jin [28] proposed an adaptive optimal mean clustering (AOMC) method for point cloud feature extraction. (4) Deep learning algorithms.…”
Currently, point cloud extraction methods based on geometric features require the configuration of two essential parameters: the neighborhood radius within the point cloud and the criterion for feature threshold selection. This article addresses the issue of manual selection of feature thresholds and proposes a feature extraction method for 3D point clouds based on the Otsu algorithm. Firstly, the curvature value of each point is calculated from its r-neighborhood in the point cloud data. Secondly, the Otsu algorithm is adapted by taking the curvature values as input to the maximum between-class variance method. The optimal segmentation threshold obtained from the Otsu algorithm divides the point cloud data into two parts, and points whose curvature is greater than or equal to the threshold are extracted as feature points. To verify the reliability of the proposed algorithm, a method for accuracy assessment on regular point cloud data is also proposed. Additionally, comparative analysis was conducted with multiple methods on data of varying point cloud density and on data contaminated with Gaussian white noise. Experimental results show that the proposed algorithm achieves good extraction results on data with a 90 percent simplification rate and low noise.
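The pipeline described above (per-point curvature from an r-neighborhood, then an Otsu split on the curvature values) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses surface variation (smallest covariance eigenvalue over the eigenvalue sum) as the curvature proxy, and the test shape, radius, and bin count are assumptions.

```python
import numpy as np

def point_curvature(points, r):
    """Surface-variation curvature per point: lambda_min / (l1 + l2 + l3)
    of the covariance of the r-neighborhood (a common curvature proxy;
    the paper's exact curvature formula may differ)."""
    curv = np.zeros(len(points))
    for i, p in enumerate(points):
        nbrs = points[np.linalg.norm(points - p, axis=1) < r]
        if len(nbrs) < 3:
            continue
        evals = np.linalg.eigvalsh(np.cov(nbrs.T))  # ascending order
        curv[i] = evals[0] / evals.sum()
    return curv

def otsu_threshold(values, bins=64):
    """Classic Otsu: choose the split that maximises between-class variance
    over a histogram of `values` (curvatures here, instead of grey levels)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Illustrative test shape: a horizontal plane meeting a vertical wall along
# the line x = 0, z = 0 -- the fold is the "feature" to recover.
xs, ys = np.linspace(-1, 0, 21), np.linspace(0, 1, 21)
plane = np.array([[x, y, 0.0] for x in xs for y in ys])
wall = np.array([[0.0, y, z] for z in np.linspace(0, 1, 21)[1:] for y in ys])
pts = np.vstack([plane, wall])

curv = point_curvature(pts, r=0.12)
t = otsu_threshold(curv)
feature = curv >= t  # points classified as feature points
```

Points near the fold mix samples from both planes, so their smallest covariance eigenvalue is clearly nonzero, while interior points are coplanar and score near zero; Otsu then separates the two populations without a hand-picked threshold.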
“…The ever-expanding volume of data presents an immense challenge in the modern era, calling for effective management of this abundance of information. In the realm of exploratory data analysis [1], [2], clustering emerges as a valuable tool across various domains, encompassing pattern recognition [3], feature extraction [4], vector quantization (VQ) [5], image segmentation [6], function approximation [7], and data mining [8], [9].…”
With the rapid development of large language models such as ChatGPT, text clustering has become an important research topic in data mining. However, traditional clustering algorithms face challenges in text clustering due to the high dimensionality and directionality of text data; in particular, research on web text mining is insufficient, so the accuracy and efficiency of clustering algorithms need to be improved. To address these challenges, this paper proposes a maximum entropy function model and applies it to web text clustering to achieve better clustering results. Unlike traditional clustering algorithms, this algorithm avoids local minima and attains the global minimum. This study strengthens web text mining and provides valuable insights for future research. In summary, this paper proposes a novel text clustering method, MEMC, which uses the maximum entropy function model to overcome the challenges of high-dimensional and directional features. In comparisons with popular algorithms on international standard datasets, the method achieves 15% higher purity than the widely used k-means algorithm and 6% higher purity than the AP algorithm.
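The abstract reports its gains in terms of purity, a standard external clustering metric. The sketch below shows only how purity is computed, not the MEMC algorithm itself; the toy labels are invented for illustration.

```python
from collections import Counter

def purity(labels_true, labels_pred):
    """Purity = (1/N) * sum over predicted clusters of the size of each
    cluster's majority true class. Ranges in (0, 1]; 1.0 means every
    predicted cluster contains a single true class."""
    clusters = {}
    for t, p in zip(labels_true, labels_pred):
        clusters.setdefault(p, []).append(t)
    hits = sum(Counter(members).most_common(1)[0][1]
               for members in clusters.values())
    return hits / len(labels_true)

# Toy example: 6 documents, 3 true topics, 2 predicted clusters.
score = purity([0, 0, 1, 1, 2, 2], [0, 0, 0, 1, 1, 1])
```

Here each predicted cluster's majority class covers 2 of its 3 members, giving purity 4/6. Note that purity alone rewards over-splitting (many tiny clusters score high), which is why it is usually reported alongside other metrics.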
“…However, in the traditional fitting method, due to the huge amount of data, an approximate fitting algorithm is often used, or the designed fitting algorithm can only achieve a good fit on data with one particular curve characteristic [3]. To solve this problem, we can proceed from two aspects. One is to adjust the fitted spline curve, that is, to connect data points with different "soft rulers" [4]. The other is to process the data points and fit a complete curve with as few connection segments as possible while satisfying the fitting accuracy.…”
At present, curve and surface fitting is widely used in three-dimensional measurement, industrial design, archaeology, medicine, and other fields, and it has become both a research hot spot and a difficulty. The surface point cloud data scanned on site by high-precision 3D laser scanners are often complex, and the data are relatively dense for curve fitting. If approximation fitting is used, complex information may not be reflected adequately, while interpolation fitting may produce an over-fitting phenomenon. This paper proposes a feature point selection algorithm, which is better targeted at dense point cloud data than the general cubic B-spline interpolation algorithm. The feature point selection algorithm retains feature points, removes non-feature points, and minimizes the number of fitting segments while meeting the accuracy requirements of the final fitted curve.
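The paper's own feature point selection algorithm is not reproduced here. As a rough stand-in for the same idea (retain feature points, discard non-feature points while bounding the error of the fitted curve), the classic Douglas-Peucker simplification is sketched below on a dense planar curve; the tolerance and test data are illustrative assumptions.

```python
import numpy as np

def simplify(pts, tol):
    """Douglas-Peucker: if the interior point farthest from the start-end
    chord deviates more than tol, keep it and recurse on both halves;
    otherwise keep only the chord endpoints."""
    if len(pts) < 3:
        return [tuple(pts[0]), tuple(pts[-1])]
    start, end = pts[0], pts[-1]
    chord = end - start
    length = np.linalg.norm(chord)
    diff = pts[1:-1] - start
    # Perpendicular distance of each interior point to the chord (2D cross product).
    d = np.abs(chord[0] * diff[:, 1] - chord[1] * diff[:, 0]) / length
    i = int(np.argmax(d))
    if d[i] <= tol:
        return [tuple(start), tuple(end)]
    left = simplify(pts[: i + 2], tol)   # up to and including the split point
    right = simplify(pts[i + 1:], tol)   # from the split point onward
    return left[:-1] + right             # count the split point once

# Dense samples of a sine arc; keep only the high-deviation (feature) points.
x = np.linspace(0, 2 * np.pi, 200)
dense = np.column_stack([x, np.sin(x)])
kept = simplify(dense, tol=0.05)
```

The retained points can then serve as interpolation knots for a cubic B-spline, so the number of fitting segments shrinks with the point count while the deviation of the simplified polyline from the data stays within the tolerance.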