Optical methods for 3D modelling of objects fall into two categories: image-based and range-based. Structure from Motion (SfM) is an image-based method implemented in several commercial software packages. In this paper, a low-cost and portable system for 3D modelling of texture-less objects is proposed. The system includes a rotating table built from a stepper motor and a very light rotation plate, together with eight laser light sources whose dense, strong beams project a suitable pattern onto texture-less objects. Images are taken semi-automatically by a camera, synchronized with the steps of the stepper motor, and can then be used in Structure from Motion procedures such as those implemented in Agisoft software. To evaluate the performance of the system, two dark objects were used. Reference point clouds of these objects were obtained by spraying a light powder on the objects and scanning them with a GOM laser scanner. The objects were then placed on the proposed turntable, and several convergent images were taken of each object while the laser light sources projected the pattern onto them. Afterward, the images were imported into VisualSFM, a fully automatic software package, to generate an accurate and complete point cloud. Finally, the obtained point clouds were compared to the point clouds generated by the GOM laser scanner. The results show that the proposed system can produce a complete 3D model of texture-less objects.
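The abstract does not spell out how the SfM and GOM reference clouds were compared; a common choice is a nearest-neighbour cloud-to-cloud distance. The sketch below illustrates that idea on toy data (the function name and the synthetic grids are illustrative assumptions, not the paper's actual evaluation code).

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distance(reference, test):
    """Mean nearest-neighbour distance from each test point to the
    reference cloud; both inputs are (N, 3) arrays of xyz points."""
    tree = cKDTree(reference)
    distances, _ = tree.query(test)
    return distances.mean()

# Toy example: a unit-square grid vs. the same grid shifted by 0.01 in x.
ref = np.array([[x, y, 0.0]
                for x in np.linspace(0, 1, 10)
                for y in np.linspace(0, 1, 10)])
shifted = ref + np.array([0.01, 0.0, 0.0])
print(round(cloud_to_cloud_distance(ref, shifted), 3))  # → 0.01
```

Because the shift (0.01) is much smaller than the grid spacing (≈0.11), every test point's nearest reference neighbour is its unshifted original, so the mean residual equals the shift.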
3D point clouds are widely used in many fields, and various methods have been proposed to generate them: LIDAR and image matching from static and mobile platforms, including, e.g., Terrestrial Laser Scanning (TLS). With multiple point clouds from stationary platforms, point cloud registration is a crucial and fundamental issue. A standard approach is point-based registration, which relies on pairs of corresponding points in two point clouds; a necessary step in point-based registration is therefore the construction of 3D local descriptors. One of the (many) challenges that specifically affects the performance of local descriptors encoding local spatial information is the point displacement error. This error is caused by differences in the distributions of points surrounding a (potentially) corresponding center point in the two point clouds. It can occur for various reasons, such as i) distortions caused by the sensors recording the data, ii) moving objects, iii) varying point cloud density, iv) changes of viewing angle, and v) the use of different sensors. The purpose of this article is to develop a new 3D local descriptor that reduces the effect of this type of error in coarse point cloud registration. The approach includes an improved Local Reference Frame (LRF) and a new geometric arrangement in point cloud space for the 3D local descriptor. Inspired by the 2D DAISY descriptor, a geometric arrangement is created to reduce the effect of the point displacement error; in addition, directional histograms are used as features. Investigations are performed on publicly available point clouds from challenging environments. The results of this study show the high performance of the proposed approach for point cloud registration, especially in the more challenging and noisy environments.
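The abstract's improved LRF is not specified in detail; a widely used baseline construction (which the paper presumably refines) takes the eigenvectors of the neighbourhood covariance matrix and disambiguates their signs toward the majority of neighbours. A minimal sketch of that baseline, with illustrative names:

```python
import numpy as np

def local_reference_frame(neighborhood):
    """Baseline LRF (not the paper's improved variant): eigenvectors of
    the covariance of the points around a keypoint, sign-disambiguated
    so each axis points toward most of the neighbours."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    axes = eigvecs[:, ::-1].T               # rows: largest-variance axis first
    for i in range(3):
        # Flip the axis if fewer than half the neighbours lie on its positive side.
        if np.sum(centered @ axes[i] >= 0) < len(neighborhood) / 2:
            axes[i] = -axes[i]
    return axes

rng = np.random.default_rng(0)
patch = rng.normal(size=(200, 3)) * np.array([5.0, 2.0, 0.2])  # flat, elongated patch
lrf = local_reference_frame(patch)
print(np.allclose(lrf @ lrf.T, np.eye(3), atol=1e-8))  # → True (orthonormal frame)
```

Descriptors built in such a frame are invariant to rigid motion of the cloud, which is exactly what coarse registration requires; the point displacement error enters because the neighbourhood, and hence the frame, is estimated from slightly different point samples in each cloud.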
Recent advances in 3D laser scanner technology have provided a large amount of accurate geo-information in the form of point clouds. Machine vision and photogrammetric methods are used in various applications such as medicine, environmental studies, and cultural heritage. Aerial laser scanners (ALS), terrestrial laser scanners (TLS), mobile mapping laser scanners (MLS), and photogrammetric cameras via image matching are the most important tools for producing point clouds. In most applications, point cloud registration is a fundamental issue. Due to the high volume of initial point cloud data, 3D keypoint detection has been introduced as an important step in the registration of point clouds: the initial point cloud is reduced to a set of candidate points with high information content. Many methods for 3D keypoint detection have been proposed in machine vision; most of them are based on thresholding the saliency of points, while less attention has been paid to the spatial distribution and number of extracted points. This poses a challenge when registering point clouds with a homogeneous structure: because keypoints are selected in areas of structural complexity, the result is an unbalanced distribution of keypoints and lower registration quality. This research presents an automated approach to 3D keypoint detection that controls the quality, spatial distribution, and number of keypoints. The proposed method combines 3D local shape features, 3D local self-similarity, and the histogram of normal orientations into a quality criterion that provides a competency index. In addition, an Octree structure is applied to control the spatial distribution of the detected 3D keypoints. The proposed method was evaluated for keypoint-based coarse registration of aerial and terrestrial laser scanner data containing both cluttered and homogeneous regions.
The obtained results demonstrate the strong performance of the proposed method in registering these types of data; compared to standard algorithms, the registration error was reduced by up to 56%.
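The Octree-based control of keypoint distribution is described only at a high level; its core idea, keeping the best-scoring candidate per spatial cell so keypoints spread over the whole cloud, can be sketched with a simple voxel grid (a stand-in for a full Octree; the function name, data, and cell size are illustrative assumptions).

```python
import numpy as np

def distribute_keypoints(points, scores, cell_size):
    """Keep only the highest-scoring candidate in each spatial cell,
    forcing keypoints to spread over the cloud instead of clustering
    in structurally complex areas. Returns kept point indices."""
    best = {}  # cell index -> (score, point index)
    for i, p in enumerate(points):
        cell = tuple((p // cell_size).astype(int))
        if cell not in best or scores[i] > best[cell][0]:
            best[cell] = (scores[i], i)
    return sorted(i for _, i in best.values())

pts = np.array([[0.1, 0.1, 0.0],   # cell (0, 0, 0)
                [0.2, 0.2, 0.0],   # same cell, lower score -> discarded
                [1.5, 0.1, 0.0]])  # cell (1, 0, 0)
scores = np.array([0.9, 0.4, 0.7])
print(distribute_keypoints(pts, scores, cell_size=1.0))  # → [0, 2]
```

An actual Octree refines this by subdividing cells adaptively with depth, so dense regions can contribute more keypoints without starving homogeneous ones.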