Abstract: This paper addresses multi-view three-dimensional (3D) point cloud registration. A novel global registration method is proposed that accurately registers two series of scans into a single object model for 3D imaging digitization by using the proposed oriented bounding box (OBB) regional area-based descriptor. Robotic 3D scanning is now widely used to acquire complete point clouds of physical objects through multi-view scanning and data registration: the automated operation successively digitizes view-dependent area scans of complex-shaped objects while simultaneously determining the next best robot pose and registering the multi-view point clouds. To achieve this, the OBB regional area-based descriptor is used to determine an initial transformation matrix, which is then refined with the iterative closest point (ICP) algorithm. This resolves the commonly encountered difficulty of accurately merging two neighboring area scans when no common coordinate reference exists. The registration accuracy of the developed method has been verified through experimental tests, and the results preliminarily demonstrate its feasibility and applicability.
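The coarse-then-fine pipeline the abstract describes (an OBB-derived initial transform refined by ICP) can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the OBB axes are approximated by PCA of the point covariance, axis signs are disambiguated with third moments (an assumption for illustration), and the ICP step uses brute-force nearest neighbours with a Kabsch/SVD update.

```python
import numpy as np

def pca_obb(points):
    """Centroid and principal axes of an Nx3 cloud (an OBB-frame proxy)."""
    c = points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov((points - c).T))  # columns: axes, ascending variance
    proj = (points - c) @ vecs
    for i in range(3):                    # fix sign ambiguity via third moments
        if (proj[:, i] ** 3).sum() < 0:
            vecs[:, i] *= -1
    return c, vecs

def coarse_align(src, dst):
    """Initial rigid transform (R, t) mapping src's OBB frame onto dst's."""
    cs, As = pca_obb(src)
    cd, Ad = pca_obb(dst)
    R = Ad @ As.T
    return R, cd - R @ cs

def icp_refine(src, dst, R, t, iters=20):
    """Point-to-point ICP refinement of an initial (R, t)."""
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force closest points (fine for small clouds)
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        cm, cn = moved.mean(axis=0), matched.mean(axis=0)
        # Kabsch/SVD solve for the best incremental rigid update
        U, _, Vt = np.linalg.svd((moved - cm).T @ (matched - cn))
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ S @ U.T
        R, t = dR @ R, dR @ (t - cm) + cn
    return R, t
```

In practice the coarse OBB alignment supplies the "no coordinate reference" initialization that plain ICP lacks; ICP then only has to correct a small residual.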
This paper presents a novel approach to recognizing 3D objects and estimating their poses in cluttered range images. The approach enables robust object recognition and localization under adverse conditions such as varying environmental illumination and partial optical occlusion of the object. First, the acquired point clouds are segmented into individual object point clouds using the developed 3D segmentation method for randomly stacked objects. Second, an efficient shape-matching algorithm, Sub-OBB-based object recognition built on the proposed oriented bounding box (OBB) regional area-based descriptor, reliably recognizes each object. The 3D position and orientation of the object are then roughly estimated by aligning the OBB of the segmented point cloud with the OBB of the matched point cloud in a database generated from the CAD model and a 3D virtual camera. To obtain an accurate pose, the iterative closest point (ICP) algorithm matches the object model to the segmented point cloud. Feasibility tests over several scenarios verify that the developed approach is suitable for object recognition and pose localization.
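The database-matching step can be illustrated with a much simpler, hypothetical descriptor: scale-normalized OBB side-length ratios. This is not the paper's regional area-based Sub-OBB descriptor, only a rotation-invariant sketch of the same idea, matching a segmented cloud against model clouds by nearest descriptor.

```python
import numpy as np

def obb_extent_descriptor(points):
    """Scale-normalized OBB side lengths: a crude, hypothetical stand-in
    for an OBB regional area-based descriptor."""
    c = points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov((points - c).T))  # PCA axes as OBB axes
    proj = (points - c) @ vecs
    ext = proj.max(axis=0) - proj.min(axis=0)         # OBB side lengths
    ext = np.sort(ext)[::-1]
    return ext / ext[0]                               # scale-invariant ratios

def match_object(query, database):
    """Return the database key whose descriptor is closest to the query's."""
    qd = obb_extent_descriptor(query)
    return min(database,
               key=lambda k: np.linalg.norm(obb_extent_descriptor(database[k]) - qd))
```

Because the descriptor is computed in the cloud's own OBB frame, it is invariant to the object's pose in the scene, which is what makes the subsequent OBB-to-OBB alignment a reasonable rough pose estimate.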
Conventional approaches to autonomous grasping rely on a precomputed database of known objects to synthesize grasps, which is not possible for novel objects. Recently proposed deep learning-based approaches, by contrast, have demonstrated the ability to generalize grasps to unknown objects. However, grasp generation remains a challenging problem, especially in cluttered environments under partial occlusion. In this work, we propose an end-to-end deep learning approach for generating 6-DOF collision-free grasps from a 3D scene point cloud. To build robustness to occlusion, the proposed model generates candidates by casting votes and accumulating evidence for feasible grasp configurations. We exploit contextual information by encoding the dependency of objects in the scene into features that boost grasp-generation performance; this contextual information increases the likelihood that the generated grasps are collision-free. Experimental results confirm that the proposed system compares favorably with current state-of-the-art methods at predicting object grasps in cluttered environments.
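The vote-casting idea behind the candidate generation can be shown with a toy Hough-style accumulator. In the paper the per-point offsets are predicted by a network; here they are simply given as input (an assumption for illustration), and votes are binned into voxels so that evidence from many partial observations concentrates on the same candidate even when parts of the object are occluded.

```python
import numpy as np

def accumulate_votes(points, offsets, voxel=0.05):
    """Hough-style evidence accumulation: each point casts a vote at
    point + offset; votes are binned into voxels and the densest voxel's
    mean vote is returned as the candidate, with its vote count."""
    votes = points + offsets                      # one candidate per point
    keys = np.floor(votes / voxel).astype(int)    # voxel index of each vote
    uniq, counts = np.unique(keys, axis=0, return_counts=True)
    best = uniq[counts.argmax()]                  # densest voxel
    mask = (keys == best).all(axis=1)
    return votes[mask].mean(axis=0), int(counts.max())
```

Because each point votes independently, occluding half the points thins the winning voxel's count but rarely moves the peak, which is the robustness property the abstract attributes to voting.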