Object detection based on deep learning is a popular research trend that encompasses both object recognition and positioning. This paper proposes a method that accurately obtains the object type and its three-dimensional position. The method consists of three parts: object recognition and coarse positioning based on deep learning, precise positioning in color images based on deep learning combined with a B-spline level set, and precise three-dimensional positioning using the depth information of an RGB-D camera. The precise positioning of the object provides accurate end-effector pose information for autonomous grasping, which is of great significance for robotic-arm gripping. Performance metrics include mAP (mean average precision) and IoU (intersection over union). Experimental results show that the mAP of Yolo-v3 in this paper reaches 87.62%, the average IoU of Yolo-v3 reaches 66.74%, the average IoU of Yolo-v3 combined with the B-spline level set reaches 100%, and the method obtains accurate 3D locations in real scenes. In addition, comparative experiments between the VOC dataset and our own dataset validate that our dataset yields higher mAP and average IoU values.
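The IoU metric reported above measures bounding-box overlap as the ratio of intersection area to union area. A minimal illustrative sketch (not code from the paper; box format `(x1, y1, x2, y2)` is an assumption):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Intersection area is zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

For example, two identical boxes give an IoU of 1.0 (the 100% figure reported for the combined method), while partially overlapping boxes give a value between 0 and 1.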