2017
DOI: 10.1515/fcds-2017-0011

3D Object Detection and Recognition for Robotic Grasping Based on RGB-D Images and Global Features

Abstract: This paper describes the results of experiments on detection and recognition of 3D objects in RGB-D images provided by the Microsoft Kinect sensor. While the studies focus on single image use, sequences of frames are also considered and evaluated. Observed objects are categorized based on both geometrical and visual cues, but the emphasis is laid on the performance of the point cloud matching method. To this end, a rarely used approach consisting of independent VFH and CRH descriptors matching, follo…

Cited by 16 publications (4 citation statements)
References 35 publications
“…The third group of experimental images contained two occluded target crops. Provided that the object lies within the camera's effective measuring range [25,26], the background of each picture should be kept clear and uncluttered so that the experimental data are easy to read. It is therefore reasonable to crop pictures with cluttered backgrounds in which the 3D coordinate values are hard to see, retaining only the parts of the figure where the coordinates are clearly legible.…”
Section: Experimental Process and Analysis of Results
confidence: 99%
“…The projection matrix (Proj) is the product of the intrinsic matrix and the transformation matrix, and has dimension 3 × 4. The formulation is given in (5).…”
Section: Projection Matrix
confidence: 99%
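The relation quoted above can be sketched in NumPy. The intrinsic values below (focal length 525 px, principal point at 319.5/239.5, a typical Kinect-style calibration) are illustrative assumptions, not values taken from the cited paper:

```python
import numpy as np

# Assumed intrinsic matrix K (3x3): focal lengths and principal point.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

# Extrinsic transformation [R | t] (3x4): world -> camera coordinates.
R = np.eye(3)
t = np.array([[0.1], [0.0], [0.5]])
Rt = np.hstack([R, t])

# Projection matrix Proj = K @ [R | t], which is 3x4 as stated.
Proj = K @ Rt

# Project a homogeneous world point (x, y, z, 1) to pixel coordinates.
X_world = np.array([0.0, 0.0, 1.0, 1.0])
x = Proj @ X_world
u, v = x[0] / x[2], x[1] / x[2]  # perspective division
```

The final division by the third homogeneous coordinate is what turns the 3×4 linear map into actual pixel coordinates.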
“…However, it was shown that the average error is more than 4 mm. Czajewski et al. also deployed the Microsoft Kinect sensor to perform the point cloud matching method [5]. In this approach, Viewpoint Feature Histogram (VFH) and Camera Roll Histogram (CRH) descriptor matching was followed by the Iterative Closest Point (ICP) and Hypotheses Verification (HV) algorithms.…”
Section: Introduction
confidence: 99%
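The pipeline cited above refines coarse descriptor-matching hypotheses with ICP. As a rough illustration of the ICP refinement step only (not the paper's implementation, which combines VFH/CRH matching and hypotheses verification), a minimal point-to-point ICP using the SVD-based Kabsch solve might look like:

```python
import numpy as np

def icp(source, target, iterations=50):
    """Minimal point-to-point ICP sketch: repeatedly match nearest
    neighbours, then solve for the best rigid transform via SVD (Kabsch)."""
    src = source.copy()
    for _ in range(iterations):
        # Nearest-neighbour correspondences (brute force, for clarity).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]
        # Best rigid transform aligning src to matched (Kabsch algorithm).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        Rot = Vt.T @ U.T
        if np.linalg.det(Rot) < 0:  # guard against reflections
            Vt[-1] *= -1
            Rot = Vt.T @ U.T
        trans = mu_t - Rot @ mu_s
        src = src @ Rot.T + trans  # apply the incremental transform
    return src
```

Production systems (e.g. the PCL implementation the paper builds on) add k-d tree correspondence search, outlier rejection, and convergence criteria; this sketch keeps only the core alternation of matching and rigid alignment.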
“…Some use algorithms for control [13], while others use visual images and machine learning to enable robotic arms to perform tasks [14,15]. Other studies convert images into point clouds and use depth distances for better task execution [16,17,18,19,20,21,22,23,24]. For example, the study in [25] combines object recognition by Mobile-DasNet with point cloud analysis to generate the coordinates of the arm endpoint for an apple-picking task.…”
Section: Introduction
confidence: 99%