2022
DOI: 10.1016/j.engappai.2022.105164

Pose estimation and robotic insertion tasks based on YOLO and layout features

Cited by 9 publications (3 citation statements)
References 25 publications
“…The relative pose of the master object in the robot's base frame can be obtained from the extrinsic and intrinsic camera parameters via hand-eye calibration and YOLO-based detectors fine-tuned to the domain. The YOLO algorithm is widely used to detect objects in image or video streams (Mou et al., 2022). For each object in the image I_eth, the algorithm makes multiple bounding-box predictions containing the object's position (x, y), size (w, h), confidence c_con, and category c_cate, as shown in Equation (2)…”
Section: Methods
Mentioning confidence: 99%
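The passage above summarizes each YOLO prediction as a tuple of position (x, y), size (w, h), confidence c_con, and category c_cate, and recovers the object's pose in the robot base frame from the camera intrinsics and the hand-eye extrinsic calibration. Below is a minimal Python sketch of that coordinate-frame bookkeeping; the detection values, the intrinsic matrix K, the depth reading Z, and the transform T_base_cam are illustrative placeholders, not values or code from the cited paper.

import numpy as np

# Placeholder YOLO-style detection (x, y, w, h, c_con, c_cate);
# assumed example values, not taken from the cited paper.
x, y, w, h, c_con, c_cate = 412.0, 305.0, 64.0, 64.0, 0.93, 2

# Assumed pinhole intrinsics K and a depth reading Z (metres) for the box centre.
K = np.array([[915.0,   0.0, 640.0],
              [  0.0, 915.0, 360.0],
              [  0.0,   0.0,   1.0]])
Z = 0.42

# Back-project the bounding-box centre (x, y) into the camera frame.
p_cam = Z * np.linalg.inv(K) @ np.array([x, y, 1.0])

# Assumed hand-eye calibration result: 4x4 camera-to-base transform.
T_base_cam = np.eye(4)
T_base_cam[:3, 3] = [0.30, -0.10, 0.65]  # placeholder translation only

p_base = T_base_cam @ np.append(p_cam, 1.0)
print(f"class {c_cate}, conf {c_con:.2f}, base-frame position: {p_base[:3]}")

In the cited work the depth and full orientation would come from the calibration and layout features described in the paper; the sketch only illustrates how a pixel-space detection is mapped into the robot's base frame.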
“…Similarly, a vision-based grasping system was proposed in 30, where a neural network classifies grasp quality from the point cloud of the grasp, using 3D scans and RGB-D images of known objects. Work in 31 proposed a novel grasping framework for sorting bottles in a complex environment, exploiting the Region Proposal Network (RPN) for object recognition and pose estimation 32. Moreover, an intelligent grasping approach was proposed in 33 for picking objects from the floor using the method of highly accumulated features (HAF)…”
Section: Standard Vision-based Robotic Grasping System
Mentioning confidence: 99%
“…With the rapid development of artificial intelligence technology, scholars have begun to apply deep learning to part target recognition. Recognition algorithms based on machine vision and convolutional neural networks have emerged in large numbers, including R-CNN, VGG, YOLO, Fast R-CNN, and Faster R-CNN [1,2,3,4,5]. Because of their high precision, high speed, contactless operation, and lossless nature, they have gradually replaced traditional feature-matching classification methods…”
Section: Introduction
Mentioning confidence: 99%