2022
DOI: 10.1007/s10846-021-01564-2
MGBM-YOLO: a Faster Light-Weight Object Detection Model for Robotic Grasping of Bolster Spring Based on Image-Based Visual Servoing

Cited by 10 publications (9 citation statements)
References 22 publications
“…It is necessary to be consistent with this feature when building a new backbone network of YOLOv3. In addition, when the input image dimension is 416 × 416 × 3, the original YOLOv3 backbone outputs three preliminary effective feature layers with dimensions 52 × 52 × 256, 26 × 26 × 512, and 13 × 13 × 1024, respectively. For the GhostNet backbone, the three effective feature layers it outputs have dimensions 52 × 52 × 40, 26 × 26 × 112, and 13 × 13 × 160.…”
Section: Network Architecture Of Ghostnet-yolov3mentioning
confidence: 99%
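The feature-layer dimensions quoted above follow directly from YOLOv3's downsampling strides of 8, 16, and 32, which turn a 416 × 416 input into 52, 26, and 13 grids; the channel counts per backbone are taken from the snippet. A minimal sketch (function and dictionary names are illustrative, not from the cited paper):

```python
# Channel counts of the three effective feature layers, as quoted in the text.
BACKBONE_CHANNELS = {
    "darknet53": (256, 512, 1024),  # original YOLOv3 backbone
    "ghostnet": (40, 112, 160),     # GhostNet backbone
}

def feature_layer_shapes(input_size: int, backbone: str) -> list[tuple[int, int, int]]:
    """Return (height, width, channels) of the three effective feature layers.

    YOLOv3 downsamples its input by strides 8, 16, and 32, so each feature
    layer's spatial size is the input size divided by the stride.
    """
    strides = (8, 16, 32)
    channels = BACKBONE_CHANNELS[backbone]
    return [(input_size // s, input_size // s, c) for s, c in zip(strides, channels)]
```

For a 416 × 416 input this reproduces the dimensions in the snippet, e.g. `feature_layer_shapes(416, "ghostnet")` gives `[(52, 52, 40), (26, 26, 112), (13, 13, 160)]`.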
“…In addition, when the input image dimension is 416 × 416 × 3, the original YOLOv3 backbone outputs three preliminary effective feature layers with dimensions 52 × 52 × 256, 26 × 26 × 512, and 13 × 13 × 1024, respectively. For the GhostNet backbone, the three effective feature layers it outputs have dimensions 52 × 52 × 40, 26 × 26 × 112, and 13 × 13 × 160. Therefore, when building the new backbone network of YOLOv3, we should also correctly adjust the number of input channels when connecting to the next convolutional layer, to ensure the correctness of the network.…”
Section: Network Architecture Of Ghostnet-yolov3mentioning
confidence: 99%
“…Recently, object detection neural networks based on the YOLO (You Only Look Once) algorithm have been applied to mobile robot visual navigation [25]-[28]. Reference [29] introduces MGBM-YOLO, a visual servoing controller built on an object detection neural network, in which the authors propose two YOLOv3 models applied to a robotic grasping system for bolster springs based on image-based visual servoing. The MGBM-YOLO visual servoing controller includes a depth estimator that depends on the actual and desired areas of the object's bounding box; therefore, the system must know the object's dimensions beforehand.…”
Section: Introductionmentioning
confidence: 99%
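The snippet above only states that the depth estimate depends on the actual and desired bounding-box areas. One common way such an estimator works, assuming a pinhole camera and an object of fixed physical size, is that projected area scales as 1/Z², so depth can be recovered from an area ratio. A hypothetical sketch under that assumption (not the cited paper's exact formulation):

```python
import math

def estimate_depth(area_actual: float, area_desired: float, depth_desired: float) -> float:
    """Hypothetical area-based depth estimate for image-based visual servoing.

    Under a pinhole-camera model, the projected area of an object of fixed
    physical size scales as 1/Z^2, so the current depth follows from the
    desired depth and the ratio of bounding-box areas:
        Z = Z_desired * sqrt(area_desired / area_actual)
    """
    if area_actual <= 0:
        raise ValueError("bounding-box area must be positive")
    return depth_desired * math.sqrt(area_desired / area_actual)
```

For example, if the bounding box currently covers a quarter of its desired area, the object is twice as far away as the desired depth. This dependence on a known reference area is why the system must know the object's dimensions beforehand.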
“…To address the issue of accurate positioning and classification, fusion methods are required. Liu et al. [15] proposed an improved YOLO algorithm for grasping bolster springs in an industrial servo system. By replacing the backbone with the lightweight MobileNetv3 and GhostNet structures, the feature-extraction efficiency of the backbone is improved, and the single-stage visual servo system is further optimized.…”
Section: Introductionmentioning
confidence: 99%