2023 · DOI: 10.3390/plants12173032

Lightweight Algorithm for Apple Detection Based on an Improved YOLOv5 Model

Yu Sun, Dongwei Zhang, Xindong Guo, et al.

Abstract: The detection algorithm of the apple-picking robot contains a complex network structure and a huge parameter volume, which seriously limits the inference speed. To enable automatic apple picking in complex unstructured environments based on embedded platforms, we propose a lightweight YOLOv5-CS model for apple detection based on YOLOv5n. Firstly, we introduced the lightweight C3-light module to replace C3 to enhance the extraction of spatial features and boost the running speed. Then, we incorporated SimAM, a pa…
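The SimAM module referenced in the abstract is a parameter-free attention mechanism. For context, below is a minimal PyTorch-style sketch of a SimAM block following its commonly published formulation; the class name and the default value of e_lambda are illustrative, and this is not necessarily the exact variant integrated into YOLOv5-CS.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free attention: reweights each activation by an energy-based
    importance score, adding no learnable parameters to the backbone."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda  # illustrative default, stabilizes the division
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        b, c, h, w = x.shape
        n = h * w - 1
        # squared deviation of each activation from its channel mean
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        # per-channel variance estimate over the spatial dimensions
        v = d.sum(dim=[2, 3], keepdim=True) / n
        # inverse energy: larger for activations that stand out within their channel
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * self.act(e_inv)

# usage sketch: attn = SimAM(); y = attn(torch.randn(1, 64, 80, 80))
```

In YOLOv5-style backbones, such a block is typically inserted after a convolutional stage; because it adds no learnable parameters, it leaves the model size unchanged.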

Cited by 6 publications (7 citation statements, 2023–2024) · References: 39 publications

Citation statements (ordered by relevance):
“…To ascertain the superior efficiency of the enhanced VEW-YOLOv8n algorithm, derived from YOLOv8n, it was compared with prevailing two-stage and single-stage target detection algorithms. The two-stage category included Faster R-CNN, while the single-stage category included lightweight algorithms like YOLOv3-tiny [34], YOLOv5n [35], YOLOv6n [36], and YOLOv8n, along with high-precision, medium-large algorithms such as YOLOv8m [37] and YOLOv3. Algorithms with larger convolutional kernels, namely the YOLOv8n-InceptionNext and SSD target detection algorithms, were also compared.…”
Section: Discussion
confidence: 99%
“…Conversely, our study aligns with the trajectory of employing DL techniques for fruit detection and extends it by integrating real-time video stream synchronization and tackling the challenges of environmental adaptability. Lastly, the study by Sun et al. [37] presents a lightweight algorithm for apple detection based on an improved YOLOv5 model that achieves an impressive detection speed of 0.013 s/pic. However, our approach complements this by offering a system that focuses not only on a lightweight model architecture that detects fruits relatively rapidly in real-time processing but also achieves a higher detection accuracy (a mAP rate of 86.8% by YOLOv5-v1, compared to 81.7% by Sun et al.).…”
Section: Discussion
confidence: 99%
“…Finally, to evaluate and validate the performance of YOLOv5-v1, it was tested on 200 images from the test set and compared with recent related detection algorithms: (i) Mai et al. [34] employed Faster R-CNN; (ii) Chu et al. [35] adopted Mask R-CNN; (iii) Biffi et al. [36] used ATSS, ResNet50, and FPN; (iv) Sun et al. [37] utilized the modified YOLOv5-CS (Table 8). The evaluation was performed based on the values of mAP and the average recognition speed.…”
Section: Case Study
confidence: 99%
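For context on the metric behind these comparisons, the following is a simplified, single-class sketch of average precision at an IoU threshold of 0.5 (the mAP values reported above average this over classes and, in some setups, over thresholds). It is illustrative only, not the evaluation code used by any of the cited studies; the function names and data layout are ours.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, ground_truths, iou_thr=0.5):
    """Single-class AP@iou_thr.
    detections: list of (image_id, confidence, box)
    ground_truths: dict image_id -> list of boxes
    """
    detections = sorted(detections, key=lambda d: -d[1])   # highest confidence first
    matched = {img: [False] * len(b) for img, b in ground_truths.items()}
    n_gt = sum(len(b) for b in ground_truths.values())
    tp = np.zeros(len(detections))
    fp = np.zeros(len(detections))
    for i, (img, _conf, box) in enumerate(detections):
        gts = ground_truths.get(img, [])
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gts):
            o = iou(box, gt)
            if o > best_iou:
                best_iou, best_j = o, j
        if best_j >= 0 and best_iou >= iou_thr and not matched[img][best_j]:
            tp[i] = 1.0                      # first sufficiently overlapping match
            matched[img][best_j] = True
        else:
            fp[i] = 1.0                      # duplicate or poor match
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / max(n_gt, 1)
    precision = tp_cum / np.maximum(tp_cum + fp_cum, 1e-9)
    # 101-point interpolated AP (COCO-style); the 11-point PASCAL variant is similar
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 101):
        mask = recall >= r
        ap += (precision[mask].max() if mask.any() else 0.0) / 101.0
    return ap
```

The average recognition speed reported alongside mAP is simply the mean per-image inference time over the test set.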
“…Furthermore, because Detectron2 incorporates several widely employed deep learning models for object detection and instance segmentation, it possesses the potential for future compatibility with a broader range of agricultural and industrial production scenarios. These scenarios may include tasks like recognizing plant fructifications and identifying crop pests, extending its applicability beyond the sole measurement of rapeseed pod phenotype omics data [62,63,64,65,66]. By combining machine vision, we also determined the length, width, and two-dimensional image area of the rapeseed pods in the image using a single coin as a reference.…”
Section: Discussion
confidence: 99%