2023
DOI: 10.3390/agronomy13041042
RDE-YOLOv7: An Improved Model Based on YOLOv7 for Better Performance in Detecting Dragon Fruits

Abstract: There is a great demand for dragon fruit in China and Southeast Asia. Manual picking of dragon fruit requires a lot of labor. It is imperative to study the dragon fruit-picking robot. The visual guidance system is an important part of a picking robot. To realize the automatic picking of dragon fruit, this paper proposes a detection method of dragon fruit based on RDE-YOLOv7 to identify and locate dragon fruit more accurately. RepGhost and decoupled head are introduced into YOLOv7 to better extract features and…

Cited by 18 publications (8 citation statements); References 22 publications
“…Zhu Lixue et al. [26] proposed a dragon fruit image segmentation and attitude assessment method based on an improved U-Net to achieve three-dimensional attitude assessment of dragon fruits, providing technical support for the automated, accurate picking performed by intelligent dragon-fruit-picking robots; however, its visual recognition performance degrades in picking environments with small fruit sizes, high fruit density, and complex backgrounds. Jialiang Zhou et al. [27] proposed a dragon fruit detection method based on RDE-YOLOv7 to identify and locate dragon fruits more accurately, offering theoretical support for the development of dragon-fruit-picking robots; it achieves very high recognition accuracy for single fruits, but its visual recognition performance likewise decreases in picking environments with high fruit density and complex backgrounds. Bin Zhang et al. [28] proposed a lightweight network, an enhanced version of YOLOv5s, to achieve consistent detection of dragon fruits across diverse orchard conditions and weather patterns; the improved model shows good robustness, offering a good solution, a theoretical foundation, and technical support for dragon-fruit-picking robotics.…”
Section: Introduction
Mentioning confidence: 90%
“…YOLOv7 is a real-time object detection algorithm (Soeb et al., 2023) that evolved from YOLOv5 and offers faster inference speed, improved detection accuracy, and reduced computational complexity. The algorithm consists of three main parts: the input layer, backbone layer, and output layer (Tang et al., 2023), and can be trained with a loss function either with or without an auxiliary training head (Zhou et al., 2023).…”
Section: Methods
Mentioning confidence: 99%
“…To evaluate the performance of the model, precision (P), recall (R), and mean average precision (mAP) are used [56]. These metrics are calculated as follows:…”
Section: Evaluation Metrics
Mentioning confidence: 99%
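The formulas referenced in the snippet above are cut off by the excerpt. For completeness, the standard detection-evaluation definitions (which such papers typically use) can be sketched as follows, where TP, FP, and FN denote true positives, false positives, and false negatives, and N is the number of classes:

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}
```

```latex
AP = \int_{0}^{1} P(R)\, \mathrm{d}R, \qquad
mAP = \frac{1}{N} \sum_{i=1}^{N} AP_i
```

These are the conventional definitions, not a quotation from the cited paper: AP is the area under the precision-recall curve for one class, and mAP averages AP over all classes.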