2023
DOI: 10.3389/fpls.2022.1089454

YOLO-P: An efficient method for pear fast detection in complex orchard picking environment

Abstract: Introduction: Fruit detection is one of the key functions of an automatic picking robot, but fruit detection accuracy is seriously decreased when fruits are set against a disordered background and shaded by other objects, as is common in a complex orchard environment. Methods: Here, an effective model based on YOLOv5, namely YOLO-P, was proposed to detect pears quickly and accurately. A shuffle block was used to replace the Conv, Batch Norm, SiLU (CBS) structure of the second and third stages in the YOLOv5 backbone…
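The methods summary mentions replacing YOLOv5's CBS (Conv, Batch Norm, SiLU) stages with shuffle blocks. As a rough illustration, the sketch below is a minimal PyTorch version of a standard ShuffleNetV2-style, stride-1 shuffle unit; the channel count, SiLU activation, and layer layout are assumptions for illustration, not the exact YOLO-P configuration.

```python
# Minimal sketch, assuming a standard ShuffleNetV2-style stride-1 unit of the kind
# the abstract describes as replacing CBS (Conv-BatchNorm-SiLU) blocks.
# Channel count and activation choice are illustrative, not the paper's exact spec.
import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so information mixes between branches."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class ShuffleBlock(nn.Module):
    """Stride-1 shuffle unit: split channels, transform one half, concat, shuffle."""

    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False),                      # 1x1 pointwise
            nn.BatchNorm2d(half),
            nn.SiLU(inplace=True),        # SiLU assumed, to match YOLOv5's CBS convention
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),  # 3x3 depthwise
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False),                      # 1x1 pointwise
            nn.BatchNorm2d(half),
            nn.SiLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)                    # channel split
        out = torch.cat((x1, self.branch(x2)), dim=1)
        return channel_shuffle(out, groups=2)


if __name__ == "__main__":
    feat = torch.randn(1, 128, 80, 80)                # dummy backbone feature map
    print(ShuffleBlock(128)(feat).shape)              # torch.Size([1, 128, 80, 80])
```

The channel split plus depthwise 3×3 convolution is what makes such a unit cheaper than a full CBS convolution over all channels, and the channel shuffle at the end keeps information flowing between the two halves.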

Cited by 20 publications (18 citation statements)
References 43 publications
“…▪ UAV; RGB-D camera; YOLOv5s for detection, and improved DeepLabv3+ (MobileNet v2) for semantic segmentation ▪ ACC = 85.50%–94.52% ▪ Grape detection, instance segmentation ▪ RGB camera; Mask R-CNN with ResNet 101 as the backbone ▪ F1 = 91% (Santos et al., 2020) ▪ Pear (fruit) detection ▪ RGB camera; YOLO-P, F1 = 96.1% (Sun et al., 2023) … be adapted in the next few years to aerial systems. For example, a great many algorithms exist for in-lane orchard navigation for ground autonomous systems (small-sized tractors; Emmi et al., 2021) and it should be possible to adapt them with minimal modifications.…”
Section: Discussion (mentioning)
confidence: 99%
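The excerpt above compares detectors by accuracy and F1 score. For reference, a minimal sketch of how an F1 score is derived from true positives, false positives, and false negatives; the counts below are placeholders, not values from any cited study.

```python
# Minimal sketch of the standard F1 score; tp/fp/fn are placeholder numbers.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


print(f1_score(tp=96, fp=4, fn=4))  # 0.96, i.e. F1 = 96%
```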
“…For example, the authors (Kestur et al., 2019) proposed a deep convolutional neural network architecture for mango detection using semantic segmentation, named MangoNet. Also, the authors (Koirala et al., 2021) named their YOLO-based network MangoYOLO, and (Sun et al., 2023) named their modified YOLOv5 for pear fruit detection YOLO-P. The authors (Kerkech et al., 2020) proposed a deep convolutional neural network architecture for vine disease detection named VddNet, with a parallel architecture based on the VGG encoder.…”
Section: Applications (mentioning)
confidence: 99%
“…The research combined YOLOv7 with MobileNetV3 to extract features and reduce parameters for identifying rice pests and diseases; the prediction accuracy reached 93.7% [8]. Research on improving the YOLOv5 model, using robots to detect pears in both day and night conditions, achieved an accuracy of 97.6%, 1.8% higher than the conventional method [9]. The study combined YOLOv5, ShuffleNet and MobileNet to identify diseases on peach leaves with high accuracy, 5.6% higher than before the improvement [10].…”
Section: Introduction (mentioning)
confidence: 99%