2022
DOI: 10.1016/j.compag.2022.106917
Active learning with MaskAL reduces annotation effort for training Mask R-CNN on a broccoli dataset with visually similar classes

Cited by 21 publications (11 citation statements)
References 15 publications
“…Object detection methods can be divided into two categories: two stage and one stage. In the two-stage object detection, the objects are first localized and then classified, and the representative algorithms are R-CNN [11], Fast R-CNN, and R-FCN [12]. One-stage object detection regards object detection as a regression problem and performs localization and classification at the same time.…”
Section: Related Work
confidence: 99%
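The distinction drawn in this citation statement can be made concrete with a toy sketch (schematic only; the weights, shapes, and helper names below are illustrative assumptions, not taken from any cited paper). A one-stage head regresses box parameters and class scores jointly in a single pass, while a two-stage pipeline first produces proposals and then classifies each one:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_stage_head(feature_map, num_classes=3):
    """One-stage: every grid cell regresses (x, y, w, h) and class
    scores at the same time, in a single forward pass."""
    h, w, c = feature_map.shape
    # Hypothetical 1x1-conv weights mapping features to 4 box params + classes.
    weights = rng.standard_normal((c, 4 + num_classes))
    return feature_map @ weights              # shape: (h, w, 4 + num_classes)

def two_stage_head(feature_map, proposals, num_classes=3):
    """Two-stage: first localize (region proposals), then classify
    each proposed region separately."""
    c = feature_map.shape[-1]
    cls_weights = rng.standard_normal((c, num_classes))
    scores = []
    for (row, col) in proposals:              # stage 2: classify each proposal
        roi_feature = feature_map[row, col]   # crude stand-in for RoI pooling
        scores.append(roi_feature @ cls_weights)
    return np.array(scores)                   # shape: (num_proposals, num_classes)

features = rng.standard_normal((7, 7, 16))          # toy backbone output
dense = one_stage_head(features)                    # dense per-cell predictions
staged = two_stage_head(features, [(1, 2), (4, 4)]) # per-proposal class scores
```

The one-stage head emits predictions for every grid cell at once, which is why the citation statement calls it a regression formulation; the two-stage head only classifies locations that survived the proposal step.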
“…After determining the depth of the Backbone network, on the basis of YOLOv5_16, the Backbone adds the SPP pyramid pooling module, and the Neck uses the FPN and PAN structures to fuse the features of the 8-fold and 16-fold downsampling layers; the resulting model is named YOLOv5s_B. To explore the optimal pooling effect of SPP, four common sets of SPP pooling kernels are tested in this paper: (3, 5, 7), (5, 7, 9), (7, 9, 13), and (9, 11, 13), named YOLOv5s_B_a, YOLOv5s_B_b, YOLOv5s_B_c, and YOLOv5s_B_d, respectively. Its structure is shown in Figure 9.…”
Section: Influence Of Feature Fusion On Detection Performance
confidence: 99%
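The SPP module described in this statement applies several parallel max-poolings (stride 1, "same" padding) and concatenates the results with the input along the channel axis. A minimal NumPy sketch of that idea, assuming a single-channel feature map for brevity (the cited work operates on multi-channel tensors):

```python
import numpy as np

def max_pool_same(x, k):
    """Max pooling with kernel k, stride 1 and 'same' padding (k // 2),
    so the spatial size is preserved, as in YOLO-style SPP."""
    pad = k // 2
    h, w = x.shape
    padded = np.full((h + 2 * pad, w + 2 * pad), -np.inf)
    padded[pad:pad + h, pad:pad + w] = x
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def spp(feature_map, kernels=(5, 7, 9)):
    """Concatenate the input with its max-pooled versions along a new
    channel axis; with n kernels the channel count grows (n + 1)-fold."""
    channels = [feature_map] + [max_pool_same(feature_map, k) for k in kernels]
    return np.stack(channels, axis=0)

x = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 feature map
out = spp(x, kernels=(5, 7, 9))                # 4 channels: input + 3 poolings
```

Swapping `kernels` for (3, 5, 7), (7, 9, 13), or (9, 11, 13) reproduces the four variants compared in the statement; only the receptive field of each pooled channel changes, not the spatial size.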
“…The pretreatment focused on converting the full image datasets from RGB to four color spaces, preserving details and maintaining a balanced number of elements per group, which helps in the proper training of the neural network (Ciocca, Napoletano & Schettini, 2018; Blok et al., 2022). This is similar to what was performed by Castro et al. (2019a), who, using RGB images, extracted mean values and converted them to L*a*b* and HSV in a stage prior to classifier training, finding that the best combination was a support vector machine with the RGB color space.…”
Section: Results
confidence: 99%
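The color-space pre-processing step described above can be sketched with the standard-library `colorsys` module (an illustrative stand-in; the cited works likely used OpenCV or similar image libraries, and the nested-list image layout here is an assumption for brevity):

```python
import colorsys

def rgb_image_to_hsv(image):
    """Convert an image, given as nested lists of (r, g, b) tuples with
    values in 0..255, to (h, s, v) floats in 0..1 in the same layout."""
    return [[colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
             for (r, g, b) in row]
            for row in image]

# Toy 2x2 image: red, green / blue, mid grey.
rgb = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (128, 128, 128)]]
hsv = rgb_image_to_hsv(rgb)
```

The same per-pixel pattern extends to L*a*b* or other spaces with an appropriate conversion function; the point of the pretreatment is that the converted datasets keep the same structure and balance as the RGB originals.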
“…In the agricultural field, CNNs have been successfully applied to crop variety identification (Too et al., 2019), haploid and diploid seeds (Altuntaş, Cömert & Kocamaz, 2019), nematodes (Abade et al., 2022), plant disease recognition (Too et al., 2019), damage in milled rice grains (Moses et al., 2022), broccoli head quality discrimination (Blok et al., 2022), crop pests (Ayan, Erbay & Varçın, 2020), microstructural element discrimination (Castro et al., 2019b), and characterization of emulsions (Lu et al., 2021), among others, using various common CNN architectures (AlexNet, ResNet, MobileNet, Inception, VGG16, DenseNet, among others), new architectures, and/or training approaches.…”
Section: Introduction
confidence: 99%
“…Most current agro-food robotics research aims to improve the perception capabilities of the systems, which increases the robots' understanding of a scene. In recent years, most research has focused on increasing detection and localization performance, especially through deep learning (Blok et al., 2022; Ruigrok et al., 2020). Although these methods have been shown to deal with variation to some extent, the challenge of occlusion is not adequately addressed (Montoya-Cavero et al., 2022; Kootstra et al., 2021).…”
Section: Introduction
confidence: 99%