2022
DOI: 10.1016/j.compag.2022.107477

Maize tassel area dynamic monitoring based on near-ground and UAV RGB images by U-Net model

Cited by 21 publications (11 citation statements)
References 37 publications
“…In contrast, refs. [39,40,43] directly use the VGG16 network as the encoding network for U-Net without simplifying VGG16, which also achieves better segmentation results but increases the width and depth of the network and requires a more capable runtime environment to run the model. Chen et al. [53] achieved accurate segmentation of grains, branches, and straws in hybrid rice grain images by improving the U-Net model, but their improvement still consisted of making the model extract richer semantic information by increasing its depth.…”
Section: Comparison of the Overall Accuracy of the Model
Mentioning, confidence: 99%
“…VGG16 is a typical structure in the VGG network family and is frequently used as a feature extraction network for U-Net, since it is well suited to classification and localization tasks. Yu et al. [39] investigated the potential of the U-Net model to segment maize tassels, and the results showed that the segmentation accuracy of the U-Net model with VGG16 as the feature extraction network was better at all tasseling stages than that of the U-Net model with MobileNet; Sugirtha et al. [40] likewise confirmed that U-Net with a VGG16 encoder performs better than one with a ResNet-50 encoder when segmenting urban streets. To accomplish reliable detection of navigation lines across different growth periods of potato, Yang et al. [41] presented a fitting approach based on feature-midpoint modification and replaced the original U-Net's feature extraction structure with VGG16; Zou et al. [42] proposed an image-enhancement method based on randomly synthesizing "foreground" and "background" and reduced the number of convolutional layers in the U-Net network, achieving semantic segmentation of field weed images.…”
Section: Introduction
Mentioning, confidence: 99%
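Several of the statements above turn on the same architectural idea: keeping a U-Net-style decoder with skip connections while swapping the encoder for the convolutional stages of VGG16. The sketch below illustrates that pattern in PyTorch/torchvision; it is not the exact model from the cited paper or any of the citing works, and the stage splits, decoder widths, and class count are illustrative assumptions.

```python
# Minimal sketch of a U-Net-style segmentation network with a VGG16 encoder.
# Illustration of the general pattern discussed above, not the architecture
# of Yu et al. or the other cited works; widths and splits are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg16

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class VGG16UNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        f = vgg16(weights=None).features  # weights="IMAGENET1K_V1" to pretrain
        # Split VGG16 at its max-pool boundaries so each stage's output can
        # serve as a skip connection at a distinct spatial resolution.
        self.enc1 = f[:4]     # -> 64 ch,  full resolution
        self.enc2 = f[4:9]    # -> 128 ch, 1/2
        self.enc3 = f[9:16]   # -> 256 ch, 1/4
        self.enc4 = f[16:23]  # -> 512 ch, 1/8
        self.enc5 = f[23:30]  # -> 512 ch, 1/16 (bottleneck)
        self.up4, self.dec4 = self._up(512, 256), conv_block(256 + 512, 256)
        self.up3, self.dec3 = self._up(256, 128), conv_block(128 + 256, 128)
        self.up2, self.dec2 = self._up(128, 64),  conv_block(64 + 128, 64)
        self.up1, self.dec1 = self._up(64, 32),   conv_block(32 + 64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    @staticmethod
    def _up(cin, cout):
        return nn.ConvTranspose2d(cin, cout, kernel_size=2, stride=2)

    def forward(self, x):  # x: (N, 3, H, W), H and W divisible by 16
        s1 = self.enc1(x); s2 = self.enc2(s1); s3 = self.enc3(s2)
        s4 = self.enc4(s3); b = self.enc5(s4)
        d = self.dec4(torch.cat([self.up4(b), s4], dim=1))
        d = self.dec3(torch.cat([self.up3(d), s3], dim=1))
        d = self.dec2(torch.cat([self.up2(d), s2], dim=1))
        d = self.dec1(torch.cat([self.up1(d), s1], dim=1))
        return self.head(d)  # per-pixel class logits at input resolution

logits = VGG16UNet(num_classes=2)(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 2, 256, 256])
```

A lighter backbone such as MobileNet can be dropped in the same way, which is exactly the accuracy-versus-resource trade-off the statements above describe: the VGG16 encoder is wider and deeper than MobileNet, so it tends to segment better but demands more from the runtime environment.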
“…The experiment conducted by Deb et al. utilized a dataset of over 2010 images from the plant phenotyping (CVPPP) and KOMATSUNA datasets. Another study (6) examined the accuracy differences of the U-Net model when applied to images of various maize varieties, different tasseling stages, and different spatial resolutions. In addition to the aforementioned techniques, object detection represents another crucial computer vision approach, which identifies objects in images or videos by drawing bounding boxes around them.…”
Section: Introduction
Mentioning, confidence: 99%
“…The model aims to improve the accuracy and efficiency of winter wheat ear segmentation and achieves an F1 score of 87.25%. Yu et al. (2022) proposed an Unmanned Aerial Vehicle (UAV) tassel image recognition algorithm based on the U-Net model. The algorithm combines lightweight and heavyweight feature extraction networks, striking a balance between accuracy and speed with a relative root mean square error (rRMSE) of 4.4.…”
Section: Introduction
Mentioning, confidence: 99%
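For context on the error figure quoted above: a relative RMSE normalizes the RMSE of predicted versus observed values, typically by the mean observed value. The excerpt does not give the exact formulation used by Yu et al. (2022), so the definition below is a common one assumed for illustration, and the sample numbers are invented, not data from the paper.

```python
# Sketch of RMSE and relative RMSE (rRMSE) as commonly defined; whether
# Yu et al. (2022) normalize by the observed mean is an assumption, and the
# counts below are hypothetical, not data from the paper.
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def rrmse_percent(y_true, y_pred):
    # RMSE divided by the mean observed value, expressed in percent
    return 100.0 * rmse(y_true, y_pred) / float(np.mean(y_true))

observed = [52, 61, 48, 70]    # hypothetical ground-truth tassel counts
predicted = [50, 63, 49, 68]   # hypothetical model predictions
print(f"RMSE = {rmse(observed, predicted):.2f}, "
      f"rRMSE = {rrmse_percent(observed, predicted):.1f}%")
```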