2021
DOI: 10.3390/rs14010046
Deep Convolutional Neural Network for Rice Density Prescription Map at Ripening Stage Using Unmanned Aerial Vehicle-Based Remotely Sensed Images

Abstract: In this paper, a UAV (unmanned aerial vehicle; DJI Phantom 4 RTK) and the YOLOv4 (You Only Look Once) deep neural network for target detection were employed to collect images of mature rice and detect rice ears, in order to produce a rice density prescription map. The YOLOv4 model was used for rapid detection of rice ears in the UAV-captured images, and the Kriging interpolation algorithm in ArcGIS was used to generate the rice density prescription maps. Mature rice images collected by the UAV were annotated manually and used to build the …
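As a rough illustration of the pipeline the abstract describes (a minimal sketch, not the authors' code: the plot coordinates, detection counts, grid extent, and variogram model below are all illustrative assumptions), per-plot rice-ear counts produced by a detector such as YOLOv4 can be interpolated into a continuous density surface with ordinary Kriging, for example via the pykrige package:

```python
# Minimal sketch: interpolate per-plot rice-ear counts into a density surface
# with ordinary Kriging (analogous to the ArcGIS Kriging step in the paper).
# Coordinates, counts, and the variogram model are illustrative assumptions.
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical sampling plots: x/y in metres, z = rice ears detected per plot
x = np.array([5.0, 20.0, 35.0, 10.0, 30.0])
y = np.array([5.0, 10.0, 25.0, 30.0, 35.0])
z = np.array([118.0, 142.0, 97.0, 125.0, 103.0])  # stand-in counts

ok = OrdinaryKriging(x, y, z, variogram_model="spherical")

# Regular grid covering the field; the 1 m resolution is an assumption
gridx = np.arange(0.0, 40.0, 1.0)
gridy = np.arange(0.0, 40.0, 1.0)
density, variance = ok.execute("grid", gridx, gridy)  # density map + kriging variance
```

The resulting raster could then be exported to a GIS and classified into prescription zones, which is the role ArcGIS plays in the paper.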

Cited by 17 publications (12 citation statements)
References 52 publications
“…The main influencing factor in terms of the UAV images was their spatial resolution [4], an important indicator of sorghum head detection accuracy that is determined mainly by the UAV flight altitude [8]; in this study the altitude was 20 m and the spatial resolution of the images was 1.1 cm. Compared with existing studies on crop spike detection and counting, which commonly used millimeter-level (generally less than 5 mm) imagery [4,6,10,18,20,22,71], the sorghum head targets in this study's centimeter-level imagery were smaller and their boundaries more blurred. The lower spatial resolution directly affected the learning performance of the DL methods [72,73], which undoubtedly hindered sorghum head detection and counting from the UAV images.…”
Section: Effects of Other Factors (mentioning)
confidence: 73%
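The altitude-resolution relationship this statement invokes is the standard ground sample distance (GSD) calculation: GSD grows linearly with flight altitude for a fixed camera. A minimal sketch follows; the sensor width, focal length, and pixel count below are illustrative assumptions, not the sorghum study's actual camera (its 1.1 cm at 20 m implies a different sensor/lens combination):

```python
# Minimal sketch: ground sample distance (GSD) as a function of flight altitude.
# Camera parameters are illustrative assumptions, not the study's actual sensor.
def gsd_cm_per_px(altitude_m: float,
                  sensor_width_mm: float = 13.2,  # assumed 1-inch sensor
                  focal_length_mm: float = 8.8,   # assumed lens
                  image_width_px: int = 5472) -> float:
    """GSD in cm/pixel for a nadir-looking camera."""
    gsd_mm = (altitude_m * 1000.0 * sensor_width_mm) / (focal_length_mm * image_width_px)
    return gsd_mm / 10.0  # mm per pixel -> cm per pixel

print(gsd_cm_per_px(20.0))  # ~0.55 cm/px under these assumed parameters
```

Halving the altitude halves the GSD, which is why flight altitude dominates the achievable spatial resolution.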
“…Compared with EfficientDet, which required greater clarity of the sorghum heads, and the SSD network, which predicts targets on each feature layer individually, the YOLOv4 algorithm achieved better sorghum head detection results. The YOLO series of algorithms has been used to detect corn seedlings [19], rice ears [10], cotton seedlings [63], cherry fruit [64], apples and apple flowers [49,51], and greenhouses [65], with broad applicability and good detection results. However, the CSPDarknet53 backbone of YOLOv4 stacks more convolutional layers, so YOLOv4 required more parameters and floating-point computation; its parameter count was the largest of the three methods.…”
Section: Comparison of IoU Thresholds (mentioning)
confidence: 99%
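The parameter-count comparison made in that statement can be reproduced for any PyTorch implementation of these detectors with a short helper. This is a generic sketch; the toy module below merely stands in for a real YOLOv4, SSD, or EfficientDet model:

```python
# Generic sketch: count trainable parameters of any torch.nn.Module,
# e.g. to compare YOLOv4, SSD, and EfficientDet implementations.
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Toy stand-in module; substitute a real detector implementation here.
toy = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
)
print(f"{count_parameters(toy):,} trainable parameters")
```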
“…In [35], given the growing availability of RGB data with extremely high spatial resolution, the study demonstrated the efficiency of a deep convolutional neural network technique for producing rice density prescription maps from UAV-based imagery. For counting and geolocation, the solution in [36] performs noticeably better than competing object detection techniques.…”
Section: Results (mentioning)
confidence: 96%