The recognition accuracy of traditional image recognition methods depends heavily on the design of complicated, tedious hand-crafted features. To address poor accuracy and complicated feature extraction, this study presents a methodology for estimating the severity of wheat Fusarium head blight (FHB) from a small sample dataset using transfer learning and convolutional neural networks (CNNs). First, we exploited the powerful feature learning and feature expression capabilities of CNNs to learn FHB characteristics automatically: VGG16, ResNet50, and MobileNetV1 models pre-trained on ImageNet were transferred to FHB severity estimation, and the fully connected (FC) layer of each model was modified. Second, wheat images acquired at the peak of the FHB outbreak were used as the research object; after size-filling preprocessing, the image dataset was expanded with operations such as mirror flipping, rotation, and superimposed noise to improve model performance and reduce overfitting. Finally, the VGG16, ResNet50, and MobileNetV1 models were fine-tuned by transfer learning under the TensorFlow deep learning framework. With transfer learning and data augmentation, the ResNet50 model outperformed the other two models in Accuracy, Precision, Recall, and F1 score, achieving the highest accuracy of 98.42% and an F1 score of 97.86%. Having the highest recognition accuracy, the ResNet50 model provides technical support and a reference for the accurate recognition of FHB.
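The pipeline described above (a pre-trained backbone with its FC head replaced, plus flip/rotation/noise augmentation) can be sketched in TensorFlow/Keras. This is a minimal illustration, not the authors' code: the number of severity classes (4) and the 224×224 input size are assumptions, and `weights=None` is used so the sketch runs offline, where the paper's setting would use `weights="imagenet"` to transfer ImageNet features.

```python
# Sketch of transfer learning for FHB severity estimation.
# Assumptions (not from the paper): 4 severity grades, 224x224 RGB inputs.
import tensorflow as tf

NUM_CLASSES = 4  # hypothetical number of FHB severity grades


def build_fhb_model(weights=None):
    # weights="imagenet" would transfer ImageNet features as in the paper;
    # weights=None keeps this sketch runnable without a download.
    base = tf.keras.applications.ResNet50(
        include_top=False, weights=weights, input_shape=(224, 224, 3))
    base.trainable = False  # freeze the transferred convolutional features
    # Replace the fully connected head, as described in the abstract.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# Augmentation mirroring the abstract: mirror flip, rotation, added noise.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.GaussianNoise(0.05),
])

model = build_fhb_model()
```

In use, `augment` would be applied to training batches before they reach `model.fit`, so each epoch sees randomly transformed copies of the small dataset.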
Crop disease identification and monitoring is an important research topic in smart agriculture; in particular, it is a prerequisite for disease detection and the mapping of infected areas. Wheat Fusarium head blight (FHB) seriously threatens the quality and yield of wheat, so rapid monitoring of FHB is important. This study proposed a method based on unmanned aerial vehicle (UAV) low-altitude remote sensing and multispectral imaging, combined with spectral and textural analysis, to monitor FHB. First, multispectral imagery of the wheat population was collected by UAV. Second, 10 vegetation indices (VIs) were extracted from the imagery, along with three types of textural indices (TIs): the normalized difference texture index (NDTI), the difference texture index (DTI), and the ratio texture index (RTI). Finally, VIs alone, TIs alone, and the fusion of VIs and TIs were used as input features to construct wheat FHB monitoring models with k-nearest neighbor (KNN), particle swarm optimization support vector machine (PSO-SVM), and XGBoost. The XGBoost model with the fused VI and TI features performed best, with a test-set accuracy of 93.63% and an F1 score of 92.93%. This study provides a new approach and technology for the rapid, nondestructive monitoring of wheat FHB.
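The three textural indices named above follow the usual band-pair forms from the remote-sensing literature. A minimal sketch, assuming T1 and T2 are texture-feature values (e.g. GLCM statistics) from two bands; the sample values and band pairing are illustrative, not taken from the paper:

```python
# Sketch of the three textural indices (NDTI, DTI, RTI) and feature fusion.
import numpy as np


def ndti(t1, t2):
    # Normalized difference texture index: (T1 - T2) / (T1 + T2)
    return (t1 - t2) / (t1 + t2)


def dti(t1, t2):
    # Difference texture index: T1 - T2
    return t1 - t2


def rti(t1, t2):
    # Ratio texture index: T1 / T2
    return t1 / t2


t1 = np.array([3.0, 6.0])  # illustrative texture values for two pixels
t2 = np.array([1.0, 2.0])

tis = np.column_stack([ndti(t1, t2), dti(t1, t2), rti(t1, t2)])
vis = np.zeros((2, 10))          # placeholder for the 10 vegetation indices
fused = np.hstack([vis, tis])    # VI + TI fusion used as model input
```

The `fused` array plays the role of the combined feature matrix that is passed to KNN, PSO-SVM, or XGBoost in the study.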
Aphis gossypii Glover is a major insect pest in cotton production and can cause yield reduction in severe cases. In this paper, we proposed an A. gossypii infestation monitoring method that identifies the infestation level at the cotton seedling stage, improving the efficiency of early warning and forecasting of A. gossypii and enabling precise prevention and control according to the predicted infestation level. We collected A. gossypii infestation images with smartphones and compiled an infestation image dataset. We then constructed, trained, and tested three infestation recognition models based on the Faster Region-based Convolutional Neural Network (Faster R-CNN), You Only Look Once (YOLO)v5, and single-shot detector (SSD) architectures. Under the same conditions, the YOLOv5 model achieved the highest mean average precision (mAP, 95.7%) and frames per second (FPS, 61.73). In studying the influence of image resolution on YOLOv5 performance, we found that YOLOv5s outperformed YOLOv5x overall, performing best at a resolution of 640×640 (mAP of 96.8%, FPS of 71.43); a comparison with the newer YOLOv8s also favored YOLOv5s. Finally, the trained model was deployed to an Android phone, where detection performed best at a resolution of 256×256, with an accuracy of 81.0% and an FPS of 6.98. The real-time recognition system established in this study can provide technical support for infestation forecasting and precise prevention of A. gossypii.
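The mAP figures reported for these detectors rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal IoU sketch, assuming corner-format boxes [x1, y1, x2, y2] (an illustrative convention, not stated in the paper):

```python
# Sketch of intersection over union (IoU), the overlap measure that
# underlies mAP evaluation for detectors such as Faster R-CNN, YOLOv5, SSD.
def iou(box_a, box_b):
    # Intersection rectangle corners.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7, about 0.143
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5); precision-recall curves built from these matches are averaged into mAP.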