Wheat is one of the most important food crops for mankind, and many new varieties are bred every year. The accurate identification of wheat varieties can promote the development of the wheat industry and protect breeding property rights. Although gene-analysis technology can determine wheat varieties accurately, it is costly, time-consuming, and inconvenient. Traditional machine learning methods can significantly reduce the cost and time of wheat cultivar identification, but their accuracy is not high. In recent years, deep learning methods have further improved accuracy over traditional machine learning, but it is difficult to keep improving identification accuracy once a deep learning model has converged. Based on the ResNet and SENet models, this paper draws on the idea of the bagging-based ensemble estimator algorithm and proposes CMPNet, a deep learning model for wheat classification that couples images from the tillering stage, the flowering stage, and the seed. This convolutional neural network (CNN) model has a symmetrical structure along the direction of tensor flow. The model uses collected images of different types of wheat across multiple growth periods. First, it applies transfer learning to the ResNet-50, SE-ResNet, and SE-ResNeXt models and trains them on collected images of 30 kinds of wheat at different growth stages. It then uses a concat layer to connect the output layers of the three models and finally obtains the wheat classification results through the softmax function. The accuracy of wheat variety identification increased from 92.07% at the seed stage, 95.16% at the tillering stage, and 97.38% at the flowering stage to 99.51% for the coupled model. The model's single inference time was only 0.0212 s.
The model not only significantly improves the classification accuracy of wheat varieties, but also achieves low cost and high efficiency, making it a novel and important technical reference for wheat producers, managers, and law enforcement supervisors in wheat production practice.
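The fusion step described above (three backbone outputs joined by a concat layer, then softmax) can be sketched minimally in NumPy. This is an illustrative sketch, not the paper's implementation: the branch dimension (64), the number of branches, and the final weight matrix are assumed for demonstration; only the concat-then-softmax structure comes from the text.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(branch_outputs, w, b):
    # Concatenate the feature vectors from the three branch models,
    # then a final fully connected layer plus softmax yields class probabilities.
    fused = np.concatenate(branch_outputs, axis=-1)
    return softmax(fused @ w + b)

rng = np.random.default_rng(0)
branches = [rng.standard_normal(64) for _ in range(3)]  # hypothetical 64-d branch outputs
w = rng.standard_normal((192, 30)) * 0.1                # 3 x 64 inputs -> 30 wheat classes
b = np.zeros(30)
probs = ensemble_predict(branches, w, b)                # one probability per variety
```

The concat layer lets the final classifier weight evidence from all three growth-stage branches jointly, rather than averaging their independent predictions.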
The innovation of germplasm resources and the continuous breeding of new varieties of apples (Malus domestica Borkh.) have yielded more than 8000 apple cultivars. The ability to identify apple cultivars with ease and accuracy can solve problems in apple breeding related to property rights protection and promote the healthy development of the global apple industry. However, the existing methods are inconsistent and time-consuming. This paper proposes an efficient and convenient method for the classification of apple cultivars using a deep convolutional neural network with leaf image input, inspired by the delicate symmetry of human-brain learning. The model was constructed using the TensorFlow framework and trained on a dataset of 12,435 leaf images for the identification of 14 apple cultivars. The proposed method achieved an overall accuracy of 0.9711 and successfully avoided the over-fitting problem. Tests on an unknown independent testing set resulted in a mean accuracy, mean error, and variance of μ_acc = 0.9685, μ_ε = 0.0315, and σ² = 1.89025 × 10⁻⁴, respectively, indicating that the generalization accuracy and stability of the model were very good. Finally, the classification performance for each cultivar was tested. The results show that the model had an accuracy of 1.0000 for the Ace, Hongrouyouxi, Jazz, and Honey Crisp cultivars, and only one leaf was incorrectly identified for each of the 2001, Ada Red, Jonagold, and Gold Spur cultivars, with accuracies of 0.9787, 0.9800, 0.9773, and 0.9737, respectively. The Jingning1 and Pinova cultivars were classified with the lowest accuracies, 0.8780 and 0.8864, respectively. The results also show that the genetic relationship between the cultivars Shoufu 3 and Yanfu 3 is very close, mainly because both were selected from a red mutation of Fuji and bred in Yantai City, Shandong Province, China.
Generally, this study indicates that the proposed deep learning model is a novel and improved solution for apple cultivar identification, with high generalization accuracy, stable convergence, and high specificity.
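The generalization statistics reported above (μ_acc, μ_ε, σ²) are simply the mean, complement, and variance of per-run accuracies on the independent testing set. A short sketch of how such figures are computed; the five per-run accuracies below are made-up illustrative values (chosen only so their mean matches the reported μ_acc = 0.9685), not data from the paper:

```python
import numpy as np

# Hypothetical accuracies from five independent runs on the unseen testing set.
accs = np.array([0.9623, 0.9710, 0.9688, 0.9745, 0.9659])

mu_acc = accs.mean()      # mean accuracy over runs
mu_err = 1.0 - mu_acc     # mean error is the complement of mean accuracy
sigma2 = accs.var()       # population variance across runs (ddof=0)
```

A small σ² indicates that accuracy barely fluctuates between independent test runs, which is what the paper uses to argue the model's stability.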
With the continuous innovation and development of fruit-breeding technologies, more than 8000 varieties of apples now exist. The accurate identification of apple varieties can promote the healthy and stable development of the global apple industry and protect the property rights of breeders. To avoid economic losses due to the improper identification of varieties at the seedling-procurement stage, this paper proposes classifying varieties using images of apple leaves. Because traditional classification methods and standard deep learning networks such as AlexNet, VGG, and ResNet fall short in robustness and generalizability, we developed the Multi-Attention Fusion Convolutional Neural Network (MAFNet) classification method for apple leaf images. A convolutional block distribution of [2,2,2,2] gives the feature extraction layers a symmetric structure. Tailored to the characteristics of the dataset, the model builds on ResNet by optimizing the feature extraction module and integrating a variety of attention mechanisms to distribute weights over channel features, reduce interference before and after feature extraction, and accurately extract image features from low-dimensional to high-dimensional; the apple classification results are finally obtained through the softmax function. The experiments were conducted on a mixture of leaves from 30 apple varieties at two growth stages: tender and mature. A total of 14,400 images were used for training, 2400 for validation, and 7200 for testing. The model's classification accuracy was 98.14%, improving accuracy and reducing classification time compared with the previous models.
Among the varieties, "Red General", "SinanoGold", and "Jonagold" reached 100% accuracy, and the bud variants of the Fuji line ("Fuji 2001", "Red General", "Yanfu 0", and "Yanfu 3") were classified with over 90% accuracy. The method proposed in this paper not only significantly improves the classification accuracy of apple cultivars, but also achieves this at low cost and with high efficiency, providing a new way of thinking and an essential technical reference for apple cultivar identification by growers, operators, and law enforcement supervisors in production practice.
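The channel-weighting idea behind MAFNet's attention fusion can be illustrated with a squeeze-and-excitation-style gate, the mechanism SENet-based models use to distribute weights over channel features. This is a generic NumPy sketch under assumed shapes (8 channels, 4×4 maps, reduction ratio 4), not MAFNet's actual attention module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_channel_attention(feat, w1, w2):
    # Squeeze: global average pool each channel to a single descriptor.
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)          # (C,)
    # Excite: bottleneck FC -> ReLU -> FC -> sigmoid gate in (0, 1).
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # (C,)
    # Reweight: scale each channel of the feature map by its gate.
    return feat * gate[:, None, None]

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 4, 4))       # hypothetical C=8, 4x4 feature map
w1 = rng.standard_normal((2, 8)) * 0.5   # reduction ratio 4: 8 -> 2
w2 = rng.standard_normal((8, 2)) * 0.5   # expansion back: 2 -> 8
y = se_channel_attention(x, w1, w2)
```

Because each gate lies strictly between 0 and 1, informative channels are preserved while interfering channels are suppressed, which is the "weight distribution of channel features" described above.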
To address the poor performance of real-time semantic segmentation of road conditions in video images caused by insufficient light and motion blur when vehicles drive at night, this study proposes a scheme: a blur-information complementation strategy based on generative models, together with a network that fuses the outputs of different intermediate layers to complement spatial semantics and also embeds irregular-convolution attention modules for the fine extraction of moving-target boundaries. First, DeblurGAN is used to generate information that restores the semantics lost from the original image due to blurring; then, the outputs of different intermediate layers in the backbone network are extracted, assigned different weight scaling factors, and fused; finally, by comparing the performance of different attention mechanisms, the irregular-convolution attention with the best effect is selected. The scheme achieves a Global Accuracy of 89.1% and a Mean IoU of 94.2% on the night driving dataset of this experiment, exceeding the best performance of DeepLabv3 by 1.3 and 7.2 percentage points, respectively, and achieves an accuracy of 83.0% on the small-volume label (Moveable), outperforming all control models. The experimental results demonstrate that the solution can effectively cope with the various problems of night driving and enhance the model's perception and analysis of driving road conditions. The results provide a technical reference for the semantic segmentation problem of vehicles driving in nighttime environments.
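The middle step of the pipeline, fusing intermediate layer outputs with per-layer weight scaling factors, amounts to a weighted sum of feature maps. A minimal sketch, assuming three layers already resized to a common shape and illustrative weights (the actual factors and layer choices are the paper's, not shown here):

```python
import numpy as np

def fuse_intermediate_outputs(features, alphas):
    # Scale each intermediate feature map by its weight factor and sum them.
    # All maps are assumed to have been resized to a common shape beforehand.
    return sum(a * f for f, a in zip(features, alphas))

rng = np.random.default_rng(2)
feats = [rng.standard_normal((16, 8, 8)) for _ in range(3)]  # three backbone taps
alphas = [0.5, 0.3, 0.2]   # hypothetical weight scaling factors
fused = fuse_intermediate_outputs(feats, alphas)
```

Giving shallower layers nonzero weight lets the fused map retain the spatial detail that deep layers lose, which is what "complementing spatial semantics" refers to above.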