Since the advent of deep learning, artificial intelligence has developed rapidly, producing one achievement after another. When deep learning is applied to fruit target detection, complex recognition backgrounds, high similarity between fruit models, severe texture interference, and partial occlusion of fruits leave traditional detection methods with low detection rates. To address these problems, a BCo-YOLOv5 network model is proposed to recognize and detect fruit targets in orchards. We use YOLOv5s as the basic model for feature extraction and target detection, and introduce a bidirectional cross attention mechanism (BCAM) between the backbone network and the neck network of the YOLOv5s base model. BCAM uses a weight-multiplication strategy and a maximum-weight strategy to build deeper positional feature relationships, which better assists the network in detecting fruit targets in orchard images. After training and testing, the mAP of the BCo-YOLOv5 network model reaches 97.70%. To verify the detection ability of BCo-YOLOv5 on citrus, apple, grape, and other fruit targets, we conducted a large number of experiments with the network. The experimental results show that this method effectively detects citrus, apple, and grape targets in fruit images, and that the fruit target detection method based on the BCo-YOLOv5 network outperforms most orchard fruit detection methods.
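The two fusion strategies the abstract names for BCAM can be illustrated on attention weight maps. This is a minimal sketch of the idea only, assuming the two strategies act elementwise on a pair of weight maps; the function name and toy values are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_attention(w_a, w_b):
    """Sketch of the two fusion strategies named in the abstract:
    weight multiplication and maximum weight (an assumption about their
    elementwise form, not the paper's exact implementation)."""
    mul = w_a * w_b            # weight-multiplication strategy
    mx = np.maximum(w_a, w_b)  # maximum-weight strategy
    return mul, mx

# toy 2x2 attention weight maps from two attention branches
w_a = np.array([[0.2, 0.8], [0.5, 0.1]])
w_b = np.array([[0.6, 0.3], [0.4, 0.9]])
mul, mx = fuse_attention(w_a, w_b)
```

Multiplication suppresses positions unless both branches agree they matter, while the maximum keeps a position salient if either branch highlights it.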
Tomato is an important but fragile crop that is frequently infected by bacteria or viruses during its development. Detecting tomato leaf diseases quickly and accurately can increase productivity and quality. Because of the intricate growing environment of tomatoes, the inconspicuous features of their disease spots, and the small spot areas, present machine vision approaches fail to reliably recognize diseased tomato leaves. This research therefore proposes a novel paradigm for detecting tomato leaf disease. First, the INLM (integration nonlocal means) filtering algorithm reduces the interference of surrounding noise on the features. Then, using ResNeXt50 as the backbone, we create DCCAM-MRNet, a novel tomato image recognition network. Dilated convolution (DC) is employed in STAGE 1 of DCCAM-MRNet to extend the network's perceptual area and locate the scattered disease spots on tomato leaves. The coordinate attention (CA) mechanism is then introduced to capture cross-channel information and direction- and position-sensitive data, allowing the network to detect localized tomato disease spots more accurately. Finally, we offer a mixed residual connection (MRC) technique that combines a residual block (RS-Block) and a transformed residual block (TRS-Block); this strategy increases the network's accuracy while also reducing its size. According to the experimental results, DCCAM-MRNet's classification accuracy is 94.3 percent, which is higher than the existing networks, and it has 0.11 M fewer parameters than the backbone network ResNeXt50. Combining INLM and DCCAM-MRNet to identify tomato diseases is therefore a successful strategy.
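The coordinate attention (CA) idea of keeping direction- and position-sensitive information can be sketched as pooling the feature map along each spatial axis separately and gating positions with the outer product of the two descriptors. This is a simplified NumPy illustration of the general CA concept, not the paper's module; all names and the gating form are assumptions.

```python
import numpy as np

def coordinate_attention_weights(x):
    """Simplified sketch of coordinate attention: pool along H and W
    separately so the gates stay direction- and position-aware
    (omits the learned transforms a real CA block would apply)."""
    # x: feature map of shape (C, H, W)
    h_desc = x.mean(axis=2)  # (C, H): one descriptor per row
    w_desc = x.mean(axis=1)  # (C, W): one descriptor per column
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    # outer product of the two gates re-weights every (h, w) position
    weights = sigmoid(h_desc)[:, :, None] * sigmoid(w_desc)[:, None, :]
    return x * weights

x = np.ones((4, 8, 8))          # toy feature map
out = coordinate_attention_weights(x)
```

Unlike global average pooling, which collapses both axes into one scalar per channel, the two axis-wise descriptors can still localize where along each axis the response is strong.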
The target detection of smoke through remote sensing images obtained by unmanned aerial vehicles (UAVs) can be effective for monitoring early forest fires. However, smoke targets in UAV images are often small and difficult to detect accurately. In this paper, we use YOLOX-L as a baseline and propose a forest smoke detection network based on a parallel spatial domain attention mechanism and a small-scale transformer feature pyramid network (PDAM–STPNNet). First, to enhance the proportion of small forest fire smoke targets in the dataset, we use component stitching data enhancement to generate small forest fire smoke target images in a scaled collage. Then, to fully extract the texture features of smoke, we propose a parallel spatial domain attention mechanism (PDAM) to consider the local and global textures of smoke with symmetry. Finally, we propose a small-scale transformer feature pyramid network (STPN), which uses the transformer encoder to replace all CSP_2 blocks in turn on top of YOLOX-L’s FPN, effectively improving the model’s ability to extract small-target smoke. We validated the effectiveness of our model on a self-made dataset, the Wildfire Observers and Smoke Recognition Homepage, and the Bowfire dataset. The experiments show that our method has a better detection capability than previous methods.
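Component-stitching data enhancement, as described, rescales small-target crops and stitches them into a collage so small smoke targets occupy a larger share of the training image. The sketch below illustrates that idea under stated assumptions: the function name, grid layout, and nearest-neighbor resize are illustrative choices, not the paper's implementation.

```python
import numpy as np

def component_stitch(patches, grid=(2, 2), tile=16):
    """Hypothetical sketch of component stitching: resize each small
    crop to a fixed tile and place the tiles in a grid collage."""
    rows, cols = grid
    canvas = np.zeros((rows * tile, cols * tile))
    for idx, patch in enumerate(patches[: rows * cols]):
        # naive nearest-neighbor resize of the crop to the tile size
        r = np.linspace(0, patch.shape[0] - 1, tile).astype(int)
        c = np.linspace(0, patch.shape[1] - 1, tile).astype(int)
        small = patch[np.ix_(r, c)]
        i, j = divmod(idx, cols)
        canvas[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile] = small
    return canvas

# four toy "smoke crops" of differing size, filled with distinct values
patches = [np.full((5, 7), float(i)) for i in range(4)]
canvas = component_stitch(patches)
```

Each collage then contributes several small targets per image, which raises their effective frequency in the training set without collecting new data.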
Grape disease is a significant contributory factor to the decline in grape yield, and it typically affects the leaves first. Efficient identification of grape leaf diseases remains a critical unmet need. To mitigate background interference in grape leaf feature extraction and improve the extraction of small disease spots, we combined the characteristic features of grape leaf diseases and developed a novel method for disease recognition and classification in this study. First, a Gaussian filter–Sobel smoothing–de-noising Laplace operator (GSSL) was employed to reduce image noise and enhance the texture of grape leaves. A novel network, the coordinated attention shuffle mechanism–asymmetric multi-scale fusion module net (CASM-AMFMNet), was subsequently applied for grape leaf disease identification. CoAtNet was employed as the network backbone to improve model learning and generalization capabilities, which alleviated the problem of gradient explosion to a certain extent. The CASM was further utilized to capture and focus on grape leaf disease areas, thereby reducing background interference. Finally, the asymmetric multi-scale fusion module (AMFM) was employed to extract multi-scale features from small disease spots on grape leaves for accurate identification of small target diseases. Experimental results based on our self-made grape leaf image dataset showed that, compared to existing methods, CASM-AMFMNet achieved an accuracy of 95.95%, an F1 score of 95.78%, and a mAP of 90.27%. Overall, the model and methods proposed in this report can successfully identify different diseases of grape leaves and provide a feasible scheme for deep learning to correctly recognize grape diseases during agricultural production, which may serve as a reference for other crop diseases.
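A Gaussian-smooth-then-edge-operator chain like the GSSL step can be sketched with the classic 3x3 kernels. This is a generic illustration of chaining Gaussian smoothing, a Sobel gradient, and a Laplacian, assuming valid-mode convolution; the exact kernel combination and ordering inside GSSL are the paper's, not reproduced here.

```python
import numpy as np

def conv2d(img, k):
    """Plain valid-mode 2-D convolution (no padding)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

GAUSS = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0  # de-noise
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])    # x-gradient
LAPLACE = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])      # 2nd derivative

def gssl_sketch(img):
    """Hypothetical GSSL-style chain: smooth, take edge magnitude,
    then enhance texture with the Laplacian."""
    smoothed = conv2d(img, GAUSS)
    edges = np.abs(conv2d(smoothed, SOBEL_X))
    return conv2d(edges, LAPLACE)

out = gssl_sketch(np.ones((10, 10)))  # a flat image yields zero response
```

On a constant image every derivative stage vanishes, which is a quick sanity check that the chain only responds to texture.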
For deep learning-based maize leaf disease detection, a maize disease identification method based on wavelet threshold-guided bilateral filtering, multi-channel ResNet, and an attenuation factor (WG-MARNet) is proposed. This method addresses the noise, background interference, and low detection accuracy of maize leaf disease images. First, a processing layer performing wavelet threshold-guided bilateral filtering (WT-GBF) is employed to reduce image noise and decompose the input image into its high- and low-frequency components, which increases the input image’s resistance to environmental interference and improves feature extraction. Secondly, for the multiscale feature fusion technique, an average down-sampling and tiling method is employed to strengthen feature representation and limit the risk of overfitting. Then, an attenuation factor is introduced on the high- and low-frequency multi-channels to reduce performance instability during training of the deep network. Finally, after comparing convergence and accuracy, PReLU and AdaBound are used instead of the ReLU activation function and the Adam optimizer. The experimental results showed that our method’s average recognition accuracy was 97.96% and the detection time for a single image was 0.278 seconds, an improvement in average detection accuracy. The method lays the groundwork for the precise control of maize diseases in the field.
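The wavelet-threshold component of a filter like WT-GBF conventionally relies on soft thresholding of detail coefficients. The sketch below shows that standard soft-thresholding rule only, as background for the abstract's terminology; it is not the paper's WT-GBF algorithm, and the threshold value is an arbitrary example.

```python
def soft_threshold(coeffs, t):
    """Classic wavelet soft thresholding: shrink each detail
    coefficient toward zero by t, zeroing anything smaller than t.
    Small coefficients are treated as noise and discarded."""
    out = []
    for c in coeffs:
        shrunk = max(abs(c) - t, 0.0)
        out.append(shrunk if c >= 0 else -shrunk)
    return out

# toy high-frequency detail coefficients and a threshold of 0.3
denoised = soft_threshold([0.5, -0.2, 1.0], 0.3)
```

Coefficients with magnitude below the threshold (here -0.2) vanish, while larger ones survive with reduced magnitude, which is what lets the high-frequency channel keep edges but drop noise.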
The appearance quality of apples directly affects their price. To grade apples automatically, an effective method for detecting apple surface defects is required. Aiming at the low recognition rate of apple surface defect detection under small-sample conditions, we designed an apple surface defect detection network (ASDINet) suitable for small-sample learning. Our self-developed apple sorting system collected RGB images of 50 apple samples for model verification, including non-defective and defective apples (rot, disease, lacerations, and mechanical damage). First, a segmentation network (AU-Net) with a stronger ability to capture small details was designed, and a Dep-conv module that expands the feature capacity of the receptive field was inserted into its down-sampling path; the number of convolutional layers in the single-layer convolutional module was positively correlated with the network depth. Next, to achieve real-time segmentation, we replaced the flooding of feature maps with mask output in the 13th layer of the network. Finally, we designed a global decision module (GDM) with global properties, which inserts a global spatial domain attention mechanism (GSAM) and performs fast prediction on abnormal images from the input masks. In comparison experiments with state-of-the-art models, our network achieved an AP of 98.8% and an F1-score of 97.75%, higher than those of most state-of-the-art networks; the detection speed reached 39 ms per frame, achieving a trade-off between accuracy and ease of deployment that is in line with actual production needs. In the data-sensitivity experiment, ASDINet met production needs when trained on only 42 defective images.
In addition, we discussed the effect of ASDINet in actual production; the test results showed that our proposed network performed in actual production consistently with the theoretical results.
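A mask-driven global decision of the kind the GDM performs can be illustrated with a trivial stand-in: flag an image as defective when the predicted defect mask covers more than a small fraction of pixels. This is a hypothetical simplification of the idea only; the real GDM uses a learned attention mechanism (GSAM), and the threshold here is an arbitrary assumption.

```python
def global_decision(mask, area_thresh=0.01):
    """Toy stand-in for a mask-based global decision: classify the
    image as defective when the defect-mask area ratio exceeds a
    threshold (threshold value is an assumption, not from the paper)."""
    total = len(mask) * len(mask[0])
    defect_pixels = sum(sum(row) for row in mask)
    return defect_pixels / total > area_thresh

clean = [[0, 0], [0, 0]]       # empty mask: no predicted defect
scratched = [[1, 0], [0, 0]]   # one defect pixel out of four
```

Making the final accept/reject call from the mask rather than from the raw image is what lets such a module run as a fast post-processing step after segmentation.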