The characterization of cancer genomes has provided insight into somatically altered genes across tumors, transformed our understanding of cancer biology, and enabled tailoring of therapeutic strategies. However, the function of most cancer alleles remains mysterious, and many cancer features transcend their genomes. Consequently, tumor genomic characterization does not influence therapy for most patients. Approaches to understanding the function and circuitry of cancer genes provide complementary means of elucidating both oncogene and non-oncogene dependencies. Emerging work indicates that the diversity of therapeutic targets engendered by non-oncogene dependencies is much larger than the list of recurrently mutated genes. Here we describe a framework for this expanded list of cancer targets, providing novel opportunities for clinical translation.
Most convolutional neural network (CNN) models have difficulty identifying crop diseases owing to morphological and physiological changes in crop tissues and cells. Furthermore, a single crop disease can show different symptoms: early and late stages of a disease typically differ in both the affected area and the color of the lesions, which poses additional difficulties for CNN models. Here, we propose a lightweight CNN model called GrapeNet for identifying different symptom stages of specific grape diseases. The main components of GrapeNet are residual blocks, residual feature fusion blocks (RFFBs), and convolutional block attention modules (CBAMs). The residual blocks deepen the network and extract rich features. To alleviate the performance degradation associated with a large number of hidden layers, we designed the RFFB module based on the residual block: it fuses the average-pooled feature map from before the residual block input with the high-dimensional feature maps after the residual block output by a concatenation operation, thereby achieving feature fusion at different depths. In addition, a CBAM is introduced after each RFFB to extract valid disease information. The identification accuracies were 82.99%, 84.01%, 82.74%, 84.77%, 80.96%, 82.74%, 80.96%, 83.76%, and 86.29% for GoogLeNet, Vgg16, ResNet34, DenseNet121, MobileNetV2, MobileNetV3_large, ShuffleNetV2_×1.0, EfficientNetV2_s, and GrapeNet, respectively. GrapeNet achieved the best classification performance among the compared models, with only 2.15 million parameters.
Compared with DenseNet121, which has the highest accuracy among the classical network models, GrapeNet uses 4.81 million fewer parameters and trains roughly twice as fast. Moreover, the Grad-CAM visualization results indicate that the introduction of CBAM emphasizes disease information and suppresses irrelevant information. Overall, the results suggest that the GrapeNet model is useful for the automatic identification of grape leaf diseases.
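The RFFB fusion described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the residual block's learned convolutions are replaced by a fixed pooled-identity stand-in, and all names (`avg_pool2`, `residual_block`, `rffb`) are hypothetical. It shows only the fusion pattern: the input is average-pooled so its spatial size matches the residual block's output, and the two maps are concatenated along the channel axis.

```python
import numpy as np

def avg_pool2(x):
    # 2x2 average pooling with stride 2 on a (C, H, W) feature map.
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def residual_block(x):
    # Placeholder for the real conv/BN/ReLU residual block; a stride-2
    # pooled identity stands in so output shapes match a downsampling block.
    return np.maximum(avg_pool2(x), 0)

def rffb(x):
    # Fuse the average-pooled input (shallow features, from before the
    # residual block) with the block output (deep features) by
    # concatenation along the channel axis.
    shallow = avg_pool2(x)
    deep = residual_block(x)
    return np.concatenate([shallow, deep], axis=0)

x = np.ones((4, 8, 8))
out = rffb(x)
print(out.shape)  # (8, 4, 4): channel dimension doubled, spatial halved
```

The concatenation (rather than addition) is what lets features from different depths coexist in the fused map; the channel count doubles accordingly.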
Introduction: Tobacco brown spot disease, caused by Alternaria fungal species, is a major threat to tobacco growth and yield. Thus, accurate and rapid detection of tobacco brown spot disease is vital for disease prevention and for reducing chemical pesticide inputs.

Methods: Here, we propose an improved YOLOX-Tiny network, named YOLO-Tobacco, for the detection of tobacco brown spot disease under open-field scenarios. Aiming to extract valuable disease features and enhance the integration of features at different levels, thereby improving the ability to detect dense disease spots at different scales, we introduced hierarchical mixed-scale units (HMUs) into the neck network for information interaction and feature refinement between channels. Furthermore, to enhance the detection of small disease spots and the robustness of the network, we also introduced convolutional block attention modules (CBAMs) into the neck network.

Results: The YOLO-Tobacco network achieved an average precision (AP) of 80.56% on the test set, which is 3.22%, 8.99%, and 12.03% higher than that of the classic lightweight detection networks YOLOX-Tiny, YOLOv5-S, and YOLOv4-Tiny, respectively. In addition, the YOLO-Tobacco network had a fast detection speed of 69 frames per second (FPS).

Discussion: The YOLO-Tobacco network therefore combines high detection accuracy with fast detection speed. It is likely to have a positive impact on early monitoring, disease control, and quality assessment in diseased tobacco plants.
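The CBAM mechanism mentioned above applies channel attention followed by spatial attention. The following is a minimal NumPy sketch under stated assumptions, not the module's actual implementation: the shared MLP uses random fixed weights instead of learned ones, and the 7x7 convolution of the spatial branch is replaced by a simple average of the pooled maps. All function names here are illustrative.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def channel_attention(x, reduction=4):
    # x: (C, H, W). Squeeze spatial dims with average- and max-pooling,
    # pass both through a shared two-layer MLP, sum, and gate with sigmoid.
    C = x.shape[0]
    avg = x.mean(axis=(1, 2))                      # (C,)
    mx = x.max(axis=(1, 2))                        # (C,)
    rng = np.random.default_rng(0)                 # fixed random weights,
    W1 = rng.standard_normal((C // reduction, C)) * 0.1  # stand-in for
    W2 = rng.standard_normal((C, C // reduction)) * 0.1  # learned MLP
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0)     # ReLU hidden layer
    scale = sigmoid(mlp(avg) + mlp(mx))            # (C,), values in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x):
    # Pool across channels; average the two maps as a stand-in for the
    # 7x7 convolution used in the real module.
    avg = x.mean(axis=0, keepdims=True)            # (1, H, W)
    mx = x.max(axis=0, keepdims=True)              # (1, H, W)
    scale = sigmoid((avg + mx) / 2)                # (1, H, W)
    return x * scale

def cbam(x):
    # Channel attention first, then spatial attention, as in CBAM.
    return spatial_attention(channel_attention(x))

x = np.random.default_rng(1).standard_normal((8, 16, 16))
y = cbam(x)
print(y.shape)  # (8, 16, 16): same shape as the input
```

Because both gates are sigmoids, every output value is the input scaled by factors in (0, 1); the module reweights features without changing tensor shapes, which is why it can be dropped into a neck network freely.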
The identification of corn leaf diseases in a real field environment faces several difficulties, such as complex background disturbances, variation and irregularity in the lesion areas, and large intra-class and small inter-class disparities. Traditional convolutional neural network (CNN) models have low recognition accuracy and a large number of parameters. In this study, a lightweight corn disease identification model called DFCANet (Double Fusion block with Coordinate Attention Network) is proposed. DFCANet consists mainly of two components: the dual feature fusion with coordinate attention (DFCA) block and the down-sampling (DS) module. The DFCA block contains dual feature fusion and coordinate attention (CA) modules. To fuse the shallow and deep features completely, these features are fused twice; the CA module suppresses background noise and focuses on the diseased area. In addition, the DS module handles down-sampling, reducing information loss by expanding the feature channel dimension and using depthwise convolution. The results show that DFCANet achieves an average recognition accuracy of 98.47%, identifying corn leaf diseases in real scene images more effectively than VGG16 (96.63%), ResNet50 (93.27%), EfficientNet-B0 (97.24%), ConvNeXt-B (94.18%), DenseNet121 (95.71%), MobileNet-V2 (95.41%), MobileNetV3-Large (96.33%), and ShuffleNetV2-1.0× (94.80%). Moreover, the model's parameter count and FLOPs are 1.91 M and 309.1 M, respectively, which are lower than those of heavyweight network models and most lightweight network models. In general, this study provides a novel, lightweight, and efficient convolutional neural network model for corn disease identification.
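The coordinate attention idea used in the DFCA block can be illustrated with a stripped-down NumPy sketch. This is an assumption-laden simplification, not the paper's code: the real module applies shared learned transforms to the pooled vectors, which are omitted here, and the function names are hypothetical. The point shown is the distinguishing feature of coordinate attention: pooling along each spatial axis separately, so the attention map retains positional information in both the height and width directions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def coordinate_attention(x):
    # x: (C, H, W). Unlike global pooling, pool over width and height
    # separately so each direction keeps its own positional signal.
    h_pool = x.mean(axis=2)             # (C, H): average over width
    w_pool = x.mean(axis=1)             # (C, W): average over height
    # Gate each direction (learned transforms omitted in this sketch)
    # and broadcast the two gates back over the full map.
    a_h = sigmoid(h_pool)[:, :, None]   # (C, H, 1)
    a_w = sigmoid(w_pool)[:, None, :]   # (C, 1, W)
    return x * a_h * a_w

x = np.random.default_rng(2).standard_normal((4, 6, 6))
y = coordinate_attention(x)
print(y.shape)  # (4, 6, 6): same shape as the input
```

Because the two gates factorize over rows and columns, a lesion region that raises activations in particular rows and columns is emphasized at their intersection, which matches the stated goal of focusing on the diseased area while suppressing background.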