This study introduces significant improvements in the construction of Deep Convolutional Neural Network (DCNN) models for classifying agricultural products, specifically oranges, based on their shape, size, and color. Utilizing the MobileNetV2 architecture, this research leverages its efficiency and lightweight nature, making it suitable for mobile and embedded applications. Key techniques such as Depthwise Separable Convolutions, Linear Bottlenecks, and Inverted Residuals help reduce the number of parameters and computational load while maintaining high performance in feature extraction. Additionally, the study employs comprehensive data augmentation methods, including horizontal and vertical flips, grayscale transformations, hue adjustments, brightness adjustments, and noise addition, to enhance the model's robustness and generalization capabilities. The proposed model demonstrates superior performance, achieving an overall accuracy of 100% with near-perfect precision, recall, and F1-score for both "orange_good" and "orange_bad" classes, significantly outperforming previous models, which typically achieved accuracies between 70–90%. The confusion matrix shows that the model has high sensitivity and specificity, with very few misclassifications. Finally, this study emphasizes the practical applicability of the proposed model, particularly its easy deployment on resource-constrained devices and its effectiveness in agricultural product quality control processes. These findings affirm the model in this research as a reliable and highly efficient tool for agricultural product classification, surpassing the capabilities of traditional models in this field.
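The parameter savings from depthwise separable convolutions come from factoring a standard convolution into a per-channel (depthwise) spatial filter followed by a 1×1 pointwise convolution that mixes channels. A minimal sketch of the parameter-count comparison, in pure Python; the layer shapes below are illustrative assumptions, not values from the study:

```python
def standard_conv_params(k, c_in, c_out):
    """Parameters in a standard k x k convolution (biases omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k convolution (one spatial filter per input channel)
    followed by a 1 x 1 pointwise convolution mixing channels."""
    depthwise = k * k * c_in   # per-channel spatial filtering
    pointwise = c_in * c_out   # 1x1 cross-channel mixing
    return depthwise + pointwise

# Hypothetical layer: 3x3 kernel, 32 input channels, 64 output channels
std = standard_conv_params(3, 32, 64)        # 18,432 parameters
sep = depthwise_separable_params(3, 32, 64)  # 2,336 parameters
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For this hypothetical layer the factored form uses roughly an eighth of the parameters, which is the kind of reduction that makes MobileNetV2-style models practical on mobile and embedded hardware.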
Detecting brain tumors is crucial in medical diagnostics due to the serious health risks these abnormalities present to patients. Deep learning approaches can significantly improve localization in a range of medical imaging problems, particularly brain tumors. This paper emphasizes the use of deep learning models to segment brain tumors using a large dataset. The study compares modifications to U-Net structures, including kernel size, number of channels, dropout ratio, and changing the activation function from ReLU to Leaky ReLU. Optimizing these parameters has notably enhanced brain tumor segmentation in MR images, achieving a Global Accuracy of 99.4% and a dice similarity coefficient of 90.2%. The model was trained, validated, and tested on a large set of magnetic resonance images, with a training time not exceeding 19 min on a powerful GPU. This approach can be extended in medical care and hospitals to assist radiologists in identifying tumor locations and suspicious regions, thereby improving diagnosis and treatment effectiveness. The software could also be integrated into MR equipment protocols.
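The dice similarity coefficient reported above measures the overlap between a predicted segmentation mask and the ground-truth mask: Dice = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch of the metric; the toy masks are invented for illustration and are not data from the paper:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks.
    eps guards against division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: the prediction covers the ground truth plus one extra pixel
pred   = np.array([[0,0,0,0],[0,1,1,0],[0,1,1,0],[0,0,0,0]])
target = np.array([[0,0,0,0],[0,1,1,0],[0,1,0,0],[0,0,0,0]])
print(round(dice_coefficient(pred, target), 3))  # 2*3/(4+3) ≈ 0.857
```

A score of 1.0 means perfect overlap; the 90.2% figure cited in the abstract indicates the predicted tumor masks agree closely with the radiologist-annotated ground truth.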