Statistical machine learning has become an integral component of contemporary scientific methodology. It provides automated procedures for predicting phenomena, diagnosing cases, and identifying objects from previous observations, uncovering the patterns underlying data, and yielding insights into the problem. Its use for identifying corn plant diseases and pests has recently become popular. Corn (Zea mays L.) is one of the essential carbohydrate-producing food crops, alongside wheat and rice. Corn plants are sensitive to pests and diseases, which reduce both the quantity and the quality of production. Eradicating pests and diseases according to their type is one solution to this problem. This research aims to identify corn plant diseases and pests from digital images using the Multinomial Naïve Bayes (MNB) and K-Nearest Neighbor methods. The data consisted of 761 digital images across six classes of corn plant diseases and pests. The investigation shows that the K-Nearest Neighbor method has better predictive performance than the Multinomial Naïve Bayes method. The MNB method with two categories achieves an accuracy of 92.72%, a precision of 79.88%, a recall of 79.24%, an F1-score of 78.17%, a kappa of 72.44%, and an AUC of 71.91%. Meanwhile, the K-Nearest Neighbor approach with k=3 achieves an accuracy of 99.54%, a precision of 88.57%, a recall of 94.38%, an F1-score of 93.59%, a kappa of 94.30%, and an AUC of 95.45%.
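The K-Nearest Neighbor classifier with k=3 reported above predicts a class by majority vote among the three closest training samples. As a minimal sketch of that voting rule (not the paper's implementation; the feature vectors and class labels here are hypothetical toy data):

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Predict the label of feature vector x by majority vote
    among its k nearest training samples (Euclidean distance)."""
    dists = sorted(
        (math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y)
    )
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Toy example with two hypothetical classes of image feature vectors
train_X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
train_y = ["healthy", "healthy", "blight", "blight", "blight"]
print(knn_predict(train_X, train_y, (1.0, 0.95), k=3))  # → blight
```

In practice the feature vectors would come from the digital images themselves, and k=3 is the value the study found to perform best.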
The retina is one of the most important parts of the eye. With proper feature extraction, it can serve as the first step in detecting a disease: the morphology of the retinal blood vessels can be used to identify and classify diseases, and steps such as segmentation and analysis of the retinal vasculature can assist medical personnel in assessing disease severity. In this paper, vessel segmentation using the U-Net architecture of the Convolutional Neural Network (CNN) method is proposed to train a semantic segmentation model for retinal blood vessels. In addition, the Contrast Limited Adaptive Histogram Equalization (CLAHE) method is used to increase the contrast of the grayscale image, and a median filter is used to improve image quality. Data augmentation is also applied to enlarge the available dataset. The proposed method allows for easier implementation. In this study, the dataset used was STARE, with accuracy, sensitivity, specificity, precision, and F1-score reaching 97.64%, 78.18%, 99.20%, 88.77%, and 82.91%, respectively.
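The median filtering step used above to improve image quality replaces each pixel with the median of its neighborhood, which suppresses salt-and-pepper noise. A naive sketch of the idea on a 2-D grayscale array (an illustration, not the paper's code, which would typically use an optimized library routine):

```python
import numpy as np

def median_filter(img, size=3):
    """Naive median filter: each output pixel is the median of its
    size x size neighborhood (image edges handled by reflection)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# A single bright noise pixel is suppressed by the filter
noisy = np.zeros((5, 5), dtype=np.uint8)
noisy[2, 2] = 255
print(median_filter(noisy)[2, 2])  # → 0
```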
The lungs are among the most important parts of the human body and are very susceptible to various disorders and diseases, so detecting or diagnosing lung conditions is necessary. In this study, we present a method for lung segmentation using the U-Net architecture of the CNN method. The initial pre-processing stage applied a one-to-one correspondence to equalize the amounts of training and testing data and resized the images so that they all have the same size. The process continued with CLAHE (Contrast Limited Adaptive Histogram Equalization), after which the segmentation itself was carried out. This study used a dataset from the Kaggle website. The CNN method with the U-Net architecture obtained an average accuracy of 91.68%, sensitivity of 92.80%, specificity of 89.15%, precision of 95.07%, and F1-score of 93.92%. Based on the performance evaluation, we conclude that the proposed method is effective and valid for lung segmentation in thorax X-ray images.
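The accuracy, sensitivity, specificity, and precision figures reported above are the standard confusion-matrix metrics for binary segmentation masks. A small sketch of how they can be computed (a generic illustration, not the study's evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, sensitivity (recall), specificity, and precision
    for binary segmentation masks (1 = foreground, 0 = background)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)    # foreground correctly labeled
    tn = np.sum(~pred & ~truth)  # background correctly labeled
    fp = np.sum(pred & ~truth)   # background labeled foreground
    fn = np.sum(~pred & truth)   # foreground missed
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }

pred  = [[1, 1, 0, 0], [0, 1, 0, 0]]
truth = [[1, 1, 1, 0], [0, 1, 0, 0]]
m = segmentation_metrics(pred, truth)
print(round(float(m["accuracy"]), 3))  # 7 of 8 pixels agree → 0.875
```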
The retinal blood vessels in humans are major components with different shapes and sizes. Extracting the blood vessels from the retina is an important step in identifying the type or nature of disease patterns in the retina; the extracted vasculature is also used for diagnosis, detection, and classification. The solution presented here enhances the retinal image with convolution filters and a Sauvola threshold. For image enhancement, gamma correction is applied before filtering the retinal fundus image; the image is then converted to a gray channel and its clarity enhanced using contrast-limited adaptive histogram equalization. For filtering, this paper combines two convolution filters, namely a sharpening and a smoothing filter. The Sauvola threshold, morphological operations, and a median filter are applied to extract the blood vessels from the retinal image. This paper uses the DRIVE and STARE datasets. The accuracies of the proposed method are 95.37% on DRIVE with a runtime of 1.77 s and 95.17% on STARE with a runtime of 2.05 s. Based on these results, we conclude that the proposed method achieves good accuracy at low runtime.
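The gamma correction applied as the first enhancement step maps each normalized pixel through a power law, out = 255 · (in/255)^γ, brightening midtones when γ < 1 and darkening them when γ > 1. A minimal sketch for an 8-bit grayscale array (the γ value used by the paper is not stated here, so the example value is arbitrary):

```python
import numpy as np

def gamma_correct(img, gamma):
    """Gamma correction for an 8-bit grayscale image:
    out = 255 * (in / 255) ** gamma."""
    img = np.asarray(img, dtype=np.float64)
    return np.clip(255.0 * (img / 255.0) ** gamma, 0, 255).astype(np.uint8)

# gamma < 1 brightens midtones: a pixel of 64 maps to ~127
mid = np.full((2, 2), 64, dtype=np.uint8)
print(gamma_correct(mid, 0.5)[0, 0])  # → 127
```

The later Sauvola step then binarizes the enhanced image with a locally adaptive threshold T = m · (1 + k · (s/R − 1)), where m and s are the local mean and standard deviation, k is a sensitivity parameter, and R is the dynamic range of the standard deviation.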
Adding layers to the U-Net architecture leads to additional parameters and network complexity. The Visual Geometry Group (VGG) architecture with a 16-layer backbone can overcome this problem with small convolutions. Densely connected networks (DenseNet) can avoid redundant feature learning in VGG by directly connecting each layer to the feature maps of the previous layers. Adding a dropout layer can protect DenseNet from overfitting. This study proposes VG-DropDNet, an architecture that combines VGG, DenseNet, and U-Net with a dropout layer for retinal blood vessel segmentation. VG-DropDNet is applied to the Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE) datasets. The results on DRIVE give an accuracy of 95.36%, a sensitivity of 79.74%, and a specificity of 97.61%. The F1-score on DRIVE of 0.8144 indicates that VG-DropDNet has good precision and recall, and the IoU of 68.70% indicates that its output closely resembles the ground truth. The results on STARE are excellent: an accuracy of 98.56%, a sensitivity of 91.24%, a specificity of 92.99%, and an IoU of 86.90%, showing that the proposed method is accurate and robust for retinal blood vessel segmentation. The Cohen's kappa coefficient obtained by VG-DropDNet is 0.8386 on DRIVE and 0.98 on STARE, indicating that its results are consistent and precise on both datasets. These results across datasets indicate that VG-DropDNet is effective, robust, and stable for retinal blood vessel segmentation.
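The dropout layer that protects VG-DropDNet from overfitting randomly zeroes activations during training. A minimal sketch of the common inverted-dropout formulation in NumPy (a generic illustration of the layer, not the paper's architecture code; the 0.5 rate is an arbitrary example):

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and rescale survivors by 1/(1 - rate) so the
    expected activation is unchanged; at inference, pass through."""
    if not training or rate == 0.0:
        return x
    keep = rng.random(x.shape) >= rate
    return x * keep / (1.0 - rate)

rng = np.random.default_rng(0)
x = np.ones((4, 4))
y = dropout(x, rate=0.5, rng=rng)
print(sorted(set(y.ravel())))  # surviving units are rescaled to 2.0
```

Because survivors are rescaled at training time, no scaling is needed at inference, which is why the layer simply passes activations through when `training=False`.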