Traditional screening for cervical cancer type classification depends largely on the pathologist's experience and suffers from limited accuracy. Colposcopy is a critical component of cervical cancer prevention; in conjunction with precancer screening and treatment, it has played an essential role in lowering the incidence of, and mortality from, cervical cancer over the last 50 years. However, as workloads increase, visual screening leads to misdiagnosis and low diagnostic efficiency. Within deep learning, medical image processing with convolutional neural network (CNN) models has shown clear advantages for classifying cervical cancer types. This paper proposes two deep learning CNN architectures to detect cervical cancer from colposcopy images: a VGG19 transfer learning (TL) model and CYENET. In the first architecture, VGG19 is adopted as a transfer learning backbone. A new model, termed the Colposcopy Ensemble Network (CYENET), is developed to classify cervical cancers from colposcopy images automatically. Accuracy, specificity, and sensitivity are estimated for the developed models. The classification accuracy of VGG19 (TL) was 73.3%, a relatively satisfactory result; its kappa score places it in the moderate-agreement category. The experimental results show that the proposed CYENET achieved high sensitivity, specificity, and kappa scores of 92.4%, 96.2%, and 88%, respectively. The classification accuracy of CYENET is 92.3%, 19% higher than that of the VGG19 (TL) model.
Alzheimer's Disease (AD) is the most common cause of dementia globally. It worsens steadily from mild to severe, eventually impairing one's ability to complete any task without assistance, and its burden continues to grow as the population ages and diagnosis timelines lengthen. To classify cases, existing approaches combine medical history, neuropsychological testing, and Magnetic Resonance Imaging (MRI), but these procedures remain inconsistent owing to limited sensitivity and precision. A Convolutional Neural Network (CNN) is used to create a framework that detects specific Alzheimer's disease characteristics from MRI images. Considering four stages of dementia, the proposed model generates high-resolution disease probability maps from local brain structure via a multilayer perceptron and provides accurate, intuitive visualizations of individual Alzheimer's disease risk. The MRI image dataset obtained from Kaggle has a major class imbalance problem; to avoid it, the samples are evenly distributed among the classes. A CNN model, the DEMentia NETwork (DEMNET), is proposed to detect the stages of dementia, of which AD is the primary cause. DEMNET achieves an accuracy of 95.23%, an Area Under Curve (AUC) of 97%, and a Cohen's Kappa value of 0.93 on the Kaggle dataset, which is superior to existing methods. The Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset was also used to predict AD classes in order to assess the efficacy of the proposed model.
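One common way to address the class imbalance this abstract mentions is to weight the training loss by inverse class frequency, so rare dementia stages count more. The sketch below assumes four stage labels with an illustrative skew; the counts are not the Kaggle dataset's actual distribution, and the abstract's own remedy (evenly redistributing samples) is a different but related technique.

```python
# Hedged sketch: inverse-frequency class weights as one remedy for the
# class imbalance noted above. The label counts here are illustrative.
import numpy as np

def inverse_frequency_weights(labels: np.ndarray) -> np.ndarray:
    """Return per-class weights that up-weight rare classes
    (same formula as scikit-learn's 'balanced' mode)."""
    classes, counts = np.unique(labels, return_counts=True)
    return counts.sum() / (len(classes) * counts)

# Toy labels for 4 dementia stages, heavily skewed toward stage 0.
labels = np.array([0] * 600 + [1] * 200 + [2] * 150 + [3] * 50)
print(inverse_frequency_weights(labels))  # stage 3 weighted highest
```

These weights would typically be passed to the loss function (e.g. a weighted cross-entropy) so that misclassifying a minority-stage scan costs more than misclassifying a majority-stage one.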
The proposed method introduces algorithms for preprocessing normal, COVID-19, and pneumonia X-ray lung images, which improve classification accuracy compared with raw (unprocessed) X-ray images. Preprocessing enhances image quality, increasing intersection-over-union (IoU) scores when segmenting the lungs from the X-ray images. The authors implement an efficient preprocessing and classification technique for respiratory disease detection. The histogram of oriented gradients (HOG) algorithm, the Haar transform, and the local binary pattern (LBP) algorithm are applied to lung X-ray images to extract the best features and segment the left and right lungs. Segmenting the lungs from the X-ray can improve the accuracy of COVID-19 detection algorithms or of any machine/deep learning technique. The segmented lungs are validated with IoU scores to compare the algorithms. Preprocessed X-ray images yield better classification accuracy for all three classes (normal/COVID-19/pneumonia) than unprocessed raw images. VGGNet, AlexNet, ResNet, and the proposed deep neural network were implemented for respiratory disease classification; among these architectures, the proposed deep neural network achieved the best classification accuracy.
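The IoU score used above to validate the lung segmentations is a simple set overlap between a predicted mask and a ground-truth mask. A minimal sketch, with illustrative toy masks rather than real X-ray segmentations:

```python
# Hedged sketch: intersection-over-union (IoU) between two binary
# segmentation masks. Mask shapes and contents are illustrative.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two binary masks (1 = lung pixel, 0 = background)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # two empty masks agree trivially

pred = np.zeros((4, 4), int)
pred[1:3, 1:3] = 1      # 2x2 predicted lung region
target = np.zeros((4, 4), int)
target[1:4, 1:4] = 1    # 3x3 ground-truth lung region
print(iou(pred, target))  # 4 / 9 ~= 0.444
```

An IoU of 1.0 means the predicted and ground-truth lung fields coincide exactly; comparing mean IoU across HOG-, Haar-, and LBP-driven segmentations is how such algorithms are typically ranked.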
This paper proposes a smart image-processing algorithm for text recognition, information extraction, and vocalization for the visually challenged. The system uses a LattePanda Alpha single-board computer to process the scanned images. The image is converted into its equivalent alphanumeric characters through pre-processing, segmentation, feature extraction, and post-processing of the scanned or image-based information. A text-to-speech synthesizer then vocalizes the processed content. For handwritten scripts, the system offers a conversion accuracy of 97%, which also depends on the legibility of the data. The time delay of the entire conversion process is also analysed, and the efficiency of the system is estimated.
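The segmentation stage of such an OCR pipeline is often done with a vertical projection profile: columns containing no ink separate adjacent characters. The sketch below is a hypothetical illustration of that idea on a tiny binarized bitmap, not the paper's actual segmentation routine.

```python
# Hedged sketch: character segmentation of a binarized text line via a
# vertical projection profile. The bitmap below is illustrative only.
import numpy as np

def segment_columns(binary_line: np.ndarray) -> list:
    """Return (start, end) column spans of contiguous ink runs."""
    ink = binary_line.sum(axis=0) > 0  # which columns contain ink
    spans, start = [], None
    for col, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = col                 # a run of ink begins
        elif not has_ink and start is not None:
            spans.append((start, col))  # blank column ends the run
            start = None
    if start is not None:
        spans.append((start, len(ink)))
    return spans

# Two "characters" separated by one blank column (column 2).
line = np.array([[1, 1, 0, 1],
                 [1, 0, 0, 1]])
print(segment_columns(line))  # [(0, 2), (3, 4)]
```

Each span would then be cropped and passed to the feature-extraction and classification stages before the recognized text is handed to the speech synthesizer.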