Chest diseases can be dangerous and deadly. They include many conditions such as pneumonia, asthma, edema, and, lately, COVID-19. COVID-19 shares many symptoms with pneumonia, such as shortness of breath and chest tightness, which makes differentiating COVID-19 from other chest diseases a challenging task. Several related studies proposed computer-aided systems for single-class COVID-19 detection, which may be misleading given the similar symptoms of other chest diseases. This paper proposes a framework for detecting 15 types of chest disease, including COVID-19, from the chest X-ray modality. The proposed framework performs classification in two ways. First, a deep convolutional neural network (CNN) architecture with a soft-max classifier is proposed. Second, transfer learning is applied: deep features are extracted from the fully connected layer of the proposed CNN and fed to classical machine learning (ML) classifiers. The proposed framework improves the accuracy of COVID-19 detection and increases the predictability rates for other chest diseases. The experimental results show that, compared to other state-of-the-art models for diagnosing COVID-19 and other chest diseases, the proposed framework is more robust, and the results are promising.
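The soft-max classifier at the end of the CNN converts the final layer's raw scores into one probability per disease class. A minimal numpy sketch (illustrative only, not the authors' implementation; the logits here are random stand-ins for a real network's output):

```python
import numpy as np

def softmax(logits):
    """Convert raw class scores to probabilities (numerically stable)."""
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# Hypothetical logits from the CNN's final layer for one X-ray image,
# one score per chest-disease class (15 classes in the paper).
logits = np.random.randn(15)
probs = softmax(logits)
predicted_class = int(np.argmax(probs))
```

The same fully connected layer that feeds this soft-max is what the second branch of the framework reuses as a deep-feature vector for the classical ML classifiers.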
In the recent era, various diseases have severely affected the lifestyle of individuals, especially adults. Among these, bone diseases, including Knee Osteoarthritis (KOA), have a great impact on quality of life. KOA is a knee joint disorder caused mainly by the loss of articular cartilage between the femur and tibia, producing severe joint pain, effusion, restricted joint movement, and gait anomalies. To address these issues, this study presents a novel framework for detecting KOA at early stages using deep learning-based feature extraction and classification. First, the input X-ray images are preprocessed, and the Region of Interest (ROI) is extracted through segmentation. Second, features are extracted from the preprocessed X-ray images containing the knee joint space width using hybrid feature descriptors: a Convolutional Neural Network (CNN) combined with Local Binary Patterns (LBP) and a CNN combined with the Histogram of Oriented Gradients (HOG). Low-level features are computed by HOG, while texture features are computed by the LBP descriptor. Lastly, multi-class classifiers, namely Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbour (KNN), are used to classify KOA according to the Kellgren–Lawrence (KL) grading system, which consists of Grade I through Grade IV. Experimental evaluation is performed on various combinations of the proposed framework. The results show that the HOG feature descriptor provides approximately 97% accuracy for the early detection and classification of KOA across all four KL grades.
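The LBP texture descriptor used above encodes, for each pixel, which of its eight neighbors are at least as bright as it, and then histograms the resulting 8-bit codes into a feature vector. A pure-numpy sketch of the basic (non-rotation-invariant) variant, not the authors' exact implementation:

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic 8-neighbor Local Binary Pattern for a 2-D grayscale image.
    Each interior pixel gets an 8-bit code: one bit per neighbor that is
    >= the center pixel. Library versions add rotation invariance and
    'uniform' pattern binning."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]  # center pixels (interior only)
    # neighbor offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes: the texture feature vector."""
    codes = lbp_8neighbor(img)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

# A perfectly flat patch maps every pixel to the single code 255.
flat = np.zeros((5, 5))
features = lbp_histogram(flat)
```

The resulting histogram is what gets concatenated with the CNN features before classification.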
Due to the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted inputs cause DL models to misclassify instances that humans would consider benign. Such adversarial threats have also been demonstrated in practical, physical-world scenarios. Thus, adversarial attacks and defenses, and the reliability of machine learning more broadly, have drawn growing interest and have become a hot research topic in recent years. We introduce a framework that defends against the adversarial speckle-noise attack by combining adversarial training with a feature-fusion strategy, preserving classification with correct labelling. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem. The results obtained on retinal fundus images, which are prone to adversarial attacks, reach 99% accuracy and show that the proposed defensive model is robust.
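Speckle noise is multiplicative: the perturbed image is x' = x + x·n with n drawn from a zero-mean Gaussian. The adversarial variant additionally searches for the noise realization that flips the model's prediction; the toy sketch below shows only the perturbation itself (the severity value and image are made up for illustration):

```python
import numpy as np

def speckle_perturb(image, severity=0.1, rng=None):
    """Multiplicative speckle perturbation: x' = x + x * n, n ~ N(0, severity^2).
    A stand-in for the adversarial speckle-noise attack; a true adversarial
    version would optimize the noise against the target model."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, severity, size=image.shape)
    perturbed = image + image * noise
    return np.clip(perturbed, 0.0, 1.0)  # keep pixels in a valid range

clean = np.full((8, 8), 0.5)  # toy fundus patch with intensities in [0, 1]
attacked = speckle_perturb(clean, severity=0.2, rng=0)
```

Adversarial training then augments the training set with such perturbed images so the model learns to classify them correctly.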
Glaucoma is an eye disease caused by increased fluid pressure in the eye, which damages the optic nerve and causes partial or complete vision loss. Because glaucoma progresses slowly and becomes apparent only at later stages, detailed screening of retinal images is required to avoid vision loss. This study aims to detect glaucoma at early stages with the help of deep learning-based feature extraction. Retinal fundus images are used for training and testing the proposed model. In the first step, the images are pre-processed, and the region of interest (ROI) is extracted through segmentation. Then, features of the optic disc (OD) are extracted from the images containing the optic cup (OC) using hybrid feature descriptors, i.e., a convolutional neural network (CNN), local binary patterns (LBP), histogram of oriented gradients (HOG), and speeded up robust features (SURF). Low-level features are extracted using HOG, texture features are extracted using the LBP and SURF descriptors, and high-level features are computed using the CNN. Additionally, a feature selection and ranking technique, i.e., the minimum redundancy, maximum relevance (mRMR) method, is employed to select the most representative features. Finally, multi-class classifiers, i.e., support vector machine (SVM), random forest (RF), and K-nearest neighbor (KNN), are used to classify fundus images as healthy or diseased. To assess the performance of the proposed system, various experiments were performed using combinations of the aforementioned algorithms. They show that the proposed model, based on the RF algorithm with the HOG, CNN, LBP, and SURF feature descriptors, provides up to 99% accuracy on benchmark datasets and 98.8% on k-fold cross-validation for the early detection of glaucoma.
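The mRMR selection step greedily keeps features that correlate strongly with the label (relevance) but weakly with the features already chosen (redundancy). A toy numpy sketch using absolute Pearson correlation as a cheap stand-in for the mutual information that real mRMR implementations estimate; the synthetic data below is made up for illustration:

```python
import numpy as np

def mrmr_rank(X, y, k):
    """Greedy mRMR-style ranking of k features. |corr| is used as a proxy
    for mutual information, so this is illustrative, not faithful mRMR."""
    n_features = X.shape[1]
    relevance = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)]
    )
    selected = [int(np.argmax(relevance))]  # start with the most relevant
    while len(selected) < k:
        best_j, best_score = -1, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # penalize similarity to the features already picked
            redundancy = np.mean(
                [abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected]
            )
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Synthetic data: feature 0 is a noisy copy of the label, the rest are noise.
rng = np.random.default_rng(0)
y = rng.normal(size=200)
X = np.column_stack([y + 0.05 * rng.normal(size=200),
                     rng.normal(size=200),
                     rng.normal(size=200)])
ranked = mrmr_rank(X, y, k=2)
```

The selected feature subset is then handed to the SVM, RF, and KNN classifiers.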
Lung cancer is a deadly disease if not diagnosed in its early stages. However, early detection of lung cancer is challenging due to the shape and size of its nodules, and radiologists need support from automated tools for a precise opinion. Automated detection of affected lung nodules is difficult because of their shape similarity to healthy tissue. Over the years, several expert systems have been developed that help radiologists diagnose lung cancer. In this article, we propose a framework to precisely detect lung cancer by classifying nodules as benign or malignant. The framework is tested on a subset of the publicly available Lung Image Database Consortium image collection (LIDC-IDRI). Multiple techniques, including filtering and noise removal, are applied for pre-processing. Subsequently, Otsu thresholding and semantic segmentation are used to accurately detect unhealthy lung nodules. In total, 13 nodule features were extracted using the Principal Component Analysis (PCA) algorithm, and the four optimal features were selected based on classification performance. In the classification phase, nine different classifiers are used with two validation schemes, i.e., train-test holdout validation with a 70-30 data split and 10-fold cross-validation. Our experiments show that the proposed system achieves 99.23% accuracy using the LogitBoost classifier.
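Otsu thresholding, used here for the initial segmentation, picks the gray level that best separates the intensity histogram into two classes by maximizing the between-class variance. A minimal numpy sketch on a made-up bimodal image (illustrative, not the authors' code):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: return the gray level that maximizes the
    between-class variance of the background/foreground split."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()           # intensity probabilities
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0   # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Bimodal toy image: dark background (~40) and bright nodule region (~200).
img = np.concatenate([np.full(500, 40), np.full(500, 200)])
t = otsu_threshold(img)
mask = img >= t   # binary segmentation mask
```

The resulting binary mask is the starting point for the finer semantic segmentation of nodule boundaries.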
The COVID-19 pandemic created a global emergency in many sectors. The spread of the disease can be curbed through timely vaccination, but the COVID-19 vaccination process in various countries is slowing down due to multiple factors. Many studies on European countries and the USA have highlighted public concerns about vaccination that slow the vaccination rate. Similarly, we analyzed a collection of COVID-19 vaccine-related discourse shared by citizens of the Gulf countries on social media, mainly Twitter. People's feedback regarding different types of vaccines needs to be considered to accelerate the vaccination process. In this paper, the concerns of people in the Gulf countries are highlighted in order to lessen vaccine hesitancy. The proposed approach accurately identifies Gulf-region-specific concerns related to COVID-19 vaccination using machine learning (ML)-based methods. The collected data were filtered and tokenized, and sentiments were extracted using three different methods: Ratio, TextBlob, and VADER. The sentiment-scored data were classified into positive and negative tweets using a proposed LSTM method. Subsequently, to obtain more confidence in the classification, deep features from the proposed LSTM were extracted and given to four different ML classifiers. The Ratio, TextBlob, and VADER sentiment scores were each provided separately to the LSTM and the four ML classifiers. The VADER sentiment scores produced the best classification results, with 94.01% accuracy using fine-KNN and Ensemble boost. Given the improved accuracy, the proposed scheme is robust and reliable for classifying and determining sentiments in Twitter discourse.
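Of the three scoring methods, TextBlob and VADER are off-the-shelf lexicon-based tools, while the Ratio method can be sketched directly: count positive and negative lexicon hits and divide their difference by the token count. A toy version with a made-up six-word lexicon (real sentiment lexicons are far larger and weighted):

```python
# Toy lexicon for illustration; VADER and TextBlob ship much larger,
# weighted lexicons with negation and intensifier handling.
POSITIVE = {"good", "great", "safe", "effective", "protected", "thankful"}
NEGATIVE = {"bad", "unsafe", "scared", "fever", "hesitant", "worried"}

def ratio_sentiment(tweet):
    """Ratio method: (positive hits - negative hits) / total tokens.
    Returns a score in [-1, 1]; > 0 is positive, < 0 is negative."""
    tokens = tweet.lower().split()
    pos = sum(t.strip(".,!?") in POSITIVE for t in tokens)
    neg = sum(t.strip(".,!?") in NEGATIVE for t in tokens)
    return (pos - neg) / max(len(tokens), 1)

score = ratio_sentiment("The vaccine is safe and effective!")
label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

Scores like these (or the VADER/TextBlob equivalents) are what the LSTM and the downstream ML classifiers consume.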
Skin diseases affect many aspects of life, and early, accurate detection of skin cancer is necessary to avoid significant harm. Manual detection of skin diseases by dermatologists can lead to misclassification because lesions share similar intensity and color levels; therefore, an automated system for identifying these skin diseases is required. A few studies on skin disease classification using different techniques exist; however, previous techniques failed to identify multi-class skin disease images due to their similar appearance. In the proposed study, a computer-aided framework for automatic skin disease detection is presented. We collected and normalized datasets from two databases (the ISIC archive and Mendeley) covering six common skin diseases: Basal Cell Carcinoma (BCC), Actinic Keratosis (AK), Seborrheic Keratosis (SK), Nevus (N), Squamous Cell Carcinoma (SCC), and Melanoma (M). Segmentation is performed using deep Convolutional Neural Networks (CNN). Three types of features are then extracted from the segmented skin lesions: ABCD-rule, GLCM, and deep features. AlexNet transfer learning is used for deep feature extraction, while a support vector machine (SVM) is used for classification. Experimental results show that the SVM outperformed other studies in terms of accuracy: AK achieved 100% accuracy, BCC 92.7%, M 95.1%, N 97.8%, SK 93.1%, and SCC 91.4%, with a global accuracy of 95.4%.
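The GLCM features count how often pairs of gray levels co-occur at a fixed pixel offset; texture statistics such as contrast are then read off the normalized matrix. A pure-numpy sketch for the right-neighbor offset (illustrative; the authors' offset, gray-level count, and statistic set are not specified here):

```python
import numpy as np

def glcm_contrast(img, levels):
    """Gray-Level Co-occurrence Matrix for the (0, 1) offset (each pixel
    paired with its right neighbor), normalized to a joint distribution,
    plus the contrast statistic sum_ij P(i, j) * (i - j)**2."""
    img = np.asarray(img, dtype=np.intp)
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    P = np.zeros((levels, levels))
    np.add.at(P, (left, right), 1)   # accumulate co-occurrence counts
    P /= P.sum()
    i, j = np.indices((levels, levels))
    contrast = float((P * (i - j) ** 2).sum())
    return P, contrast

# A flat patch has zero contrast; a checkerboard maximizes it.
_, c_flat = glcm_contrast(np.zeros((4, 4), dtype=int), levels=2)
checker = np.indices((4, 4)).sum(axis=0) % 2
_, c_checker = glcm_contrast(checker, levels=2)
```

In the framework above, such GLCM statistics are concatenated with the ABCD-rule and AlexNet deep features before the SVM classification.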