The accurate localization and classification of lung abnormalities from radiological images are important for clinical diagnosis and treatment planning. However, multilabel classification, in which medical images are interpreted to indicate multiple existing or suspected pathologies, presents practical constraints. Building a highly precise classification model typically requires a large number of images manually annotated with class labels and finding masks, which are expensive to acquire in practice. To address this intrinsically weakly supervised learning problem, we present the integration of features extracted by shallow handcrafted techniques with those from a pretrained deep CNN model. The model consists of two main approaches: a localization approach that adaptively concentrates on pathologically abnormal regions using a pretrained DenseNet-121, and a classification approach that integrates four types of local features, extracted by SIFT, GIST, LBP, and HOG respectively, with deep CNN features. We demonstrate that our approaches efficiently leverage the interdependencies among target annotations and achieve state-of-the-art classification results for 14 thoracic diseases compared with current reference baselines on the publicly available ChestX-ray14 dataset.
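The fusion of handcrafted descriptors with pretrained CNN features described above typically amounts to concatenating per-image feature vectors before classification. The sketch below illustrates this with purely hypothetical feature dimensions (the abstract does not state the paper's actual ones; the SIFT/GIST/LBP/HOG sizes here are common conventions, and 1024 matches a DenseNet-121 pooled output):

```python
import numpy as np

# Illustrative dimensions only; synthetic data stands in for real descriptors.
rng = np.random.default_rng(0)
n_images = 4

sift = rng.random((n_images, 128))   # SIFT descriptor summary
gist = rng.random((n_images, 512))   # GIST scene descriptor
lbp  = rng.random((n_images, 59))    # uniform LBP histogram
hog  = rng.random((n_images, 81))    # HOG block histogram
cnn  = rng.random((n_images, 1024))  # DenseNet-121 pooled features

def fuse_features(*feature_sets):
    """Concatenate per-image feature vectors into one fused descriptor."""
    return np.concatenate(feature_sets, axis=1)

fused = fuse_features(sift, gist, lbp, hog, cnn)
print(fused.shape)  # (4, 1804)
```

The fused vectors would then feed a multilabel classifier; the actual combination strategy in the paper may be more elaborate than plain concatenation.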
Functional near-infrared spectroscopy (fNIRS), a non-invasive optical neuroimaging technique, is currently used to assess brain dynamics during the performance of complex work and everyday tasks. However, deep learning approaches to distinguishing stress levels based on changes in hemoglobin concentration have not yet been extensively investigated. In this paper, we evaluated the efficiency of advanced methods in differentiating the rest and task periods of Stroop task experiments. We first verified that apparent changes in oxy-hemoglobin and deoxy-hemoglobin concentrations associated with the two mental states existed for each participant. Preprocessing steps, such as converting raw signals into hemoglobin values and filtering to remove the various sources of noise in fNIRS signals, were performed to obtain a clean dataset, called the non-PCA inputs. Next, we applied the principal component analysis (PCA) algorithm to obtain PCA inputs before feeding both input types into our four classifiers. Then, a novel deep learning-based discrimination framework was studied. The conventional machine learning algorithms, SVM and AdaBoost, produced best accuracies of 64.74% ± 1.57% and 71.13% ± 2.96%, respectively. In comparison, the deep learning approaches, deep belief network and convolutional neural network models, enabled better classification accuracies of 84.26% ± 2.58% and 72.77% ± 1.92%, respectively. INDEX TERMS Artificial intelligence, deep learning, adaptive boosting, convolutional neural networks, deep belief networks, support vector machines, functional near-infrared spectroscopy.
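The PCA step described above projects the cleaned signals onto their principal components before classification. A minimal plain-numpy sketch follows; the data are synthetic stand-ins (the abstract does not give channel counts or the variance threshold, so the 16-channel layout and 95% retention target are assumptions for illustration):

```python
import numpy as np

def pca_transform(X, var_ratio=0.95):
    """Project X onto the fewest principal components that retain
    at least var_ratio of the total variance (a plain-numpy PCA)."""
    Xc = X - X.mean(axis=0)                      # center each channel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()        # variance ratio per component
    k = int(np.searchsorted(np.cumsum(explained), var_ratio)) + 1
    return Xc @ Vt[:k].T                         # the "PCA inputs"

# Synthetic stand-in for the cleaned "non-PCA inputs":
# 200 time samples x 16 hypothetical fNIRS channels of hemoglobin values.
rng = np.random.default_rng(42)
non_pca_inputs = rng.normal(size=(200, 16))

pca_inputs = pca_transform(non_pca_inputs, var_ratio=0.95)
```

Both `non_pca_inputs` and `pca_inputs` would then be passed to the four classifiers, allowing a direct comparison of the two input representations.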
The timely diagnosis of Alzheimer’s disease (AD) and its prodromal stages is critically important for patients, who manifest different levels of neurodegenerative severity and progression risk, to receive interventions and early symptomatic treatment before brain damage takes hold. As one of the promising techniques, functional near-infrared spectroscopy (fNIRS) has been widely employed to support early-stage AD diagnosis. This study aims to validate the capability of fNIRS coupled with deep learning (DL) models for AD multi-class classification. First, a comprehensive experimental design, including resting, cognitive, memory, and verbal tasks, was conducted. Second, to precisely evaluate AD progression, we thoroughly examined the changes in hemodynamic responses measured in the prefrontal cortex across four subject groups and between genders. Then, we applied a set of DL architectures to an extremely imbalanced fNIRS dataset. The results indicated that statistical differences between subject groups did exist during the memory and verbal tasks, indicating a correlation between the level of hemoglobin activation and the degree of AD severity. There was also a gender effect on the hemoglobin changes induced by functional stimulation in our study. Moreover, we demonstrated the potential of distinct DL models to boost multi-class classification performance. The highest accuracy was achieved by a Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model using the original dataset of three hemoglobin types (0.909 ± 0.012 on average). Compared with conventional machine learning algorithms, the DL models produced better classification performance. These findings demonstrated the capability of DL frameworks in analyzing imbalanced class distributions and validated the great potential of fNIRS-based approaches to contribute further to the development of AD diagnosis systems.
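One common way to cope with the extreme class imbalance mentioned above is inverse-frequency class weighting during training. The abstract does not state which balancing strategy the paper uses, so the following is only a generic sketch with hypothetical group counts:

```python
import numpy as np

# Hypothetical imbalanced label distribution across four subject groups
# (counts are illustrative; the abstract only states the dataset is
# extremely imbalanced).
labels = np.array([0] * 120 + [1] * 40 + [2] * 25 + [3] * 15)

def balanced_class_weights(y):
    """Inverse-frequency weights: rare classes get proportionally
    larger weights in the training loss."""
    classes, counts = np.unique(y, return_counts=True)
    weights = len(y) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

weights = balanced_class_weights(labels)
# The rarest group (label 3) receives the largest weight.
```

These per-class weights would typically be passed to the loss function of whichever DL model is trained (e.g., the CNN-LSTM), so that majority-class samples do not dominate the gradient.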
Automatic screening and diagnosis of lung abnormalities from chest X-ray images has recently been drawing attention from the computer vision and medical imaging communities. Previous studies of deep neural networks have predominantly demonstrated the effectiveness of binary lung disease classification. However, large numbers of medical images, which can be labeled with a variety of existing or suspected pathologies, must be interpreted and reported upon daily by an individual radiologist; this poses a challenge to maintaining consistently high diagnostic accuracy. In this paper, we present a comparative study of knowledge distillation (KD) in deep learning for the classification of abnormalities in chest X-ray images. This method aims either to distill knowledge from cumbersome teacher models into lightweight student models or to self-train these student models, to generate weakly supervised multi-label lung disease classifications. Our approach was based on multi-task deep learning architectures that, in addition to multi-class classification, supported saliency-map visualizations of the pathological regions where an abnormality was located. A self-training KD framework, in which the model learned from itself, was shown to outperform both the well-established baseline training procedure and standard KD, achieving AUC improvements of up to 6.39% and 3.89%, respectively. Through application to the publicly available ChestX-ray14 dataset, we demonstrated that our approach efficiently handled the interdependencies among 14 weakly annotated thoracic diseases and achieved state-of-the-art classification compared with current deep learning baselines.
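The distillation described above is conventionally implemented as a temperature-softened KL divergence between teacher and student outputs (Hinton-style soft targets). The abstract gives no loss details, so this numpy sketch only illustrates the standard KD term, with an assumed temperature of 4:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target KD term: KL(teacher_T || student_T), scaled by T^2
    so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T ** 2 * np.sum(p * (np.log(p) - np.log(q)))

# Identical logits -> zero distillation loss.
print(kd_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # 0.0
```

In self-training KD, the "teacher" logits would come from an earlier snapshot of the same model, so the student distills from its own predictions rather than from a separate cumbersome network.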