Lung nodules are vital indicators of the presence of lung cancer. Early detection improves the patient's survival rate by allowing treatment to start at the right time. The detection and classification of malignancy in Computed Tomography (CT) images is a time-consuming and difficult task for radiologists, which has led researchers to develop Computer-Aided Diagnosis (CAD) systems to mitigate this burden. The performance of CAD systems continues to improve through the use of deep learning techniques for lung cancer screening. In this paper, we propose a transferable texture Convolutional Neural Network (CNN) to improve the classification performance of pulmonary nodules in CT scans. An Energy Layer (EL), which extracts texture features from the convolutional layer, is incorporated in our scheme. The inclusion of the EL reduces the number of learnable parameters of the network, which in turn reduces the memory requirements and computational complexity. The proposed model has only three convolutional layers and one EL in place of a pooling layer. Overall, the proposed CNN architecture comprises nine layers for automatic feature extraction and classification of pulmonary nodule candidates as malignant or benign. Furthermore, the pre-trained model of the proposed CNN is also used to handle smaller-dataset classification problems via transfer learning. This work has been evaluated on the publicly available LIDC-IDRI and LUNGx Challenge databases using several evaluation metrics, such as accuracy, specificity, error rate, and AUC. The proposed model was trained with six-fold cross-validation and achieved an accuracy of 96.69% ± 0.72% with only a 3.30% ± 0.72% error rate, while the measured AUC and recall are 99.11% ± 0.45% and 97.19% ± 0.57%, respectively. Moreover, we also tested the proposed technique on the MNIST dataset and achieved state-of-the-art results in terms of accuracy and error rate.
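The abstract does not spell out the exact form of the Energy Layer, but in the texture-CNN literature such a layer typically replaces pooling by reducing each convolutional feature map to a single energy value, so only one scalar per channel is passed on. A minimal sketch under that assumption (the function name, shapes, and mean-squared-activation definition of "energy" are illustrative, not taken from the paper):

```python
import numpy as np

def energy_layer(feature_maps):
    """Collapse each convolutional feature map to one texture-energy value.

    feature_maps: array of shape (channels, height, width).
    Returns an array of shape (channels,) holding the mean squared
    activation of each map -- an assumed, common definition of "energy".
    """
    return np.mean(np.square(feature_maps), axis=(1, 2))

# Toy usage: two 4x4 feature maps with constant activations 1 and 2.
fm = np.stack([np.ones((4, 4)), 2 * np.ones((4, 4))])
energies = energy_layer(fm)  # -> [1.0, 4.0]
```

Because each map contributes a single value regardless of its spatial size, such a layer has no learnable parameters, which is consistent with the abstract's claim of reduced memory and computational cost.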
The existence of pulmonary nodules can indicate the presence of lung cancer, and the Computer-Aided Diagnosis (CAD) and classification of such nodules in CT images can improve lung cancer screening. Classic CAD systems use a nodule detector and a feature-based classifier. In this work, we propose a decision-level fusion technique to improve the performance of a CAD system for lung nodule classification. First, we evaluate the performance of Support Vector Machine (SVM) and AdaBoostM2 algorithms on deep features from state-of-the-art transferable architectures (VGG-16, VGG-19, GoogLeNet, Inception-V3, ResNet-18, ResNet-50, ResNet-101, and InceptionResNet-V2). Then, we analyze the performance of the SVM and AdaBoostM2 classifiers as a function of the deep features. We also extract deep features by identifying the optimal layers, which improves the performance of the classifiers. The classification accuracy increases from 76.88% to 86.28% for ResNet-101 and from 67.37% to 83.40% for GoogLeNet; the error rate is likewise reduced significantly. Moreover, the results show that SVM is more robust and efficient on deep features than AdaBoostM2. The results are based on 4-fold cross-validation on the publicly available LUNGx Challenge dataset. We show that the proposed technique outperforms state-of-the-art techniques, achieving an accuracy of 90.46% ± 0.25%. INDEX TERMS Computed tomography, computer-aided diagnosis, support vector machine, AdaBoostM2, biomedical image processing, lung nodule, deep convolutional neural network, deep features, LUNGx challenge.
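The abstract names decision-level fusion without detailing it; one common form combines the class-probability outputs of several classifiers (e.g., classifiers trained on deep features from different networks) by weighted averaging before taking the final decision. A hedged, stdlib-only sketch of that idea (function name and uniform default weights are our illustrative choices, not the paper's):

```python
def fuse_decisions(prob_vectors, weights=None):
    """Decision-level fusion by weighted averaging of class probabilities.

    prob_vectors: list of per-classifier probability vectors (same length).
    weights: optional per-classifier weights; defaults to uniform.
    Returns the index of the winning class after fusion.
    """
    n = len(prob_vectors)
    if weights is None:
        weights = [1.0 / n] * n
    n_classes = len(prob_vectors[0])
    fused = [
        sum(w * p[c] for w, p in zip(weights, prob_vectors))
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=fused.__getitem__)

# Two classifiers disagree; fusion sides with the more confident one.
label = fuse_decisions([[0.6, 0.4], [0.2, 0.8]])  # fused: [0.4, 0.6] -> 1
```

Averaging at the decision level lets weak classifiers cancel each other's mistakes, which is the usual motivation for fusing multiple deep-feature pipelines.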
Wearable electronics capable of recording and transmitting biosignals can provide convenient and pervasive health monitoring. A typical EEG recording produces a large amount of data, and conventional compression methods cannot compress the data below the Nyquist rate, so the data remain large even after compression; this demands large storage and, hence, long transmission times. Compressed sensing offers a solution to this problem by providing a way to compress data below the Nyquist rate. In this paper, a double temporal sparsity based reconstruction algorithm is applied to recover compressively sampled EEG data. The results are further improved by modifying the algorithm with the Schatten-p norm and by applying a decorrelation transformation to the EEG data before processing. The proposed modified double temporal sparsity based reconstruction algorithm outperforms Block Sparse Bayesian Learning (BSBL) and rakeness-based compressed sensing algorithms in terms of Signal-to-Noise-and-Distortion Ratio (SNDR) and Normalized Mean Square Error (NMSE). Simulation results further show that the proposed algorithm has a better convergence rate and shorter execution time.
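The two reported figures of merit have standard definitions: NMSE is the reconstruction-error energy normalized by the signal energy, and SNDR is its negative on a decibel scale. A small sketch of both (variable names are ours):

```python
import numpy as np

def nmse(x, x_hat):
    """Normalized Mean Square Error between original x and recovery x_hat."""
    return np.sum((x - x_hat) ** 2) / np.sum(x ** 2)

def sndr_db(x, x_hat):
    """Signal-to-Noise-and-Distortion Ratio in dB; equals -10*log10(NMSE)."""
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))

x = np.array([1.0, 2.0])
err = nmse(x, np.array([1.0, 1.0]))      # (0 + 1) / 5 = 0.2
gain = sndr_db(x, np.array([1.0, 1.0]))  # 10*log10(5), about 6.99 dB
```

Since SNDR = -10·log10(NMSE), a lower NMSE and a higher SNDR report the same improvement in reconstruction quality.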
Compressive sensing (CS) offers compression of data below the Nyquist rate, making it an attractive solution in the field of medical imaging, and it has been extensively used for ultrasound (US) compression and sparse recovery. In practice, CS reduces data sensing, transmission, and storage. Compressive sensing relies on the sparsity of the data; i.e., the data should be sparse either in the original domain or in some transformed domain. A look at the literature reveals a rich variety of algorithms suggested for accurately recovering data from far fewer samples using compressive sensing, albeit with tradeoffs in efficiency. This paper reviews a number of significant CS algorithms used to recover US images from undersampled data, along with a discussion of CS for 3D US images. The sparse recovery algorithms applied to US are classified into five groups, and the algorithms in each group are discussed and summarized with respect to their underlying technique, compression ratio, sparsifying transform, 3D ultrasound support, and use of deep learning. Research gaps and future directions are discussed in the conclusion. This study is intended to benefit young researchers who plan to work in the area of CS and its applications, specifically to US.
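As a concrete example of the greedy family of sparse-recovery algorithms such a review covers, Orthogonal Matching Pursuit (OMP) reconstructs a k-sparse signal from undersampled linear measurements by repeatedly selecting the sensing-matrix column most correlated with the residual and re-fitting by least squares. A minimal NumPy sketch of generic OMP, not of any specific algorithm from the reviewed papers:

```python
import numpy as np

def omp(A, y, k, tol=1e-8):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    n = A.shape[1]
    residual = y.astype(float)
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Recover a 3-sparse signal of length 64 from 32 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64)) / np.sqrt(32)
x = np.zeros(64)
x[[5, 20, 41]] = [1.5, -2.0, 0.7]
x_rec = omp(A, A @ x, k=3)
```

With a random Gaussian sensing matrix and k well below the number of measurements, OMP typically recovers the support exactly, which illustrates the sub-Nyquist recovery the abstract describes.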