Automated medical image analysis is an emerging field of research that identifies disease with the help of imaging technology. Diabetic retinopathy (DR) is a retinal disease diagnosed in diabetic patients. Deep neural networks (DNNs) are widely used to classify diabetic retinopathy from fundus images collected from suspected persons. The proposed DR classification system achieves a symmetrically optimized solution by combining a Gaussian mixture model (GMM), a visual geometry group network (VGGNet), singular value decomposition (SVD) and principal component analysis (PCA), and softmax, for region segmentation, high-dimensional feature extraction, feature selection, and fundus image classification, respectively. The experiments were performed on a standard Kaggle dataset containing 35,126 images. The proposed VGG-19 DNN-based DR model outperformed AlexNet and the scale-invariant feature transform (SIFT) in terms of classification accuracy and computational time. Using PCA and SVD feature selection with fully connected (FC) layers yielded classification accuracies of 92.21%, 98.34%, 97.96%, and 98.13% for FC7-PCA, FC7-SVD, FC8-PCA, and FC8-SVD, respectively.
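The PCA/SVD feature-selection step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the feature matrix stands in for FC7 activations of a VGG-19 network, the component count `k` and the five-class output are illustrative assumptions, and the classifier weights are untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for FC7 activations of VGG-19:
# 100 fundus-image feature vectors, 4096 dimensions each.
features = rng.normal(size=(100, 4096))
k = 50  # illustrative number of retained components

# --- PCA: project onto the top-k principal components ---
centered = features - features.mean(axis=0)
# SVD of the centered data yields the principal directions in Vt.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
pca_features = centered @ Vt[:k].T            # shape (100, 50)

# --- SVD: low-rank projection of the raw (uncentered) features ---
U2, S2, Vt2 = np.linalg.svd(features, full_matrices=False)
svd_features = features @ Vt2[:k].T           # shape (100, 50)

def softmax(z):
    """Row-wise softmax with the usual max-shift for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Softmax over hypothetical class scores (e.g., 5 DR severity grades);
# weights are random, shown only to make the pipeline shape concrete.
W = rng.normal(size=(k, 5))
probs = softmax(pca_features @ W)
print(pca_features.shape, probs.shape)        # (100, 50) (100, 5)
```

The only difference between the two branches is centering: PCA decomposes the mean-centered data, while the plain SVD branch factorizes the raw feature matrix.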
Lung cancer is one of the major causes of cancer-related deaths due to its aggressive nature and delayed detection at advanced stages. Early detection of lung cancer is critical for an individual's survival, yet remains a significant challenge. Generally, chest radiographs (X-ray) and computed tomography (CT) scans are used initially for the diagnosis of malignant nodules; however, the possible existence of benign nodules leads to erroneous decisions. At early stages, benign and malignant nodules show very close resemblance to each other. In this paper, a novel deep learning-based model with multiple strategies is proposed for the precise diagnosis of malignant nodules. Motivated by the recent achievements of deep convolutional neural networks (CNNs) in image analysis, we use two deep three-dimensional (3D) customized mixed link network (CMixNet) architectures for lung nodule detection and classification, respectively. Nodule detection was performed through Faster R-CNN on efficiently learned features from CMixNet and a U-Net-like encoder–decoder architecture. Classification of the nodules was performed through a gradient boosting machine (GBM) on the learned features from the designed 3D CMixNet structure. To reduce false positives and misdiagnoses caused by different types of errors, the final decision was made in conjunction with physiological symptoms and clinical biomarkers. With the advent of the internet of things (IoT) and electro-medical technology, wireless body area networks (WBANs) provide continuous monitoring of patients, which helps in the diagnosis of chronic diseases—especially metastatic cancers. The deep learning model for nodule detection and classification, combined with clinical factors, helps reduce misdiagnoses and false positive (FP) results in early-stage lung cancer diagnosis.
The proposed system was evaluated on the LIDC-IDRI dataset, achieving 94% sensitivity and 91% specificity, better results than those obtained by existing methods.
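Sensitivity and specificity, the two metrics reported above, follow directly from the confusion-matrix counts. A minimal sketch, using illustrative counts (not the paper's actual test-set numbers) chosen so the ratios match the reported 94%/91%:

```python
# Illustrative confusion-matrix counts (not from the paper).
tp, fn = 94, 6    # malignant nodules: correctly flagged vs. missed
tn, fp = 91, 9    # benign nodules: correctly cleared vs. false alarms

sensitivity = tp / (tp + fn)     # true positive rate: caught malignancies
specificity = tn / (tn + fp)     # true negative rate: cleared benign cases
print(sensitivity, specificity)  # 0.94 0.91
```

Reducing the false positives (`fp`) raised by benign nodules, which the abstract identifies as the main source of erroneous decisions, is exactly what improves specificity.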
In the field of ophthalmology, diabetic retinopathy (DR) is a major cause of blindness. DR is diagnosed from retinal lesions, including exudates. Exudates are among the early signs and serious anomalies of DR, so these lesions should be detected promptly and treated immediately to prevent loss of vision. In this paper, a pretrained convolutional neural network (CNN)-based framework is proposed for the detection of exudates. Recently, deep CNNs have typically been applied individually to solve specific problems, but pretrained CNN models with transfer learning can reuse previously acquired knowledge to solve related problems. In the proposed approach, data preprocessing is first performed to standardize the exudate patches. Next, region of interest (ROI) localization is used to localize the features of exudates, and transfer learning is performed for feature extraction using pretrained CNN models (Inception-v3, Residual Network-50, and Visual Geometry Group Network-19). The fused features from the fully connected (FC) layers are then fed into a softmax classifier for exudate classification. The performance of the proposed framework has been analyzed using two well-known publicly available databases, e-Ophtha and DIARETDB1. The experimental results demonstrate that the proposed pretrained CNN-based framework outperforms existing techniques for the detection of exudates.
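The fusion step in this framework, concatenating FC-layer features from several pretrained backbones before a softmax classifier, can be sketched as follows. This is a shape-level illustration under stated assumptions: the patch count, the per-backbone feature dimensions, and the untrained classifier weights are all hypothetical, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical FC-layer features for 32 exudate patches, one matrix per
# pretrained backbone (dimensions are illustrative stand-ins).
feat_inception = rng.normal(size=(32, 2048))   # Inception-v3 features
feat_resnet50  = rng.normal(size=(32, 2048))   # ResNet-50 features
feat_vgg19     = rng.normal(size=(32, 4096))   # VGG-19 FC features

# Fuse by concatenating the per-backbone feature vectors.
fused = np.concatenate([feat_inception, feat_resnet50, feat_vgg19], axis=1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Binary softmax classifier (exudate vs. non-exudate); weights untrained.
W = rng.normal(size=(fused.shape[1], 2)) * 0.01
probs = softmax(fused @ W)
print(fused.shape, probs.shape)   # (32, 8192) (32, 2)
```

Concatenation keeps the complementary evidence from each backbone; the classifier then learns which fused dimensions discriminate exudate patches.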
Diabetic retinopathy (DR) is a fast-spreading disease across the globe, caused by diabetes. DR may lead diabetic patients to complete vision loss. In this scenario, early identification of DR is essential to preserve eyesight and enable timely treatment. DR can be detected manually by ophthalmologists or by an automated system. In the manual approach, analysis and interpretation of retinal fundus images require ophthalmologists, which is time-consuming and very expensive; in the automated approach, artificial intelligence plays an imperative role in ophthalmology, and specifically in the early detection of diabetic retinopathy, improving on traditional detection approaches. Recently, numerous advanced studies on the identification of DR have been reported. This paper presents a detailed review of DR detection along three major aspects: retinal datasets, DR detection methods, and performance evaluation metrics. Furthermore, this study also covers the authors' observations and provides future directions in the field of diabetic retinopathy to help the research community overcome its open challenges.
INDEX TERMS Artificial intelligence, deep learning, diabetic retinopathy, fundus images, machine learning, ophthalmology.
Optical analysis techniques have recently been used to detect and identify objects from large collections of images. Hyperspectral imaging is one such technique. Human vision is based on three basic color bands (red, green, and blue), but spectral imaging divides the view into many more bands. Hyperspectral remote sensors acquire imagery data in the form of hundreds of adjoining spectral bands. In this paper, our purpose is to illustrate the fundamental concepts of hyperspectral remote sensing, remotely sensed information, methods for hyperspectral imaging, and applications based on hyperspectral imaging. Moreover, in the forensic context, novel methods involving deep neural networks are elaborated in this paper. The proposed idea can be useful for further research in the field of hyperspectral imaging using deep learning.
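The contrast drawn above, three RGB bands versus hundreds of adjoining spectral bands, comes down to the shape of the data: a hyperspectral image is a cube with two spatial axes and one spectral axis. A minimal sketch with random data (the 224-band count is an illustrative assumption, typical of airborne sensors, not a value from this paper):

```python
import numpy as np

# A hyperspectral image is a data cube: two spatial axes plus hundreds
# of contiguous spectral bands (an RGB image has only 3 bands).
height, width, bands = 64, 64, 224
cube = np.random.default_rng(2).random((height, width, bands))

# The spectral signature of one pixel is its reflectance across all bands;
# this per-pixel curve is what enables material identification.
signature = cube[10, 20, :]                   # shape (224,)

# For visualization, three bands can be pulled out as a false-color
# composite, collapsing the cube back to an RGB-like image.
rgb = cube[:, :, [30, 20, 10]]                # shape (64, 64, 3)
print(signature.shape, rgb.shape)
```

The per-pixel signature vector is the raw input that deep neural networks consume in hyperspectral classification tasks.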
Image-based object recognition is a well-studied topic in the field of computer vision. Feature extraction for hand-drawn sketch recognition and retrieval has become increasingly popular among computer vision researchers. The increasing use of touchscreens and portable devices has challenged the computer vision community to access sketches more efficiently and effectively. In this article, a novel deep convolutional neural network (DCNN)-based framework for hand-drawn sketch recognition is proposed, composed of three well-known pre-trained DCNN architectures in a transfer-learning setting with a global average pooling (GAP) strategy. First, augmented variants of natural images were generated and combined with the TU-Berlin sketch images across all 250 corresponding sketch object categories. Second, feature maps were extracted from the input images by three asymmetric DCNN architectures, namely the Visual Geometry Group Network (VGGNet), Residual Networks (ResNet), and Inception-v3. Finally, the distinct feature maps were concatenated and feature reduction was carried out via the GAP layer. The resulting feature vector was fed into a softmax classifier for sketch classification. The performance of the proposed framework is comprehensively evaluated on the augmented-variants TU-Berlin sketch dataset for sketch classification and retrieval tasks. Experimental outcomes reveal that the proposed framework brings substantial improvements over state-of-the-art methods for sketch classification and retrieval.
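The GAP-based fusion described above can be sketched shape-for-shape: each backbone's final convolutional maps are averaged over their spatial axes (one scalar per channel), and the pooled vectors are concatenated into a single descriptor. Channel counts and spatial sizes below are illustrative assumptions, not the framework's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical final convolutional feature maps for one sketch image,
# shaped (channels, height, width); sizes are illustrative stand-ins.
maps_vgg       = rng.normal(size=(512, 7, 7))
maps_resnet    = rng.normal(size=(2048, 7, 7))
maps_inception = rng.normal(size=(2048, 5, 5))

def gap(feature_maps):
    """Global average pooling: mean over spatial axes, one value per channel."""
    return feature_maps.mean(axis=(1, 2))

# Pool each network's maps, then concatenate into one descriptor vector;
# this is what would feed the softmax classifier.
descriptor = np.concatenate(
    [gap(m) for m in (maps_vgg, maps_resnet, maps_inception)]
)
print(descriptor.shape)   # (4608,) = 512 + 2048 + 2048
```

Note that GAP makes the fusion indifferent to each backbone's spatial resolution: the 7×7 and 5×5 maps both collapse to one value per channel, so only channel counts determine the descriptor length.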