Background: Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for classifying squamous epithelium cervical intraepithelial neoplasia (CIN) into normal, CIN1, CIN2, and CIN3 grades. Methods: In this study, a deep learning (DL)-based nuclei segmentation approach is investigated that gathers localized information by generating superpixels with the simple linear iterative clustering (SLIC) algorithm and training a convolutional neural network. Results: The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. Conclusions: The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods.
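The superpixel step described above can be sketched as a minimal, SLIC-style clustering in NumPy. This is an illustrative sketch only: the grid initialization, compactness weighting, and iteration count are assumptions for demonstration, not the paper's settings (a practical pipeline would use an optimized implementation such as scikit-image's `slic`).

```python
import numpy as np

def simple_slic(image, n_segments=16, compactness=0.2, n_iters=5):
    """Minimal SLIC-style superpixel clustering (illustrative sketch).

    image: (H, W) grayscale array with intensities in [0, 1].
    Returns an (H, W) integer label map assigning each pixel a superpixel.
    """
    h, w = image.shape
    grid = int(np.sqrt(n_segments))
    step = max(h, w) / grid
    # Initialize cluster centers on a regular grid: (intensity, y, x).
    ys = np.linspace(step / 2, h - step / 2, grid)
    xs = np.linspace(step / 2, w - step / 2, grid)
    centers = np.array([[image[int(y), int(x)], y, x] for y in ys for x in xs])

    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_iters):
        # SLIC's combined distance: color term + spatially scaled term.
        d_color = (image[None] - centers[:, 0, None, None]) ** 2
        d_space = ((yy[None] - centers[:, 1, None, None]) ** 2 +
                   (xx[None] - centers[:, 2, None, None]) ** 2)
        dist = d_color + (compactness / step) ** 2 * d_space
        labels = np.argmin(dist, axis=0)
        # Update each center to the mean of its assigned pixels.
        for k in range(len(centers)):
            mask = labels == k
            if mask.any():
                centers[k] = [image[mask].mean(), yy[mask].mean(), xx[mask].mean()]
    return labels
```

Each superpixel could then be cropped (with surrounding context) and fed to the CNN as a localized training patch.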
In this paper, we propose a method for localizing the optic nerve head and segmenting the optic disc/cup in retinal fundus images. The approach is based on a simple two-stage Mask R-CNN, in contrast to the sophisticated methods that represent the state of the art in the literature. In the first stage, we detect and crop around the optic nerve head, then feed the cropped image as input to the second stage. The second-stage network is trained using a weighted loss to produce the final segmentation. To further improve the detection in the first stage, we propose a new fine-tuning strategy that combines the cropping output of the first stage with the original training image to train a new detection network using different scales for the region proposal network anchors. We evaluate the method on the Retinal Fundus Images for Glaucoma Analysis (REFUGE), Magrabi, and MESSIDOR datasets, using the REFUGE training subset to train the models. Our method achieved a 0.0430 mean absolute error in the vertical cup-to-disc ratio (MAE vCDR) on the REFUGE test set, compared to 0.0414 obtained using complex, multiple-ensemble-network methods. The models trained with the proposed method transfer well to datasets outside REFUGE, achieving a MAE vCDR of 0.0785 and 0.077 on the MESSIDOR and Magrabi datasets, respectively, without being retrained. In terms of detection accuracy, the proposed fine-tuning strategy improved the detection rate from 96.7% to 98.04% on MESSIDOR and from 93.6% to 100% on Magrabi, compared to the detection rates reported in the literature.
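The two-stage crop-then-segment idea and the vCDR metric can be illustrated with a small sketch. The margin value and the mask-based vCDR computation below are assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np

def crop_for_stage2(image, box, margin=0.25):
    """Crop around a stage-1 optic-nerve-head detection with a safety margin.

    box: (x0, y0, x1, y1) detection in full-image pixel coordinates.
    Returns the crop and its (x, y) offset for mapping masks back.
    """
    h, w = image.shape[:2]
    x0, y0, x1, y1 = box
    mx, my = (x1 - x0) * margin, (y1 - y0) * margin
    cx0, cy0 = max(0, int(x0 - mx)), max(0, int(y0 - my))
    cx1, cy1 = min(w, int(x1 + mx)), min(h, int(y1 + my))
    return image[cy0:cy1, cx0:cx1], (cx0, cy0)

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary stage-2 masks."""
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_h = disc_rows.max() - disc_rows.min() + 1
    cup_h = cup_rows.max() - cup_rows.min() + 1 if cup_rows.size else 0
    return cup_h / disc_h
```

The MAE vCDR figures above are then the mean absolute difference between `vertical_cdr` computed on predicted and on ground-truth masks.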
Cervical cancer is the second most common cancer affecting women worldwide but is curable if diagnosed early. Routinely, expert pathologists visually examine histology slides to assess cervix tissue abnormalities. A localized, fusion-based, hybrid imaging and deep learning approach is explored to classify squamous epithelium into cervical intraepithelial neoplasia (CIN) grades for a dataset of 83 digitized histology images. Partitioning the epithelium region into 10 vertical segments, 27 handcrafted image features and convolutional neural network features from rectangular, sliding-window patches are computed for each segment. The imaging and deep learning patch features are combined and used as inputs to a secondary classifier for individual-segment and whole-epithelium classification. The hybrid method achieved a 15.51% and 11.66% improvement over the deep learning and imaging approaches alone, respectively, with an 80.72% whole-epithelium CIN classification accuracy, showing the enhanced epithelium CIN classification potential of fusing image and deep learning features.
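The fusion step can be sketched as simple per-segment feature concatenation followed by an aggregation of segment-level predictions. The 64-dimensional CNN feature size and the majority-vote aggregation rule are assumptions for illustration; the abstract does not specify how segment predictions are combined:

```python
import numpy as np

def fuse_features(handcrafted, cnn_feats):
    """Concatenate handcrafted and CNN features per vertical segment.

    handcrafted: (n_segments, 27); cnn_feats: (n_segments, d).
    The fused rows feed the secondary segment-level classifier.
    """
    return np.concatenate([handcrafted, cnn_feats], axis=1)

def whole_epithelium_grade(segment_preds):
    """Aggregate the 10 segment-level CIN predictions into one grade.

    A majority vote is one plausible rule (assumption, not the paper's).
    """
    vals, counts = np.unique(segment_preds, return_counts=True)
    return vals[np.argmax(counts)]
```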
The multi-label classification problem in Unmanned Aerial Vehicle (UAV) images is particularly challenging compared to single-label classification due to its combinatorial nature. To tackle this issue, we propose in this paper a deep learning approach based on an encoder-decoder neural network architecture with channel and spatial attention mechanisms. Specifically, the encoder module, which is based on a pretrained convolutional neural network (CNN), transforms the input image into a set of feature maps using a suitable feature combination. To further improve the feature representation, this module incorporates a squeeze-and-excitation (SE) layer for modelling the interdependencies between the channels of the feature maps. The decoder module, which is based on a long short-term memory (LSTM) network, generates, in a sequential way, the classes present in the image. At each time step, it predicts the next class label by aligning its hidden state to the corresponding region in the image by means of an adaptive spatial attention mechanism. The experiments carried out on two UAV datasets with a spatial resolution of 2 cm show that our method is promising in predicting the labels present in the image while attending to the relevant objects in the image. Additionally, it provides better classification results compared to state-of-the-art methods.
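The squeeze-and-excitation recalibration mentioned above follows a standard three-step pattern: global-average-pool each channel (squeeze), pass the descriptor through a two-layer bottleneck ending in a sigmoid (excitation), and rescale the channels by the resulting gates. A minimal NumPy sketch, with the weight shapes (reduction ratio r) assumed for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excitation(feature_maps, w1, w2):
    """Squeeze-and-Excitation channel recalibration (NumPy sketch).

    feature_maps: (C, H, W); w1: (C//r, C); w2: (C, C//r),
    where r is the bottleneck reduction ratio.
    """
    # Squeeze: global average pool each channel to one descriptor.
    z = feature_maps.mean(axis=(1, 2))            # (C,)
    # Excitation: FC -> ReLU -> FC -> sigmoid gives per-channel gates in (0, 1).
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))     # (C,)
    # Scale: reweight each channel by its learned gate.
    return feature_maps * s[:, None, None]
```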
A fuzzy logic-based color histogram analysis technique is presented for discriminating benign skin lesions from malignant melanomas in dermoscopy images. The approach extends previous research by utilizing a fuzzy set for skin lesion color for a specified class of skin lesions, using alpha-cut and support-set cardinality to quantify a fuzzy-ratio skin lesion color feature. Skin lesion discrimination results are reported for the fuzzy clustering ratio over different regions of the lesion for a dataset of 517 dermoscopy images consisting of 175 invasive melanomas and 342 benign lesions. Experimental results show that the fuzzy clustering ratio applied over an eight-connected neighborhood on the outer 25% of the skin lesion, with an alpha-cut of 0.08, can recognize 92.6% of melanomas with approximately 13.5% false-positive lesions. These results show the critical importance of colors in the lesion periphery. Our fuzzy logic-based description of lesion colors offers relevance to clinical descriptions of malignant melanoma.
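The alpha-cut and support-set cardinalities mentioned above can be illustrated with a small sketch. Reading the ratio as |alpha-cut| / |support| over per-pixel membership values is an assumed interpretation of the abstract, not the paper's confirmed definition:

```python
import numpy as np

def fuzzy_color_ratio(membership, alpha=0.08):
    """Fuzzy ratio from alpha-cut and support-set cardinality (sketch).

    membership: array of per-pixel fuzzy membership values in the
    lesion-class color set, each in [0, 1].
    The alpha-cut keeps pixels with membership >= alpha; the support set
    keeps pixels with any positive membership.
    """
    support = membership > 0
    alpha_cut = membership >= alpha
    return alpha_cut.sum() / max(support.sum(), 1)
```

In the reported experiments this ratio is computed over the outer 25% of the lesion, where peripheral colors proved most discriminative.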
In this paper, we propose a novel end-to-end learnable architecture based on Dense Convolutional Networks (DCN) for the classification of electrocardiogram (ECG) signals. This architecture comprises two main modules: the first is generative and the second is discriminative. The generative module converts the one-dimensional ECG signal into an image by means of fully connected, up-sampling, and convolution layers. The discriminative module takes the generated image as input and carries out feature learning and classification. To handle the data imbalance problem characterizing ECG data, we propose to use the focal loss (FL), which reshapes the standard cross-entropy loss so that it down-weights the loss assigned to well-classified ECG beats. In the experiments, we validate the method on the well-known MIT-BIH arrhythmia database in four different scenarios: four classes in the first, five in the second, and 12 in the third; in the fourth, supraventricular beats versus the other three classes and ventricular beats versus the other three classes from the four-class scenario are used. The results obtained show that the proposed method achieves a significant accuracy improvement over all previous state-of-the-art methods.
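The reshaping of cross-entropy described above is the standard focal loss, FL(p_t) = -(1 - p_t)^gamma * log(p_t): when a beat is already well classified (p_t near 1), the (1 - p_t)^gamma factor shrinks its contribution, so training focuses on the rare, hard classes. A minimal sketch (gamma = 2 is a common default, assumed here rather than taken from the paper):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0):
    """Multi-class focal loss averaged over beats (NumPy sketch).

    probs: (N, K) predicted class probabilities; targets: (N,) class indices.
    gamma = 0 recovers the plain cross-entropy loss.
    """
    p_t = probs[np.arange(len(targets)), targets]  # probability of true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))
```

Because the modulating factor only down-weights easy examples, the loss of rare arrhythmia classes dominates the gradient, which is why it suits the heavy class imbalance of MIT-BIH.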