Background and objectives: Spectral Domain Optical Coherence Tomography (SD-OCT) is a volumetric imaging technique that captures fine structural patterns between retinal layers, such as small accumulations of fluid. Since 2012, the performance of automatic medical image analysis has steadily increased through the use of deep learning models that learn task-relevant features automatically, instead of relying on manually designed visual features. Nevertheless, providing insight into and interpretation of the predictions made by a model is still a challenge. This paper describes a deep learning model able to detect medically interpretable information in relevant scans of a volume in order to classify diabetes-related retinal diseases.
Methods: This article presents a new deep learning model, OCT-NET, a customized convolutional neural network for processing scans extracted from optical coherence tomography volumes. OCT-NET is applied to the classification of three conditions seen in SD-OCT volumes. Additionally, the proposed model includes a feedback stage that highlights the areas of the scans that support the interpretation of the results. This information is potentially useful for a medical specialist assessing the prediction produced by the model.
Results: The proposed model was tested on the public SERI+CUHK and A2A SD-OCT data sets, which contain healthy cases as well as cases of diabetic retinopathy, diabetic macular edema and age-related macular degeneration. The experimental evaluation shows that the proposed method outperforms conventional convolutional deep learning models from the state of the art, reaching a precision of 93% on the SERI+CUHK data set and an area under the ROC curve (AUC) of 0.99 on the A2A SD-OCT data set.
Conclusions: The proposed method is able to classify the three studied retinal diseases with high accuracy. One advantage of the method is its ability to produce interpretable clinical information by highlighting the regions of the image that contribute most to the classifier's decision.
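The feedback stage described above can be illustrated with a class-activation-style computation: a per-class heatmap is formed as a weighted sum of the last convolutional feature maps, using the classifier weights for the predicted class. This is a minimal sketch of the general technique, not OCT-NET's actual implementation; all names and shapes are illustrative assumptions.

```python
# Hedged sketch of a class-activation-map style feedback stage.
# feature_maps and class_weights are hypothetical placeholders for the
# outputs of the last convolutional layer and the classifier weights.

def class_activation_map(feature_maps, class_weights):
    """feature_maps: list of K maps, each an HxW list of lists.
    class_weights: list of K scalars for one output class.
    Returns an HxW heatmap (weighted sum over the K maps)."""
    h = len(feature_maps[0])
    w = len(feature_maps[0][0])
    heatmap = [[0.0] * w for _ in range(h)]
    for fmap, weight in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                heatmap[i][j] += weight * fmap[i][j]
    return heatmap

# Toy example: two 2x2 feature maps.
maps = [[[1.0, 0.0], [0.0, 1.0]],
        [[0.0, 2.0], [2.0, 0.0]]]
weights = [0.5, 0.25]
cam = class_activation_map(maps, weights)
```

In practice the resulting heatmap is upsampled to the scan resolution and overlaid on the B-scan, so the specialist can see which regions drove the prediction.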
The digitalization of clinical workflows and the increasing performance of deep learning algorithms are paving the way towards new methods for tackling cancer diagnosis. However, the availability of medical specialists to annotate digitized images and free-text diagnostic reports does not scale with the need for the large datasets required to train robust computer-aided diagnosis methods that can handle the high variability of clinical cases and data. This work proposes and evaluates an approach that eliminates the need for manual annotations when training computer-aided diagnosis tools in digital pathology. The approach comprises two components: one automatically extracts semantically meaningful concepts from diagnostic reports, and the other uses them as weak labels to train convolutional neural networks (CNNs) for histopathology diagnosis. The approach is trained (through 10-fold cross-validation) on 3,769 clinical images and reports provided by two hospitals, and tested on over 11,000 images from private and publicly available datasets. The CNN trained with automatically generated labels is compared with the same architecture trained with manual labels. Results show that combining text analysis and end-to-end deep neural networks allows building computer-aided diagnosis tools that reach solid performance (micro-accuracy = 0.908 at image level) based only on existing clinical data, without the need for manual annotations.
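The weak-labelling idea can be sketched as mapping free-text diagnostic reports to class labels by matching domain terms. The label names and term lists below are hypothetical examples; the actual pipeline extracts semantically meaningful concepts with a dedicated text-analysis component rather than simple keyword matching.

```python
# Minimal, hypothetical sketch of weak-label extraction from reports.
# LABEL_TERMS and the label names are illustrative assumptions only.

LABEL_TERMS = {
    "high_grade_dysplasia": ["high grade dysplasia", "high-grade dysplasia"],
    "adenocarcinoma": ["adenocarcinoma"],
    "benign": ["no evidence of malignancy", "benign"],
}

def weak_labels(report_text):
    """Return the set of labels whose terms occur in the report."""
    text = report_text.lower()
    return {label for label, terms in LABEL_TERMS.items()
            if any(term in text for term in terms)}

labels = weak_labels("Colon biopsy: fragments with high-grade dysplasia.")
# {'high_grade_dysplasia'}
```

Labels produced this way are noisy by construction, which is why the CNN trained on them is compared against the same architecture trained on manual labels.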
Diabetic macular edema (DME) is one of the most common eye complications caused by diabetes mellitus, resulting in partial or total loss of vision. Optical Coherence Tomography (OCT) volumes have been widely used to diagnose different eye diseases thanks to their sensitivity to small amounts of fluid, inter-layer thickness and swelling. However, the lack of automatic image analysis tools to support disease diagnosis is still a problem. Convolutional neural networks (CNNs) have shown outstanding performance when applied to several medical image analysis tasks. This paper presents a model, OCT-NET, based on a CNN for the automatic classification of OCT volumes. The model was evaluated on a dataset of OCT volumes for DME diagnosis using a leave-one-out cross-validation strategy, obtaining an accuracy, sensitivity and specificity of 93.75% each.
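The leave-one-out protocol mentioned above holds out each volume once while training on all the others. A minimal sketch of the split generation, with hypothetical volume identifiers:

```python
# Illustrative leave-one-out split generator; `vol_*` names are
# placeholders, not identifiers from the actual dataset.

def leave_one_out_splits(volumes):
    """Yield (train_set, held_out) pairs, one per volume."""
    for i, held_out in enumerate(volumes):
        train_set = volumes[:i] + volumes[i + 1:]
        yield train_set, held_out

data = ["vol_a", "vol_b", "vol_c", "vol_d"]
splits = list(leave_one_out_splits(data))
# 4 folds; the first trains on vol_b..vol_d and tests on vol_a
```

With N volumes this produces N train/test folds, and the reported metrics are aggregated over all held-out predictions.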
Diabetic macular edema is one of the leading causes of legal blindness worldwide. Early, accessible detection of ophthalmological diseases is especially important in developing countries, where access to specialized medical diagnosis and treatment is severely limited. Deep learning models, such as deep convolutional neural networks, have shown great success in different computer vision tasks, and they have also been applied to medical images with great success. This paper presents a novel strategy based on convolutional neural networks that combines exudate localization and eye fundus images for the automatic classification of diabetic macular edema, as a support for diabetic retinopathy diagnosis.
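One simple way to combine exudate localization with the fundus image, sketched below, is to append the exudate probability map as an extra input channel to the network. This channel layout is an assumption for illustration, not necessarily the paper's exact fusion design.

```python
# Hypothetical channel-stacking sketch: RGB fundus pixels plus an
# exudate probability value per pixel, giving a 4-channel input.

def stack_channels(rgb_image, exudate_map):
    """rgb_image: HxW list of [r, g, b] pixels; exudate_map: HxW floats.
    Returns an HxW list of [r, g, b, exudate] pixels."""
    return [[pixel + [exudate_map[i][j]]
             for j, pixel in enumerate(row)]
            for i, row in enumerate(rgb_image)]

img = [[[10, 20, 30], [40, 50, 60]]]   # 1x2 toy image
mask = [[0.0, 0.9]]                     # exudate probabilities
stacked = stack_channels(img, mask)
# stacked[0][1] == [40, 50, 60, 0.9]
```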
Abstract. Medulloblastoma (MB) is a type of brain cancer that represents roughly 25% of all brain tumors in children. In the anaplastic medulloblastoma subtype, it is important to identify the degree of irregularity and lack of organization of cells, as this correlates with disease aggressiveness and is of clinical value when evaluating patient prognosis. This paper presents an image representation to distinguish these subtypes in histopathology slides. The approach combines learned features from (i) an unsupervised feature learning method using topographic independent component analysis, which captures scale, color and translation invariances, and (ii) learned linear combinations of Riesz wavelets computed at several orders and scales, capturing the granularity of multiscale rotation-covariant information. The contribution of this work is to show that combining two complementary feature learning approaches (unsupervised and supervised) improves classification performance. Our approach outperforms the best methods in the literature with statistical significance, achieving 99% accuracy on region-based data comprising 7,500 square regions from 10 patient studies diagnosed with medulloblastoma (5 anaplastic and 5 non-anaplastic).
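The combination step can be sketched as concatenating the per-region descriptors from the two streams (unsupervised TICA features and supervised Riesz-wavelet features) into a single vector before classification. Dimensions and values below are illustrative only.

```python
# Hedged sketch of per-region feature fusion by concatenation.
# The descriptor lengths and values are made-up toy numbers.

def combine_features(tica_features, riesz_features):
    """Concatenate the two per-region descriptors into one vector."""
    return list(tica_features) + list(riesz_features)

region_tica = [0.12, 0.80, 0.05]   # e.g. unsupervised TICA descriptor
region_riesz = [1.5, -0.3]         # e.g. Riesz-wavelet energy descriptor
combined = combine_features(region_tica, region_riesz)
# combined is a 5-dimensional fused descriptor
```

The fused vectors then feed a standard classifier, which is what allows the two complementary representations to reinforce each other.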