Background and objectives: Spectral Domain Optical Coherence Tomography (SD-OCT) is a volumetric imaging technique that makes it possible to measure subtle findings between retinal layers, such as small amounts of fluid. Since 2012, the performance of automatic medical image analysis has steadily increased through the use of deep learning models that automatically learn relevant features for specific tasks instead of relying on manually designed visual features. Nevertheless, providing insights into, and interpretation of, the predictions made by such models remains a challenge. This paper describes a deep learning model able to detect medically interpretable information in relevant scans of a volume in order to classify diabetes-related retinal diseases.

Methods: This article presents a new deep learning model, OCT-NET, a customized convolutional neural network for processing scans extracted from optical coherence tomography volumes. OCT-NET is applied to the classification of three conditions seen in SD-OCT volumes. In addition, the model includes a feedback stage that highlights the areas of the scans that support the interpretation of the results. This information is potentially useful to a medical specialist when assessing the prediction produced by the model.
Results: The proposed model was tested on the public SERI-CUHK and A2A SD-OCT data sets, containing healthy cases as well as cases of diabetic retinopathy, diabetic macular edema and age-related macular degeneration. The experimental evaluation shows that the proposed method outperforms state-of-the-art convolutional deep learning models reported on the SERI-CUHK and A2A SD-OCT data sets, with a precision of 93% and an area under the ROC curve (AUC) of 0.99, respectively.
Conclusions: The proposed method is able to classify the three studied retinal diseases with high accuracy. One advantage of the method is its ability to produce interpretable clinical information by highlighting the regions of the image that contribute most to the classifier's decision.
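Feedback stages of this kind are often implemented with class activation maps (CAMs): the last convolutional feature maps are combined using the classifier weights of the predicted class to produce a heatmap over the input scan. The abstract does not give OCT-NET's exact mechanism, so the following is only a generic NumPy sketch of the CAM idea, with toy shapes and random data standing in for real activations.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Weighted sum of the last conv layer's feature maps, using the
    classifier weights of the predicted class (generic CAM sketch).
    feature_maps: (C, H, W) activations; class_weights: (C,) weights."""
    cam = np.tensordot(class_weights, feature_maps, axes=(0, 0))  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # normalize to [0, 1] for overlay on the scan
    return cam

# Toy example: 8 feature maps over a 7x7 spatial grid (shapes are illustrative)
rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))
weights = rng.random(8)
heatmap = class_activation_map(fmaps, weights)
print(heatmap.shape)  # (7, 7)
```

The normalized heatmap can then be upsampled to the scan resolution and overlaid to show which regions drove the prediction.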
Physical exercise (PE) has become an essential tool in many rehabilitation programs. High-intensity exercises (HIEs) have been shown to yield better general health outcomes than low- and moderate-intensity exercises. In this context, monitoring a patient's condition is essential to avoid extreme fatigue, which may cause physical and physiological complications. Different methods have been proposed for fatigue estimation, such as monitoring the subject's physiological parameters and using subjective scales. However, there is still a need for practical procedures that provide an objective estimation, especially for HIEs. In this work, considering that the sit-to-stand (STS) exercise is one of the most widely used in physical rehabilitation, a computational model for estimating fatigue during this exercise is proposed. A study with 60 healthy volunteers was carried out to obtain a data set for developing and evaluating the proposed model. Following the literature, the model estimates three fatigue conditions (low, moderate, and high) by monitoring 32 STS kinematic features and the heart rate from a set of ambulatory sensors (Kinect and Zephyr sensors). Results show that a random forest model composed of 60 sub-classifiers achieved an accuracy of 82.5% in the classification task. Moreover, results suggest that the movement of the upper body is the most relevant feature for fatigue estimation; movements of the lower body and the heart rate also contribute essential information for identifying the fatigue condition. This work presents a promising tool for physical rehabilitation.
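The setup described above (a 60-tree random forest classifying three fatigue levels from 32 kinematic features plus heart rate) can be sketched with scikit-learn. The data below is synthetic and the train/test split is an assumption; the sketch only mirrors the model configuration, not the study's actual pipeline or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in data: 32 STS kinematic features + heart rate = 33 inputs
rng = np.random.default_rng(42)
X = rng.random((600, 33))
y = rng.integers(0, 3, size=600)  # 0 = low, 1 = moderate, 2 = high fatigue

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Random forest with 60 sub-classifiers, as in the study's configuration
clf = RandomForestClassifier(n_estimators=60, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(acc)
```

On random data the accuracy is near chance; the reported 82.5% comes from the study's real sensor features.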
Physical exercise contributes to the success of rehabilitation programs, including rehabilitation processes assisted by social robots. However, the amount and intensity of exercise needed to obtain positive results are unknown. Several considerations must be kept in mind for its implementation in rehabilitation; in particular, monitoring patients' exercise intensity is essential to avoid extreme fatigue, which may cause physical and physiological complications. Machine learning models have been applied to fatigue management, but their use in practice is limited by a lack of understanding of how an individual's performance deteriorates with fatigue, which can vary with the physical exercise, the environment, and the individual's characteristics. As a first step, this paper lays the foundation for a data analytic approach to managing fatigue in walking tasks. The proposed framework establishes the criteria for selecting features and a machine learning algorithm for fatigue management, classifying four fatigue states. Among the classifiers implemented within the framework, the random forest model showed the best performance, with an average accuracy of ≥98% and an F-score of ≥93%, using ≤16 features. In addition, prediction performance was analyzed when limiting the sensors used from four IMUs to two or even a single IMU, with an overall performance of ≥88%.
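The feature-selection step (reducing the model to at most 16 features) is commonly done by ranking random-forest feature importances and keeping the top ones. The abstract does not specify the exact procedure, so this is a hedged scikit-learn sketch on synthetic data; the feature counts per IMU are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Synthetic stand-in: e.g. 12 features per IMU x 4 IMUs = 48 candidate features
rng = np.random.default_rng(7)
X = rng.random((400, 48))
y = rng.integers(0, 4, size=400)  # four fatigue states

# Fit a forest, then keep only the 16 most important features
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
selector = SelectFromModel(rf, max_features=16, threshold=-np.inf, prefit=True)
X_sel = selector.transform(X)
print(X_sel.shape)  # (400, 16)
```

Setting `threshold=-np.inf` with `max_features=16` makes the selector keep exactly the 16 highest-importance features; a classifier can then be retrained on the reduced feature set.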
Glaucoma is an eye condition that leads to loss of vision and blindness if not diagnosed in time. Diagnosis requires human experts to estimate, in a limited time, subtle changes in the shape of the optic disc from retinal fundus images. Deep learning methods have performed satisfactorily in classifying and segmenting diseases in retinal fundus images, helping to analyze the growing number of images. Model training requires extensive annotations to achieve successful generalization, which can be highly problematic given the cost of expert annotations. This work aims at designing and training a novel multi-task deep learning model that leverages the similarities of related eye-fundus tasks and measurements used in glaucoma diagnosis. The model simultaneously learns different segmentation and classification tasks, thus benefiting from their similarity. The evaluation of the method on a retinal fundus glaucoma challenge dataset, including 1200 retinal fundus images from different cameras and medical centers, obtained an AUC of 96.76 ± 0.96, compared with 93.56 ± 1.48 obtained by the same backbone network trained to detect glaucoma alone. Our approach outperforms other multi-task learning models, and its performance pairs with trained experts while using ∼3.5 times fewer parameters than training each task separately. The data and the code for reproducing our results are publicly available.
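A multi-task setup of this kind typically shares one encoder between a classification head and a segmentation head and trains on a weighted sum of the task losses, which is how parameters are saved versus separate networks. The paper's exact architecture is not given in the abstract, so the following is a self-contained NumPy sketch of that pattern; all shapes, weights, and the 1.0/0.5 task weighting are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def encoder(x, W):                  # stand-in for the shared backbone
    return np.maximum(x @ W, 0)     # (N, D) -> (N, F), ReLU features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

N, D, F, P = 4, 64, 16, 32          # batch, input dim, features, mask pixels
x = rng.random((N, D))
W_enc = rng.normal(size=(D, F)) * 0.1
W_cls = rng.normal(size=(F, 1)) * 0.1   # glaucoma-classification head
W_seg = rng.normal(size=(F, P)) * 0.1   # optic-disc-segmentation head

feats = encoder(x, W_enc)               # computed once, shared by both heads
p_cls = sigmoid(feats @ W_cls)          # (N, 1) glaucoma probability
p_seg = sigmoid(feats @ W_seg)          # (N, P) per-pixel disc probability

y_cls = rng.integers(0, 2, (N, 1)).astype(float)
y_seg = rng.integers(0, 2, (N, P)).astype(float)

def bce(p, y):  # binary cross-entropy, clipped for numerical safety
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Joint objective: weighted sum of the two task losses (weights are illustrative)
joint_loss = 1.0 * bce(p_cls, y_cls) + 0.5 * bce(p_seg, y_seg)
print(joint_loss)
```

Because the backbone is computed once for both heads, the combined model needs far fewer parameters than two independently trained networks, which is the source of the roughly 3.5-fold saving reported above.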