Glaucoma is a degenerative disease that damages the optic nerve and ultimately leads to vision loss. Classic detection techniques have changed greatly since the introduction of machine learning into the processing of eye fundus images. Several works focus on training a convolutional neural network (CNN) by brute force, while others use segmentation and feature extraction techniques to detect glaucoma. In this work, a diagnostic aid tool to detect glaucoma from eye fundus images is developed, trained and tested. It consists of two subsystems that are trained and tested independently, and whose results are combined to improve glaucoma detection. The first subsystem applies machine learning and segmentation techniques to detect the optic disc and cup independently, combine them and extract their physical and positional features. The second applies transfer learning to a pre-trained CNN that detects glaucoma by analyzing the complete eye fundus image. The results of both subsystems are combined to discriminate positive cases of glaucoma and improve the final detection. The results show that this system achieves a higher classification rate than previous works. The system also reports the basis for the proposed diagnosis, which can help the ophthalmologist accept or modify it. INDEX TERMS Glaucoma, ensemble networks, medical diagnostic aids, medical imaging, explainable AI.
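The fusion of the two subsystems' outputs could be sketched as follows. This is a minimal illustration, not the paper's actual fusion rule: the function name, the weighted-average scheme, the weights and the decision threshold are all assumptions.

```python
# Hedged sketch: fusing the outputs of the two subsystems (assumed here
# to be glaucoma probabilities in [0, 1]) into a final decision.
# The equal weighting and the 0.5 threshold are illustrative choices,
# not the method described in the abstract.

def combine_predictions(p_features, p_cnn, w_features=0.5, threshold=0.5):
    """Fuse the feature-based and CNN-based glaucoma probabilities.

    Returns the combined probability and a boolean positive/negative call.
    """
    p_final = w_features * p_features + (1.0 - w_features) * p_cnn
    return p_final, p_final >= threshold

# Example: feature subsystem says 0.62, CNN subsystem says 0.80.
p, positive = combine_predictions(0.62, 0.80)  # p == 0.71, positive == True
```

An advantage of keeping the fusion step separate is that each subsystem can still be inspected on its own, which supports the explainability goal stated in the abstract.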
Medical images from different clinics are acquired with different instruments and settings. To perform segmentation on these images as a cloud-based service, we need to train with multiple datasets so that segmentation is as independent as possible of the image source. We also require an efficient and fast segmentation network. In this work these two problems, which are essential for many practical medical imaging applications, are studied. U-Net has been selected as the segmentation network: a class of deep neural networks that has been shown to be effective for medical image segmentation, and for which many different implementations have been proposed. With the recent development of tensor processing units (TPUs), the execution times of these algorithms can be drastically reduced, which makes them attractive for cloud services. In this paper, we study, using Google's publicly available Colab environment, a generalized, fully configurable Keras U-Net implementation that uses Google TPU processors for training and prediction. As our application problem, we use the segmentation of the optic disc and cup, which can be applied to glaucoma detection. To obtain networks that perform well independently of the image acquisition source, we combine multiple publicly available datasets (RIM-ONE v3, DRISHTI and DRIONS). As a result of this study, we have developed a set of functions that implement generalized U-Nets adapted to TPU execution and suitable for cloud-based service implementation.
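A configurable Keras U-Net of the kind described could be sketched as below. This is a generic minimal version under assumed defaults (input size, depth, filter counts), not the authors' implementation; the actual paper's functions also handle TPU-specific configuration, which is omitted here.

```python
# Hedged sketch: a minimal, configurable U-Net builder in tf.keras.
# depth, base_filters and the 128x128 input are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers


def build_unet(input_shape=(128, 128, 3), depth=3, base_filters=16, n_classes=2):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    skips = []
    # Encoder: double conv blocks, saving a skip tensor before each downsample.
    for d in range(depth):
        f = base_filters * 2 ** d
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    # Bottleneck.
    x = layers.Conv2D(base_filters * 2 ** depth, 3, padding="same", activation="relu")(x)
    # Decoder: upsample, concatenate the matching skip, then convolve.
    for d in reversed(range(depth)):
        f = base_filters * 2 ** d
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[d]])
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    # Per-pixel class probabilities (e.g. disc/cup vs background).
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)


model = build_unet()  # output shape: (None, 128, 128, 2)
```

Running the same model-building code on a TPU in Colab mainly requires wrapping construction and compilation in a `tf.distribute.TPUStrategy` scope; the network definition itself is unchanged.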
Falls are the most common cause of fatal injuries in elderly people and can even lead to death if there is no immediate assistance. Fall detection systems can alert and request help when this type of accident happens. Some of these systems are wearable devices that analyze biomedical signals from the person carrying them in real time. Deep learning algorithms can automate and improve the detection of unintentional falls by analyzing these signals, and they have achieved high effectiveness and competitive performance in many classification problems. This work studies 16 recurrent neural network architectures (using long short-term memory and gated recurrent units) for fall detection based on accelerometer data, reducing the computational requirements of previous research. The architectures have been tested on a labeled version of the publicly available SisFall dataset, achieving a mean F1-score above 0.73 and improving on state-of-the-art solutions in terms of network complexity.
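Before an RNN can classify accelerometer data, the continuous signal is typically segmented into fixed-length windows. The sketch below shows this common preprocessing step; the window length and overlap are assumptions for illustration, not the parameters used in the work.

```python
# Hedged sketch: segmenting a triaxial accelerometer stream into
# overlapping fixed-length windows, the usual input format for an
# LSTM/GRU classifier. window and step sizes are illustrative.
import numpy as np


def make_windows(signal, window=256, step=128):
    """Split a (n_samples, 3) accelerometer array into overlapping
    windows, returning an array of shape (n_windows, window, 3)."""
    n = (len(signal) - window) // step + 1
    return np.stack([signal[i * step: i * step + window] for i in range(n)])


# 1024 triaxial samples -> 7 half-overlapping windows of 256 samples.
windows = make_windows(np.zeros((1024, 3)))  # shape (7, 256, 3)
```

Each window can then be fed to the recurrent network as one sequence, with the label indicating whether a fall occurs within it.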
Falls are one of the leading causes of permanent injury and disability among the elderly. When these people live alone, it is convenient for a caregiver or family member to visit them periodically. However, these visits do not prevent falls when the elderly person is alone, and in exceptional circumstances, such as a pandemic, unnecessary mobility must be avoided. This is why remote monitoring systems are currently on the rise, and several commercial solutions can be found. However, current solutions use devices attached to the waist or wrist, which are uncomfortable and which users tend to forget to wear. Therefore, the main objective of this work is to design and collect a new dataset of falls, fall risks and activities of daily living using an ankle-placed device, obtaining a good balance between the different activity types. This dataset will be a useful tool for researchers who want to integrate a fall detector into footwear. Thus, in this work we design the fall-detection device, study the suitable activities to be collected, collect the dataset from 21 users performing those activities and evaluate the quality of the collected dataset. As an additional, secondary study, we implement a simple deep learning classifier based on this data to prove the system's feasibility.
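A quantity often computed when validating accelerometer fall data is the signal magnitude vector, which collapses the three axes into one orientation-independent series. The sketch below is a generic illustration of that feature, not necessarily part of this work's evaluation pipeline.

```python
# Hedged sketch: the signal magnitude vector (SMV), a common
# orientation-independent feature for accelerometer fall data.
# Its use here is illustrative, not taken from the abstract.
import numpy as np


def signal_magnitude(acc):
    """Per-sample Euclidean norm of a (n_samples, 3) acceleration array."""
    return np.sqrt((acc ** 2).sum(axis=1))


# A sample with components (3, 4, 0) m/s^2 has magnitude 5 m/s^2.
smv = signal_magnitude(np.array([[3.0, 4.0, 0.0]]))  # -> [5.0]
```

Large spikes in this magnitude series are a simple sanity check that the collected fall recordings actually contain the expected impact events.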