Deep reinforcement learning (DRL) has proven to be an effective tool for creating general video-game AI. However, most current DRL video-game agents learn end-to-end from the video output of the game, which is superfluous for many applications and creates a number of additional problems. More importantly, working directly on raw pixel data is substantially different from what a human player does. In this paper, we present a novel method that enables DRL agents to learn directly from object information. This is achieved with an object embedding network (OEN), which compresses a set of object feature vectors of different lengths into a single fixed-length unified feature vector representing the current game state while simultaneously performing the DRL task. We evaluate our OEN-based DRL agent against several state-of-the-art approaches on a selection of games from the GVG-AI Competition. Experimental results suggest that our object-based DRL agent achieves performance comparable to that of the approaches used in our comparative study.
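The set-to-vector compression described above can be sketched as a permutation-invariant set encoder in the spirit of the OEN. This is an illustrative sketch, not the authors' architecture: it assumes each on-screen object is described by a fixed-length feature vector while the number of objects varies, and all names and dimensions are made up for the example.

```python
import numpy as np

def object_embedding(objects, w_embed, w_out):
    """Compress a variable-size set of object feature vectors into one
    fixed-length game-state vector (Deep Sets-style sketch).

    objects: list of 1-D feature arrays, one per on-screen object
    w_embed: (d_in, d_hidden) shared per-object embedding weights
    w_out:   (d_hidden, d_state) output projection weights
    """
    # Embed each object independently with shared weights (ReLU activation).
    embedded = [np.maximum(0.0, obj @ w_embed) for obj in objects]
    # Sum-pool: the result is invariant to object ordering and set size.
    pooled = np.sum(embedded, axis=0)
    # Project to the fixed-length state representation fed to the DRL agent.
    return pooled @ w_out
```

Sum pooling is what makes the encoder indifferent to how many objects are on screen and in what order they are listed, which is the property the abstract's "set of object feature vectors" requires.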
Introduction: Inherited retinal diseases (IRDs) are a leading cause of visual impairment and blindness in the working-age population. Mutations in over 300 genes have been found to be associated with IRDs, and identifying the affected gene in patients by molecular genetic testing is the first step towards effective care and patient management. However, genetic diagnosis is currently slow, expensive and not widely accessible. The aim of the current project is to address the evidence gap in IRD diagnosis with an AI algorithm, Eye2Gene, to accelerate and democratise the IRD diagnosis service. Methods and analysis: This data-only retrospective cohort study involves a target sample size of 10 000 participants, derived from the number of participants with IRD at three leading UK eye hospitals: Moorfields Eye Hospital (MEH), Oxford University Hospital (OUH) and Liverpool University Hospital (LUH), as well as a Japanese hospital, the Tokyo Medical Centre (TMC). Eye2Gene aims to predict causative genes from retinal images of patients with a diagnosis of IRD. For this purpose, the 36 most common causative IRD genes have been selected for the training dataset, so that the software has enough examples for training and validation for each gene. The Eye2Gene algorithm is composed of multiple deep convolutional neural networks, which will be trained on the MEH IRD dataset and externally validated on the OUH, LUH and TMC datasets. Ethics and dissemination: This research was approved by the IRB and the UK Health Research Authority (Research Ethics Committee reference 22/WA/0049; 'Eye2Gene: accelerating the diagnosis of IRDs', Integrated Research Application System (IRAS) project ID: 242050). All research adhered to the tenets of the Declaration of Helsinki. Findings will be reported in an open-access format.
Rare eye diseases such as inherited retinal diseases (IRDs) are challenging to diagnose genetically. IRDs are typically monogenic disorders and represent a leading cause of blindness in children and working-age adults worldwide. A growing number are now being targeted in clinical trials, with approved treatments increasingly available. However, access requires a genetic diagnosis to be established sufficiently early. Critically, the timely identification of a genetic cause remains challenging. We demonstrate that a deep-learning algorithm, Eye2Gene, trained on the largest imaging dataset of patients with IRDs currently available, provides expert-level accuracy for genetic diagnosis for the 36 most common molecular causes (top-5 accuracy = 85.6%). This algorithm has been deployed online (app.eye2gene.com) and externally validated on data provided by four different clinical centers. Eye2Gene can facilitate access to diagnostic expertise currently available only in a limited number of specialist centers globally, and thereby dramatically shorten the genetic diagnostic odyssey.
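The top-5 accuracy reported above counts a case as correct when the true causative gene appears among the model's five highest-ranked candidates. A minimal sketch of the metric (illustrative names only, not the Eye2Gene codebase):

```python
import numpy as np

def top_k_accuracy(probs, labels, k=5):
    """Fraction of cases where the true gene is among the k
    highest-probability predictions.

    probs:  (n_cases, n_genes) array of predicted probabilities
    labels: (n_cases,) integer index of the true causative gene
    """
    probs = np.asarray(probs)
    labels = np.asarray(labels)
    # Indices of the k largest probabilities per case (order irrelevant).
    top_k = np.argpartition(probs, -k, axis=1)[:, -k:]
    # A case is a hit if its true label appears anywhere in its top-k set.
    hits = (top_k == labels[:, None]).any(axis=1)
    return hits.mean()
```

A top-k metric suits this setting because even a short ranked shortlist of candidate genes can usefully guide confirmatory molecular testing.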
Purpose: Inherited retinal diseases (IRDs) are single-gene disorders caused by genetic mutations in any one of over 270 genes. Identifying the causative gene through genetic testing is crucial for gene-targeted treatments, recruitment to clinical trials, prognosis and family planning. The prescription and interpretation of genetic results requires phenotype-genotype recognition that only IRD experts can provide; this has motivated AI approaches that predict the probable causative IRD gene from the retinal scans of suspected IRD patients. However, these AI approaches are currently "black boxes" that offer neither clinical interpretability nor the fine-grained phenotypic information essential for prognosis. We therefore sought to develop an AI algorithm capable of automatically identifying and quantifying IRD-specific features in retinal scans. Methods: To build a training dataset for the AI algorithm, a grading protocol was drafted defining retinal features important in the identification of the 36 most common IRD genes. Optical coherence tomography (OCT) and fundus autofluorescence (FAF) scans were manually segmented by four graders over three rounds of grading, including adjudication, feedback and protocol clarification where required. This iterative process was followed to ensure good inter-grader agreement on each feature, assessed using the Dice score metric. Features that were too difficult or laborious to annotate were converted to labels. Using the manually segmented data, an AI algorithm known as a U-net was trained to automatically segment 15 features. The number, size and brightness of the automatically identified features were quantified and compared across the 36 gene classes. Results: A total of 3527 scan-features were manually annotated across scans of 36 genes. The inter-grader Dice scores ranged from 0.30 to 0.91, with an average of 0.54.
Features with the best agreement were anatomical features such as the whole retina on OCT (0.91) and the optic disc on FAF (0.87), and these also yielded the best predictions from the segmentation model (0.87 and 0.82, respectively). Pathological features with good inter-grader agreement included ellipsoid zone loss (0.78) and hypoautofluorescence (0.65); those with poorer inter-grader agreement included hyperautofluorescence (0.30) and retinal pigment epithelium loss (0.42). The segmentation model achieved an average Dice score of 0.71 across all features. Statistically significant differences were found in feature count, size and brightness between the 36 gene classes, and a prediction accuracy of 58% was achieved using random forests to predict the correct causative gene from 36 genes using these features. Using a "black box" AI approach, the gene prediction accuracy was 66%. Conclusions: Automated segmentation of features in IRD scans using AI is feasible and leads to interpretable prediction of disease-associated IRD genes. However, "black box" prediction can still achieve higher accuracy at the expense of interpretability. Fur...
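The Dice score used above, both for inter-grader agreement and for segmentation quality, measures the overlap between two binary masks: 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical masks). A minimal sketch:

```python
import numpy as np

def dice_score(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two binary segmentation masks.

    Computes 2*|A intersect B| / (|A| + |B|); eps guards against
    division by zero when both masks are empty.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + eps)
```

For example, two graders who agree on one of a feature's two marked pixels score 2·1 / (2 + 1) ≈ 0.67, which helps calibrate the 0.30-0.91 inter-grader range reported above.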