Artificial intelligence technology has advanced rapidly in recent years and has the potential to improve healthcare outcomes. However, technology uptake will be largely driven by clinicians, and there is a paucity of data regarding clinicians' attitudes toward this new technology. In June–August 2019 we conducted an online survey on artificial intelligence of fellows and trainees of three specialty colleges (ophthalmology, radiology/radiation oncology, dermatology) in Australia and New Zealand. There were 632 complete responses (n = 305, 230, and 97, respectively), equating to response rates of 20.4%, 5.1%, and 13.2% for the above colleges, respectively. The majority believed artificial intelligence would improve their field of medicine (n = 449, 71.0%) and that medical workforce needs would be impacted by the technology within the next decade (n = 542, 85.8%). Improved disease screening and the streamlining of monotonous tasks were identified as key benefits of artificial intelligence. The divestment of healthcare to technology companies and medical liability implications were the greatest concerns. Education was identified as a priority to prepare clinicians for the implementation of artificial intelligence in healthcare. This survey highlights parallels between the perceptions of different clinician groups in Australia and New Zealand about artificial intelligence in medicine. Artificial intelligence was recognized as a valuable technology that will have wide-ranging impacts on healthcare.
OBJECTIVE: The goal of this study was to describe the development and validation of an artificial intelligence-based, deep learning algorithm (DLA) for the detection of referable diabetic retinopathy (DR). RESEARCH DESIGN AND METHODS: A DLA using a convolutional neural network was developed for automated detection of vision-threatening referable DR (preproliferative DR or worse, diabetic macular edema, or both). The DLA was tested by using a set of 106,244 nonstereoscopic retinal images. A panel of ophthalmologists graded DR severity in retinal photographs included in the development and internal validation data sets (n = 71,043); a reference standard grading was assigned once three graders achieved consistent grading outcomes. For external validation, we tested our DLA using 35,201 images of 14,520 eyes (904 eyes with any DR; 401 eyes with vision-threatening referable DR) from population-based cohorts of Malays, Caucasian Australians, and Indigenous Australians. RESULTS: Among the 71,043 retinal images in the training and validation data sets, 12,329 showed vision-threatening referable DR. In the internal validation data set, the area under the curve (AUC), sensitivity, and specificity of the DLA for vision-threatening referable DR were 0.989, 97.0%, and 91.4%, respectively. Testing against the independent, multiethnic data set achieved an AUC, sensitivity, and specificity of 0.955, 92.5%, and 98.5%, respectively. Among false-positive cases, 85.6% were due to a misclassification of mild or moderate DR. Undetected intraretinal microvascular abnormalities accounted for 77.3% of all false-negative cases. CONCLUSIONS: This artificial intelligence-based DLA can be used with high accuracy in the detection of vision-threatening referable DR in retinal images. This technology offers potential to increase the efficiency and accessibility of DR screening programs.
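The sensitivity and specificity figures reported throughout these studies follow directly from the confusion-matrix counts at the chosen operating point. A minimal sketch with hypothetical counts (not the study's data; the numbers are chosen only to illustrate the arithmetic):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).

    tp/fn: referable cases the model caught / missed.
    tn/fp: non-referable cases correctly / incorrectly flagged.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity


# Hypothetical counts for illustration only (not from the study).
sens, spec = sensitivity_specificity(tp=970, fn=30, tn=914, fp=86)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
# → sensitivity=97.0%, specificity=91.4%
```

Note that both metrics depend on the score threshold chosen for "referable"; the AUC summarizes performance across all thresholds.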
The purpose of this study is to evaluate the feasibility and patient acceptability of a novel artificial intelligence (AI)-based diabetic retinopathy (DR) screening model within endocrinology outpatient settings. Adults with diabetes were recruited from two urban endocrinology outpatient clinics, and single-field, non-mydriatic fundus photographs were taken and graded for referable DR ( ≥ pre-proliferative DR). Each participant underwent (1) an automated screening model, in which a deep learning algorithm (DLA) provided real-time reporting of results, and (2) a manual model, in which retinal images were transferred to a retinal grading centre and manual grading outcomes were distributed to the patient within 2 weeks of assessment. Participants completed a questionnaire on the day of examination and 1 month after assessment to determine overall satisfaction and the preferred model of care. In total, 96 participants were screened for DR, and the mean assessment time for automated screening was 6.9 minutes. Ninety-six percent of participants reported that they were either satisfied or very satisfied with the automated screening model, and 78% reported that they preferred the automated model over the manual one. The sensitivity and specificity of the DLA for correct referral were 92.3% and 93.7%, respectively. AI-based DR screening in endocrinology outpatient settings appears to be feasible and well accepted by patients.
Purpose To describe the development and validation of a smartphone-based visual acuity (VA) test called Vision at home (V@home). Methods Three study populations (elderly Chinese, adolescent Chinese, and Australian groups) underwent distance and near VA testing using standard Early Treatment Diabetic Retinopathy Study (ETDRS) charts and the V@home device; all VA tests used tumbling E optotypes. VA tests were repeated with one eye, selected randomly. Distance VA was measured monocularly at 2 m, and near VA was measured binocularly at 40 cm. Participants also completed a questionnaire about their satisfaction with the device. V@home VA (logMAR) was compared with ETDRS chart VA at distance and near, and test-retest reliability was assessed. Results The mean difference between V@home and ETDRS distance VA across all groups ranged from −0.010 to −0.100 logMAR. Tolerant weighted kappa (TWK) agreement ranged from substantial (0.742) in the Australian group to almost perfect (0.950) in the adolescent Chinese group. There was high agreement of V@home with near ETDRS VA across all groups, with a mean difference of −0.092 to −0.042 logMAR and a TWK of 0.736 to 0.837. Test-retest reliability was also high (difference: −0.018 to 0.026) for both distance and near VA tests (95% limits of agreement: −0.289 to 0.258 for distance and −0.235 to 0.199 for near). The majority of participants were satisfied with V@home. Conclusions V@home could accurately and reliably measure both distance and near VA and is well accepted by participants. Translational Relevance The V@home system could potentially serve as a useful tool to improve eye care accessibility, especially in underdeveloped areas with limited eye care personnel and resources.
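Agreement statistics of the kind reported above (a mean difference between two instruments plus 95% limits of agreement) are conventionally computed Bland-Altman style, as mean ± 1.96 SD of the paired differences. A minimal sketch with made-up paired logMAR readings, not the study's data:

```python
import math


def limits_of_agreement(a, b):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD)
    between paired measurements, e.g. V@home vs. ETDRS logMAR VA."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample standard deviation of the paired differences.
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd


# Hypothetical paired logMAR readings for illustration only.
vhome = [0.10, 0.20, 0.00, 0.30, 0.12]
etdrs = [0.12, 0.18, 0.04, 0.28, 0.10]
mean_diff, lower, upper = limits_of_agreement(vhome, etdrs)
print(f"mean diff={mean_diff:+.3f} logMAR, 95% LoA=({lower:.3f}, {upper:.3f})")
```

Narrow limits of agreement (as in the study's −0.289 to 0.258 for distance VA) indicate that an individual's two measurements rarely differ by more than that span.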
The purpose of this study was to develop a 3D deep learning system from spectral domain optical coherence tomography (SD-OCT) macular cubes to differentiate between referable and nonreferable cases of glaucoma, and to apply it to real-world datasets to understand how this would affect performance. Methods: A total of 2805 Cirrus optical coherence tomography (OCT) macula volumes (Macula protocol 512 × 128) of 1095 eyes from 586 patients at a single site were used to train a fully 3D convolutional neural network (CNN). Referable glaucoma, the binary (two-class) ground truth, included true glaucoma, pre-perimetric glaucoma, and high-risk suspects, based on qualitative fundus photographs, visual fields, OCT reports, and clinical examinations, including intraocular pressure (IOP) and treatment history. The curated real-world dataset did not include eyes with retinal disease or nonglaucomatous optic neuropathies. The cubes were first homogenized using layer segmentation with the Orion software (Voxeleron) to achieve standardization. The algorithm was tested on two separate external validation sets from different glaucoma studies, comprising Cirrus macular cube scans of 505 and 336 eyes, respectively. Results: The area under the receiver operating characteristic curve (AUROC) for the development dataset for distinguishing referable glaucoma was 0.88 for our CNN with homogenization, 0.82 without homogenization, and 0.81 for a CNN architecture from the existing literature. For the external validation datasets, which had different glaucoma definitions, the AUCs were 0.78 and 0.95, respectively.
The performance of the model across myopia severity was assessed in the dataset from the United States, with AUCs of 0.85, 0.92, and 0.95 for severe, moderate, and mild myopia, respectively. Conclusions: A 3D deep learning algorithm trained on macular OCT volumes without retinal disease to detect referable glaucoma performs better with retinal segmentation preprocessing and performs reasonably well across all levels of myopia. Translational Relevance: Interpretation of OCT macula volumes based on normative data color distributions is highly influenced by population demographics and characteristics, such as refractive error, as well as by the size of the normative database. Referable glaucoma, in this study, was chosen to include cases that should be seen by a specialist. This study is unique in that it uses multimodal patient data for the glaucoma definition, includes all severities of myopia, and validates the algorithm with international data to assess generalizability.
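The AUROC values reported in these studies have a useful rank interpretation: the probability that a randomly chosen referable case receives a higher model score than a randomly chosen non-referable case (the Mann-Whitney statistic). A minimal pure-Python sketch with hypothetical scores, not the study's outputs:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs in which the positive case scores
    higher than the negative one (ties count as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))


# Hypothetical model scores for illustration only.
pos = [0.9, 0.8, 0.75, 0.6]       # referable cases
neg = [0.7, 0.4, 0.3, 0.2, 0.1]   # non-referable cases
print(auroc(pos, neg))  # → 0.95
```

The quadratic pairwise loop is fine for a sketch; production metric libraries compute the same quantity from sorted ranks in O(n log n).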
This study investigated the diagnostic performance, feasibility, and end-user experiences of an artificial intelligence (AI)-assisted diabetic retinopathy (DR) screening model in real-world Australian healthcare settings. The study consisted of two components: (1) DR screening of patients using an AI-assisted system and (2) in-depth interviews with health professionals involved in implementing screening. Participants with type 1 or type 2 diabetes mellitus attending two endocrinology outpatient and three Aboriginal Medical Services clinics between March 2018 and May 2019 were invited to participate in a prospective observational study. A single 45-degree (macula-centred), non-stereoscopic, colour retinal image was taken of each eye and instantly screened for referable DR using a custom offline automated AI system. A total of 236 participants, including 174 from endocrinology and 62 from Aboriginal Medical Services clinics, provided informed consent, and 203 (86.0%) were included in the analysis. A total of 33 consenting participants (14.0%) were excluded from the primary analysis due to ungradable or missing images from small pupils (n = 21, 63.6%), cataract (n = 7, 21.2%), poor fixation (n = 2, 6.1%), technical issues (n = 2, 6.1%), and corneal scarring (n = 1, 3.0%). The area under the curve, sensitivity, and specificity of the AI system for referable DR were 0.92, 96.9%, and 87.7%, respectively. There were 51 disagreements between the reference standard and index test diagnoses, including 29 that were manually graded as ungradable, 21 false positives, and one false negative. A total of 28 participants (11.9%) were referred for follow-up based on new ocular findings; among these, 15 (53.6%) were able to be contacted and 9 (60%) adhered to referral.
Of 207 participants who completed a satisfaction questionnaire, 93.7% specified they were either satisfied or extremely satisfied, and 93.2% specified they would be likely or extremely likely to use this service again. Clinical staff involved in screening most frequently noted that the AI system was easy to use and that the real-time diagnostic report was useful. Our study indicates that the AI-assisted DR screening model is accurate and well accepted by patients and clinicians in endocrinology and Indigenous healthcare settings. Future deployments of AI-assisted screening models would require consideration of downstream referral pathways.
Importance Detection of early-onset neovascular age‐related macular degeneration (AMD) is critical to protecting vision. Background To describe the development and validation of a deep‐learning algorithm (DLA) for the detection of neovascular age‐related macular degeneration. Design Development and validation of a DLA using retrospective datasets. Participants We developed and trained the DLA using 56 113 retinal images and an additional 86 162 images from an independent dataset to externally validate the DLA. All images were non‐stereoscopic and retrospectively collected. Methods The internal validation dataset was derived from real‐world clinical settings in China. Gold standard grading was assigned when consensus was reached by three individual ophthalmologists. The DLA classified 31 247 images as gradable and 24 866 as ungradable (poor quality or poor field definition). These ungradable images were used to create a classification model for image quality. Efficiency and diagnostic accuracy were tested using 86 162 images derived from the Melbourne Collaborative Cohort Study. Neovascular AMD and/or an ungradable outcome in one or both eyes was considered referable. Main Outcome Measures Area under the receiver operating characteristic curve (AUC), sensitivity and specificity. Results In the internal validation dataset, the AUC, sensitivity and specificity of the DLA for neovascular AMD were 0.995, 96.7% and 96.4%, respectively. Testing against the independent external dataset achieved an AUC, sensitivity and specificity of 0.967, 100% and 93.4%, respectively. More than 60% of false positive cases displayed other macular pathologies. Amongst the false negative cases (internal validation dataset only), over half (57.2%) proved to be undetected detachment of the neurosensory retina or RPE layer.
Conclusions and Relevance This DLA shows robust performance for the detection of neovascular AMD amongst retinal images from a multi‐ethnic sample and under different imaging protocols. Further research is warranted to investigate where this technology could be best utilized within screening and research settings.
PURPOSE. We investigated the impact of parental myopia on spherical equivalent (SE) progression and axial length (AL) elongation. METHODS. Children and their parents were invited for annual examinations from 2006 (baseline). Cycloplegic autorefraction and AL were measured at each visit. Parental refractive status was determined using refraction data from their baseline visit. Children were classified into five groups: no myopic parents (non-non), only one moderately myopic parent (non-moderate), only one highly myopic parent (non-high), two moderately myopic parents (moderate-moderate), and one moderately myopic or more severe and one highly myopic parent (moderate-high/high-high). The relationship between progression of SE and AL and parental refractive status was estimated by linear mixed-effects models. Data from 2006 to 2017 were analyzed in the current study. RESULTS. A total of 1831 children were enrolled (mean age, 11 ± 2.7 years; mean SE, −0.49 ± 2.16 diopters [D] at baseline). Myopia progressed faster in children with parental myopia (non-non group as reference, all P < 0.05), while AL elongation mirrored the change in SE (all P < 0.001, except for the non-moderate group, P = 0.12). As for the age-specific change in SE and AL, children in the moderate-high/high-high group presented with the fastest progression. Children with highly myopic parents were at higher risk of being highly myopic during adulthood (odds ratio = 13.98 and 25.71 for the non-high and moderate-high/high-high groups, respectively; both P < 0.001). CONCLUSIONS. SE progresses and AL elongates at a faster rate and an earlier age in children with parental myopia. Children with highly myopic parents have a higher risk of being highly myopic during adulthood.
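Odds ratios like those reported above come from a 2×2 exposure-outcome table: OR = (a·d) / (b·c), the odds of the outcome among the exposed divided by the odds among the unexposed. A sketch with hypothetical counts (not the cohort's data):

```python
def odds_ratio(exposed_cases, exposed_controls,
               unexposed_cases, unexposed_controls):
    """Odds ratio from a 2x2 table, e.g. parental high myopia
    (exposure) vs. high myopia in adulthood (outcome).

    OR = (exposed_cases * unexposed_controls) /
         (exposed_controls * unexposed_cases)
    """
    return (exposed_cases * unexposed_controls) / (
        exposed_controls * unexposed_cases)


# Hypothetical counts for illustration only.
print(odds_ratio(exposed_cases=40, exposed_controls=60,
                 unexposed_cases=30, unexposed_controls=630))  # → 14.0
```

In the study itself the odds ratios were estimated from regression models rather than a raw table, so they are adjusted for covariates; the 2×2 formula conveys only the underlying idea.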