2021
DOI: 10.1002/dmrr.3445
Deep learning‐based detection and stage grading for optimising diagnosis of diabetic retinopathy

Abstract: Aims To establish an automated method for identifying referable diabetic retinopathy (DR), defined as moderate nonproliferative DR and above, using deep learning‐based lesion detection and stage grading. Materials and Methods A set of 12,252 eligible fundus images of diabetic patients were manually annotated by 45 licensed ophthalmologists and were randomly split into training, validation, and internal test sets (ratio of 7:1:2). Another set of 565 eligible consecutive clinical fundus images was established as…
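As a rough illustration of the 7:1:2 split described in the abstract, a minimal Python sketch (the function, seed, and file-list input are assumptions, not the authors' code):

```python
import random

def split_dataset(image_paths, seed=42):
    """Randomly split fundus image paths into train/val/test at 7:1:2."""
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)
    n_train = int(0.7 * len(paths))
    n_val = int(0.1 * len(paths))
    return (paths[:n_train],                 # training set
            paths[n_train:n_train + n_val],  # validation set
            paths[n_train + n_val:])         # internal test set
```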

Cited by 19 publications (18 citation statements) · References 38 publications (78 reference statements)
“…They reported SE values of 60.7%, 49.5%, 28.3%, 36.3%, 57.3%, 8.7%, 79.8%, and 0.164 over PHE, Ex, VHE, NV, CWS, FIP, IHE, and MA, respectively. Quellec et al [81] focused on four lesions (CWS, Ex, HE, and MA) using a predefined DCNN architecture named o-O solution and reported SE values of 62.4%, 52.2%, 44.9%, and 31.6% for CWS, Ex, HE, and MA, respectively, slightly better for CWS and Ex than Wang et al [140] and considerably better on MA than Wang et al [141]. On the other hand, Wang et al [141] performed better in HE detection.…”
Section: Results · mentioning · confidence: 99%
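For reference, the SE figures quoted in these statements are per-lesion sensitivities. A minimal sketch of the metric (the counts below are illustrative, not taken from either cited paper):

```python
# Sensitivity (SE) = TP / (TP + FN), computed per lesion type.
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of annotated lesions the detector actually finds."""
    return true_positives / (true_positives + false_negatives)

# Illustrative only: a detector that finds 624 of 1,000 annotated
# cotton-wool spots (CWS) has SE = 0.624, i.e. the 62.4% quoted above.
print(sensitivity(624, 376))  # 0.624
```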
“…Quellec et al 2017 [81] focused on four lesions (CWS, Ex, HE, and MA) using a predefined DCNN architecture named o-O solution and reported SE values of 62.4%, 52.2%, 44.9%, and 31.6% for CWS, Ex, HE, and MA, respectively, slightly better for CWS and Ex, and considerably better for MA, than Wang et al 2021 [139]. On the other hand, Wang et al performed better in HE detection.…”
mentioning · confidence: 99%
“…In our previous studies, during the DR reading training of the AI dataset, we calculated the overall kappa scores for doctors of different seniorities. Seventeen attendings and six consultants in the fundus speciality read 20,503 fundus photographs, and the overall kappa scores were 0.67 for attendings and 0.71 for consultants [18, 21]. In our training, the overall kappa score rose from 0.67 to 0.81, higher than both the attendings and the consultants, despite the trainees' lower level of professional training.…”
Section: Discussion · mentioning · confidence: 99%
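The agreement scores quoted above are Cohen's kappa values. A minimal sketch of how such a score is computed (the stage labels below are invented for illustration, not the actual reading data):

```python
# Cohen's kappa: inter-grader agreement corrected for chance agreement.
from sklearn.metrics import cohen_kappa_score

# Hypothetical DR stage grades (0-4) assigned to ten fundus photographs
# by two readers; the real study used 20,503 photographs.
grader_a = [0, 1, 2, 2, 3, 0, 1, 4, 2, 0]
grader_b = [0, 1, 2, 3, 3, 0, 0, 4, 2, 0]

print(cohen_kappa_score(grader_a, grader_b))  # ~0.74 for these labels
```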
“…In recent years, owing to the rapid development of artificial intelligence (AI), machine-learning techniques have come to play a significant role in DR screening, achieving high sensitivity and specificity by learning from large fundus photograph training data sets [10–18]. However, those training data sets required manual annotation by qualified specialists, and the AI reading results also needed confirmation by retina experts.…”
Section: Introduction · mentioning · confidence: 99%
“…The deep learning classification technique makes use of the ResNet v2 CNN architecture (24), which was trained on small patches extracted from whole ear endoscopy images before being applied to the complete images. A total of four deep learning models were trained for autonomous referable diabetic retinopathy detection, depending on whether two components were included: DR-related lesions and DR stage grading (25). Table 1 presents a summary of selected works.…”
Section: Related Work · mentioning · confidence: 99%
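A rough sketch of that train-on-patches, apply-to-whole-images pattern, using Keras ResNet50V2 as a stand-in for the cited "ResNet v2" backbone (patch size, class count, and training details are assumptions, not from the cited works):

```python
import tensorflow as tf

PATCH = 64  # assumed size of the small training patches

# Fully convolutional backbone with global pooling, so the same weights
# accept any input resolution (small patches or complete images).
backbone = tf.keras.applications.ResNet50V2(
    include_top=False, weights=None, pooling="avg",
    input_shape=(None, None, 3),
)
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. lesion vs. normal
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Phase 1: train on PATCH x PATCH crops, e.g. model.fit(patch_batches, ...)
# Phase 2: run the trained model directly on complete images; global
# average pooling absorbs the change in spatial size.
```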