2021
DOI: 10.1136/bmjophth-2021-000824

Keratoconus detection of changes using deep learning of colour-coded maps

Abstract: Objective: To evaluate the accuracy of the convolutional neural network (CNN) technique in detecting keratoconus using colour-coded corneal maps obtained by a Scheimpflug camera. Design: Multicentre retrospective study. Methods and analysis: We included the images of keratoconic and healthy volunteers’ eyes provided by three centres: Royal Liverpool University Hospital (Liverpool, UK), Sedaghat Eye Clinic (Mashhad, Iran) and The New Zealand National Eye Center (New Zealand). Corneal tomography scans were used to train an…

Cited by 35 publications (34 citation statements)
References 72 publications
“…Feng et al proposed an end-to-end deep learning approach utilizing raw data obtained by the Pentacam system for keratoconus and subclinical keratoconus detection (9). Chen et al also showed that a convolutional neural network provides excellent performance for keratoconus detection and grading classification using the axial, anterior elevation, posterior elevation and pachymetry maps obtained by the Scheimpflug camera (10). We assume that a color-coded map has advantages over numeric values for machine learning, because it conveys more anterior corneal curvature information than those numeric values alone.…”
Section: Discussion (mentioning, confidence: 99%)
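The intuition that a colour-coded map carries more usable anterior-curvature information than a handful of numeric indices can be illustrated by rendering a grid of curvature values as an RGB image. A minimal sketch in plain Python follows; the colour stops and the 38–48 D range are illustrative assumptions, not the actual Scheimpflug palette:

```python
# Sketch: render a grid of corneal curvature values (dioptres) as a
# colour-coded map by piecewise-linear interpolation between colour stops.
# The stops and the 38-48 D range are assumptions for illustration only.

STOPS = [  # (value in dioptres, (R, G, B))
    (38.0, (0, 0, 255)),    # flat cornea -> blue
    (43.0, (0, 255, 0)),    # average curvature -> green
    (45.5, (255, 255, 0)),  # steepening -> yellow
    (48.0, (255, 0, 0)),    # keratoconic steepening -> red
]

def curvature_to_rgb(d):
    """Map one dioptre value to an RGB triple, clamping outside the range."""
    if d <= STOPS[0][0]:
        return STOPS[0][1]
    if d >= STOPS[-1][0]:
        return STOPS[-1][1]
    for (v0, c0), (v1, c1) in zip(STOPS, STOPS[1:]):
        if v0 <= d <= v1:
            t = (d - v0) / (v1 - v0)
            return tuple(round(a + t * (b - a)) for a, b in zip(c0, c1))

def render_map(grid):
    """Convert a 2-D grid of dioptre values into a 2-D grid of RGB pixels."""
    return [[curvature_to_rgb(d) for d in row] for row in grid]

# A 2x2 toy "cornea": normal curvature on the left, a steep cone on the right.
pixels = render_map([[43.0, 48.5],
                     [42.0, 47.0]])
```

Every pixel of such a map encodes a curvature sample, which is the sense in which a CNN trained on the image sees far more of the anterior surface than a classifier fed only a few summary indices.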
“…Another study trained an ensemble CNN on tomography measurements to differentiate between normal eyes and early, moderate and advanced KCN with a staging accuracy of 98% [42]. Two studies used only topography images to detect and stage KCN [43▪,44]. Both studies had high overall accuracies (79% [43▪] and 93% [44]), with better performance on color-coded maps than on the raw topographic indices.…”
Section: Methods (mentioning, confidence: 99%)
“…Prior research has proposed automated methods using quantitative indices [10] (such as the Percentage Probability of Keratoconus (PPK) and the Cone Location and Magnitude Index [11]), statistical methods [12,13], and traditional machine learning algorithms [9,14,15] (such as logistic regression, K-nearest neighbours, clustering, decision trees and random forests) to diagnose keratoconus from corneal topography data. However, with the advent of deep learning and its high performance on image classification tasks, deep-learning-based methods are being widely explored for keratoconus diagnosis and have been shown to be highly accurate [5,6,16]. Lavric and Valentin [16] propose a CNN-based classifier for keratoconus detection; however, they train and test their model only on synthetic eye data.…”
Section: Related Work (mentioning, confidence: 99%)
“…They train and test their model on 354 samples, and achieve sensitivity and specificity above 90%. Chen et al [5] use corneal tomography scans, with four types of heatmap (axial, anterior elevation, posterior elevation and pachymetry) per sample, and adopt a VGG16 model to learn a keratoconus detection classifier. They train their model on 1115 samples and test it on 279 samples, achieving a sensitivity of 98.5% and a specificity of 90.0%.…”
Section: Related Work (mentioning, confidence: 99%)
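The headline figures quoted here (sensitivity 98.5%, specificity 90.0%) follow directly from the test-set confusion counts. A quick sketch of the arithmetic; the counts below are hypothetical, chosen only to reproduce rates of that order:

```python
def sensitivity(tp, fn):
    """True-positive rate: fraction of keratoconic eyes flagged as keratoconus."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of healthy eyes cleared as healthy."""
    return tn / (tn + fp)

# Hypothetical test-set counts (not the paper's actual confusion matrix):
# 200 keratoconic eyes with 3 missed; 80 healthy eyes with 8 false alarms.
tp, fn = 197, 3
tn, fp = 72, 8

print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # prints "sensitivity = 98.5%"
print(f"specificity = {specificity(tn, fp):.1%}")  # prints "specificity = 90.0%"
```

Reporting both rates matters in this setting: a screening classifier can trade specificity for sensitivity, so a single accuracy figure would hide how many healthy eyes are falsely flagged.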