2018
DOI: 10.1109/access.2017.2776126
Automated Quality Assessment of Fundus Images via Analysis of Illumination, Naturalness and Structure

Cited by 34 publications (40 citation statements)
References 30 publications
“…employed features based on the human visual system, with a support vector machine (SVM) or a decision tree to identify high-quality images [16]. A fundus image quality classifier that analyzes illumination, naturalness, and structure was also provided to assess quality [14]. Recently, deep learning techniques that integrate multi-level representations have been shown to obtain significant performances in a wide variety of medical imaging tasks.…”
Section: Introduction
confidence: 99%
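The excerpt above describes classifying image quality with an SVM trained on human-visual-system-inspired features. A minimal sketch of that idea, assuming toy features (mean luminance, RMS contrast) and synthetic labels that are purely illustrative and not from the cited papers:

```python
import numpy as np
from sklearn.svm import SVC

def hvs_features(image: np.ndarray) -> np.ndarray:
    """Toy HVS-style feature vector: mean luminance and RMS contrast."""
    img = image.astype(float) / 255.0
    return np.array([img.mean(), img.std()])

rng = np.random.default_rng(0)
# Synthetic stand-ins: "good" images are brighter and higher-contrast.
good = [rng.integers(80, 200, size=(32, 32)) for _ in range(20)]
poor = [rng.integers(0, 60, size=(32, 32)) for _ in range(20)]
X = np.array([hvs_features(im) for im in good + poor])
y = np.array([1] * 20 + [0] * 20)  # 1 = good quality, 0 = poor

clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict(np.array([hvs_features(good[0]), hvs_features(poor[0])]))
```

Real systems would use richer features (sharpness, color naturalness, structure visibility) in place of the two-dimensional toy vector here.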
“…Several automated techniques for evaluating fundus image quality have been published. Shao et al 31 developed a fundus image quality classifier by the analysis of illumination, naturalness, and structure using three secondary indices. Their model achieved a sensitivity of 94.69% and a specificity of 92.29% in 80 images.…”
Section: Discussion
confidence: 99%
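The reported sensitivity (94.69%) and specificity (92.29%) are standard confusion-matrix rates. As a reminder of the definitions, a short sketch with hypothetical counts (not the paper's actual confusion matrix):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: fraction of positive cases correctly detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: fraction of negative cases correctly passed."""
    return tn / (tn + fp)

# Hypothetical counts for illustration only.
sens = sensitivity(tp=45, fn=3)   # 45 / 48 = 0.9375
spec = specificity(tn=30, fp=2)   # 30 / 32 = 0.9375
```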
“…In addition, those AI diagnostic systems exhibited better performances when dealing with good-quality images than with poor-quality images for both external datasets (ZOC and XOH), indicating that the AI diagnostic systems developed based on good-quality images cannot be readily applied to poor-quality images. However, poor-quality images are inevitable in clinical practice due to various factors, such as a dirty camera lens, head/eye movement, eyelid obstruction, operator error, patient noncompliance and obscured optical media 31 , 36 . Therefore, we propose that the systems developed using good-quality images for detecting retinal diseases in real-world settings (e.g., LDRB, retinal detachment, and retinitis pigmentosa) 18 28 need to be integrated with the DLIFS to initially discern and filter out poor-quality images, to ensure their optimum performance.…”
Section: Discussion
confidence: 99%
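The integration the authors propose — a quality filter in front of the diagnostic model — amounts to simple gating logic. A hedged sketch of that pipeline, where the function names are placeholders rather than any published API:

```python
from typing import Callable, Iterable

def gated_diagnosis(
    images: Iterable[str],
    is_good_quality: Callable[[str], bool],
    diagnose: Callable[[str], str],
) -> dict:
    """Route only good-quality images to the diagnostic model;
    flag the rest for re-acquisition instead of diagnosing them."""
    results, rejected = {}, []
    for img in images:
        if is_good_quality(img):
            results[img] = diagnose(img)
        else:
            rejected.append(img)
    return {"diagnosed": results, "rejected": rejected}

# Toy usage with stub callables standing in for the two models.
out = gated_diagnosis(
    ["a.png", "b.png"],
    is_good_quality=lambda p: p == "a.png",
    diagnose=lambda p: "normal",
)
```

The design point is that the diagnostic model never sees images it was not trained to handle; rejected images are surfaced to the operator rather than silently misclassified.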
“…Two proprietary datasets (LOCAL1 and LOCAL2) and two public datasets (DRIMDB and DRIVE [53,55]), totaling 536 fundus images, were used for the experiments. F. Shao, Y. Yang, Q. Jiang, G. Jiang, and Y. S. Ho [101] presented a retinal IQA method based on an idea similar to [100]. All the steps in [100] and the proposed method are the same, except that the features are used as quality parameters.…”
Section: Feature Extraction Based On Generic Image Statistics
confidence: 99%
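"Generic image statistics" in this context means simple global descriptors computed directly from pixel intensities. A minimal sketch, assuming an illustrative statistic set (mean, variance, histogram entropy) rather than the exact features of the cited method:

```python
import numpy as np

def generic_statistics(image: np.ndarray) -> dict:
    """Compute simple global statistics usable as quality parameters."""
    img = image.astype(float)
    # Normalized intensity histogram over the 8-bit range.
    hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    return {
        "mean": float(img.mean()),
        "variance": float(img.var()),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

# A constant image has zero variance and zero entropy.
flat = np.full((16, 16), 128, dtype=np.uint8)
stats = generic_statistics(flat)
```

Low variance or low entropy, for example, can indicate an under-exposed or washed-out fundus capture, which is why such statistics work as coarse quality parameters.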