2021
DOI: 10.1088/1742-6596/1955/1/012037
A crowdsourcing framework for retinal image semantic annotation and report documentation with deep learning enhancement

Abstract: We propose and implement a crowdsourcing framework for retinal image annotation to improve annotation efficiency. In this study, the open-source Bluelight was taken as the backbone of the front end for online manual retinal image semantic annotation and report documentation, and on top of that, intelligent annotation and classification with deep learning (DL) were added. For the DL modules, we trained a Mask-RCNN model to explicitly label the areas of the optic disc and macula. Furthermore, we trained I…
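The pipeline the abstract describes — a DL model pre-annotating regions such as the optic disc and macula, with crowdworkers then confirming or correcting those labels — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual code: the names (`Region`, `AnnotationRecord`, `pre_annotate`, `review`) are hypothetical, and `pre_annotate` is a stub standing in for real Mask-RCNN inference.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    label: str          # e.g. "optic disc", "macula"
    bbox: tuple         # (x, y, w, h) in pixels
    confidence: float   # model confidence; 1.0 for manual labels
    source: str         # "model" or "human"

@dataclass
class AnnotationRecord:
    image_id: str
    regions: list = field(default_factory=list)
    report: str = ""    # free-text report document

def pre_annotate(image_id):
    """Stub standing in for Mask-RCNN inference: returns
    model-proposed regions for the optic disc and macula."""
    return [
        Region("optic disc", (120, 80, 60, 60), 0.94, "model"),
        Region("macula", (200, 140, 50, 50), 0.88, "model"),
    ]

def review(record, corrections):
    """Crowdworker review step: keep confident model regions and
    replace the rest with manual corrections."""
    kept = [r for r in record.regions if r.confidence >= 0.9]
    record.regions = kept + [
        Region(lbl, box, 1.0, "human") for lbl, box in corrections
    ]
    return record

rec = AnnotationRecord("fundus_001", pre_annotate("fundus_001"))
rec = review(rec, corrections=[("macula", (205, 138, 48, 52))])
```

Under this sketch, each final record mixes high-confidence model regions with human-corrected ones, which is the efficiency gain the framework targets: annotators edit proposals instead of labeling from scratch.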

Cited by 1 publication (1 citation statement)
References 16 publications
“…characterized by low precision of isolation). To increase the efficiency of this procedure one can use semi-automatic semantic annotation tools 1,2,3,4 where some simplified and manually tuned approaches are already implemented to isolate specific objects. These semi-automatic tools work in the human-in-the-loop (HITL) paradigm 5 , where each automatically annotated image is assessed by a human and, if the annotation fails, the operator can tune the parameters of the annotation tool or correct the output manually.…”
Section: Introduction
confidence: 99%
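The HITL loop described in this excerpt — assess each automatic annotation, then either retune the tool's parameters or fall back to manual correction — can be sketched as a simple control loop. Everything here is hypothetical illustration: `auto_annotate` is a toy thresholding "tool", and the `accept` callback stands in for the human assessment step.

```python
def auto_annotate(image, threshold):
    # Stand-in for a semi-automatic annotation tool: "segments" an
    # image by keeping pixel values at or above a tunable threshold.
    return [v for v in image if v >= threshold]

def human_in_the_loop(image, accept, threshold=0.5, max_rounds=3):
    """Assess each automatic annotation; on failure, lower the
    threshold (operator retuning the tool) and retry. If all rounds
    fail, fall back to a fully manual annotation."""
    for _ in range(max_rounds):
        annotation = auto_annotate(image, threshold)
        if accept(annotation):        # human assessment step
            return annotation, "auto"
        threshold -= 0.1              # operator tunes the tool
    return sorted(image, reverse=True)[:2], "manual"

image = [0.9, 0.2, 0.75, 0.4]
result, mode = human_in_the_loop(image, accept=lambda a: len(a) >= 3)
```

The key property of the paradigm is that the human only intervenes on failures, so the cost per image stays close to the automatic path as long as the tool succeeds often enough.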