2021
DOI: 10.1016/j.media.2021.101997

Active, continual fine tuning of convolutional neural networks for reducing annotation efforts


Cited by 32 publications (16 citation statements)
References 40 publications
“…Notably, at small shot numbers, different shot data can have very large impacts on the fine-tuning process, whereas we observed that as the shot number increases, the variance becomes substantially smaller. In the future, active learning (38) on cell instance segmentation promises to refine shot data selection for fine-tuning.…”
Section: Discussion (mentioning)
confidence: 99%
“…In addition to model transfer, TL can also be used to reduce the difficulty and costs of data annotation. Zhou et al. [51] use active learning and TL for medical data labeling, which reduces labeling costs by at least half compared with state-of-the-art methods. In medical US image analysis, TL is frequently used to pre-train neural networks [3], [41], [52].…”
Section: Medical Ultrasound Image Preprocessing (mentioning)
confidence: 99%
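
The excerpt above describes the general pattern of pairing active learning with transfer learning so that only the most informative samples are sent for annotation. The sketch below is a minimal, hedged illustration of that loop in PyTorch: an entropy-based query step, a (simulated) annotation step, and fine-tuning of a pretrained backbone on the growing labeled set. The ResNet-18 backbone, query size of 8, entropy scoring, and the random placeholder pool and labels are all illustrative assumptions, not the actual procedure of Zhou et al.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def predictive_entropy(model, images):
    """Score each unlabeled image by the entropy of the model's prediction."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

# Transfer learning: start from an ImageNet-pretrained backbone and replace the head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # hypothetical 2-class target task

labeled_x, labeled_y = [], []                          # grows as annotations arrive
unlabeled_x = torch.randn(200, 3, 224, 224)            # placeholder pool of unannotated images

for round_idx in range(5):                             # a few active learning rounds
    # 1) Query: pick the samples the current model is least certain about.
    scores = predictive_entropy(model, unlabeled_x)
    query_idx = scores.topk(8).indices

    # 2) Annotate: a human expert would label the queried samples;
    #    random labels stand in here purely so the sketch runs.
    labeled_x.append(unlabeled_x[query_idx])
    labeled_y.append(torch.randint(0, 2, (len(query_idx),)))
    keep = torch.ones(len(unlabeled_x), dtype=torch.bool)
    keep[query_idx] = False
    unlabeled_x = unlabeled_x[keep]

    # 3) Fine-tune the pretrained model on everything labeled so far.
    x, y = torch.cat(labeled_x), torch.cat(labeled_y)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    model.train()
    for _ in range(3):                                 # a few passes per round
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
```

In this pattern the annotation cost saving comes from labeling only the queried samples rather than the whole pool; the query criterion is where individual methods differ.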
“…Minimizing the number of annotated samples requires that the labeled samples be distinct from one another. Therefore, uncertainty and diversity are two natural metrics for informativeness and representativeness [1], [2], upon which the two articles featured in this Special Issue present two methods.…”
Section: A. Active Learning (mentioning)
confidence: 99%
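
Since the excerpt names uncertainty and diversity as the two standard criteria for informativeness and representativeness, the sketch below shows one common way to score and combine them when choosing a batch to annotate. The specific choices (predictive entropy for uncertainty, nearest-neighbour feature distance for diversity, min-max normalization, and the trade-off weight alpha) are assumptions for illustration, not the methods of the two articles the editorial refers to.

```python
import torch
import torch.nn.functional as F

def uncertainty_scores(logits):
    """Predictive entropy: higher means the model is less sure about the sample."""
    probs = F.softmax(logits, dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

def diversity_scores(candidate_feats, selected_feats):
    """Distance to the nearest already-selected sample in feature space:
    higher means the candidate covers something the labeled set does not."""
    if selected_feats.numel() == 0:
        return torch.full((len(candidate_feats),), float("inf"))
    dists = torch.cdist(candidate_feats, selected_feats)    # pairwise L2 distances
    return dists.min(dim=1).values

def rank_candidates(logits, candidate_feats, selected_feats, alpha=0.5):
    """Blend the two criteria; alpha is an illustrative trade-off weight."""
    u = uncertainty_scores(logits)
    d = diversity_scores(candidate_feats, selected_feats)
    d = torch.where(torch.isinf(d), torch.ones_like(d), d)  # nothing selected yet: rely on uncertainty
    # Min-max normalize each criterion so neither dominates purely by scale.
    u = (u - u.min()) / (u.max() - u.min() + 1e-12)
    d = (d - d.min()) / (d.max() - d.min() + 1e-12)
    return alpha * u + (1 - alpha) * d

# Toy usage: 100 unlabeled candidates, 10 already-labeled samples, 64-d features.
logits = torch.randn(100, 5)                 # model outputs for the candidates
candidate_feats = torch.randn(100, 64)       # e.g. penultimate-layer embeddings
selected_feats = torch.randn(10, 64)
scores = rank_candidates(logits, candidate_feats, selected_feats)
to_annotate = scores.topk(8).indices         # the batch sent to the annotator
```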
“…Annotation-efficient deep learning refers to methods and practices that yield high-performance deep learning models without the use of massive carefully labeled training datasets. This paradigm has recently attracted attention from the medical imaging research community because (1) it is difficult to collect large, representative medical imaging datasets given the diversity of imaging protocols, imaging devices, and patient populations, (2) it is expensive to acquire accurate annotations from medical experts even for moderately sized medical imaging datasets, and (3) it is infeasible to adapt data-hungry deep learning models to detect and diagnose rare diseases whose low prevalence hinders data collection.…”
Section: Guest Editorial, Annotation-Efficient Deep Learning: The Holy Grail of Medical Imaging, I. Introduction (mentioning)
confidence: 99%