Surrogate Supervision for Medical Image Analysis: Effective Deep Learning From Limited Quantities of Labeled Data
Preprint, 2019. DOI: 10.48550/arxiv.1901.08707

Cited by 4 publications (6 citation statements: 0 supporting, 6 mentioning, 0 contrasting). References 12 publications.
“…Specifically, self-supervised pre-training consists of assigning surrogate or proxy labels to the unlabeled data and then training a randomly initialized network using the resulting surrogate supervision signal. The advantage of model pre-training using unlabeled medical data is that the learned knowledge is related to the target medical task, and thus can be more effective than transfer learning from a foreign domain (e.g., Tajbakhsh et al (2019) and Ross et al (2018)).…”
Section: Self-supervised Pre-training
Citation type: mentioning; confidence: 99%
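As a concrete illustration of the surrogate-labeling idea described in this statement, here is a minimal sketch assuming PyTorch and a rotation-based surrogate task (the function name is illustrative, not from the cited papers): each unlabeled image is rotated by 0, 90, 180, or 270 degrees, and the rotation index serves as a label obtained for free.

```python
# Minimal sketch of surrogate-label generation (assumed PyTorch; not the
# cited papers' code). The rotation index becomes the surrogate label.
import torch

def make_rotation_batch(images: torch.Tensor):
    """images: (N, C, H, W) unlabeled batch, assumed square (H == W).
    Returns a (4N, C, H, W) batch of rotated copies and surrogate labels
    in {0, 1, 2, 3} (the number of 90-degree turns applied)."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)
```

A randomly initialized network trained to predict these labels with ordinary cross-entropy receives exactly the "surrogate supervision signal" the statement describes, without requiring any manual annotation.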
“…The pre-trained model is then fine-tuned for the task of body part recognition in CT and MR images. Tajbakhsh et al (2019) use prediction of image orientation as the surrogate task, where the input image is rotated or flipped and the network is trained to predict the applied transformation. The authors show that this surrogate task is highly effective for diabetic retinopathy classification in fundus images and lung lobe segmentation in chest CT scans.…”
Section: Self-supervised Pre-training
Citation type: mentioning; confidence: 99%
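The orientation-prediction pretext just described can be sketched as follows. This is a hedged illustration assuming PyTorch; the tiny encoder, the head names, and the 8-way rotation-plus-flip label space are assumptions for the sketch, not the architecture or exact setup of Tajbakhsh et al (2019).

```python
# Sketch: pre-train a randomly initialized encoder to predict which of 8
# geometric transforms (4 rotations x optional horizontal flip) was applied,
# then reuse the encoder with a task-specific head. Assumed PyTorch.
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # stand-in backbone, not the paper's
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
orient_head = nn.Linear(16, 8)                # 4 rotations x optional flip
opt = torch.optim.Adam(list(encoder.parameters()) + list(orient_head.parameters()))

def pretrain_step(images: torch.Tensor) -> float:
    """One surrogate-supervision step on an unlabeled batch (assumed square)."""
    xs, ys = [], []
    for flip in (False, True):
        base = torch.flip(images, dims=(3,)) if flip else images
        for k in range(4):
            xs.append(torch.rot90(base, k, dims=(2, 3)))
            ys.append(torch.full((images.size(0),), 4 * int(flip) + k,
                                 dtype=torch.long))
    x, y = torch.cat(xs), torch.cat(ys)
    loss = nn.functional.cross_entropy(orient_head(encoder(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# After pre-training, swap the orientation head for a task head (e.g. a
# classifier or a segmentation decoder) and fine-tune on the labeled set.
task_head = nn.Linear(16, 2)                  # e.g. binary disease classification
```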
“…As in this study, [36] looked at self-supervised learning for 3D medical image segmentation. They used rotation prediction to pretrain a dense V-Net for lung lobe segmentation, a task which requires strong supervision and large amounts of labeled data.…”
Section: Discussion
Citation type: mentioning; confidence: 99%
“…Another possibility is to modify the data appearance and try to predict the transformation, e.g. image rotation [36]. In multi-task learning (MTL), additional unsupervised task(s) are performed, either sequentially [8,1] or simultaneously, in order to improve the supervised target task of segmentation.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
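To make the simultaneous multi-task variant mentioned in this statement concrete, here is a minimal sketch in which a supervised segmentation loss and an unsupervised rotation-prediction loss are optimized jointly through a shared encoder. Assumed PyTorch; the function, the rotation pretext, and the weight `lam` are illustrative choices, not the cited works' actual setup.

```python
# Sketch of one simultaneous-MTL update (assumed PyTorch): segmentation and
# a surrogate rotation task share one encoder and one optimizer step.
import torch
import torch.nn as nn

def mtl_step(encoder, seg_head, aux_head, labeled, unlabeled, opt, lam=0.1):
    """labeled: (images (N,C,H,W), masks (N,H,W)); unlabeled: square images.
    seg_head maps encoder features to per-pixel logits; aux_head pools them
    to 4 rotation logits (both are assumed caller-provided modules)."""
    x_l, y_l = labeled
    x_u = unlabeled
    # Supervised branch: per-pixel cross-entropy on the labeled set.
    seg_loss = nn.functional.cross_entropy(seg_head(encoder(x_l)), y_l)
    # Unsupervised branch: predict a random 90-degree rotation of unlabeled data.
    k = int(torch.randint(0, 4, (1,)))
    y_u = torch.full((x_u.size(0),), k, dtype=torch.long)
    aux_loss = nn.functional.cross_entropy(
        aux_head(encoder(torch.rot90(x_u, k, dims=(2, 3)))), y_u)
    loss = seg_loss + lam * aux_loss           # lam: illustrative task weight
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

The sequential alternative cited above would instead run the surrogate task to completion first and then fine-tune the shared encoder on segmentation alone.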