2020
DOI: 10.1007/978-3-030-59710-8_39
Comparing to Learn: Surpassing ImageNet Pretraining on Radiographs by Comparing Image Representations

Cited by 74 publications (67 citation statements)
References 9 publications
“…The network is initialized with the pre-trained weights of the ImageNet dataset [46], and the fully connected layers are removed. Many studies have reported that pre-training on ImageNet can improve performance on medical image classification tasks [47], [48].…”
Section: Deep Feature Extraction (mentioning)
Confidence: 99%
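As a rough illustration of the feature-extraction setup described in this excerpt, the sketch below loads ImageNet pre-trained weights and strips the final fully connected layer; the ResNet-50 backbone, the torchvision API, and the input size are assumptions, not details taken from the citing paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed backbone: a torchvision ResNet-50 with ImageNet pre-trained weights.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Remove the final fully connected layer so the network serves as a
# fixed deep feature extractor (2048-d global-pooled embeddings).
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)             # dummy image-sized input
    features = feature_extractor(x).flatten(1)  # shape: (1, 2048)
```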
“…Using EMA weights has been shown to help obtain more accurate predictions while stabilizing the training process [35, 44]. Compared to CSD, we replace the siamese network with a student-teacher architecture to improve detection performance.…”
Section: Investigation of EMA Factor (mentioning)
Confidence: 99%
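A minimal sketch of the exponential-moving-average (EMA) teacher update mentioned in this excerpt is given below; the momentum value and the student/teacher naming are illustrative assumptions rather than settings taken from the cited work.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.999):
    """EMA update: teacher <- momentum * teacher + (1 - momentum) * student.

    The momentum of 0.999 is an illustrative default, not a value
    reported in the cited paper.
    """
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
```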
“…Consistency-based semi-supervised learning methods [14] mainly utilize self-supervision [46, 5, 44] and usually consist of two procedures. First, a pair of input images is synthesized via data augmentation strategies.…”
Mentioning
Confidence: 99%
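The two-view consistency procedure sketched in this excerpt could look roughly like the following; the `augment` callable, the MSE consistency loss, and the stop-gradient on one branch are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, image, augment):
    """Consistency regularization on a single unlabeled image.

    Two independently augmented views of the same image should yield
    similar predictions; `augment` is a hypothetical stochastic
    augmentation callable (e.g. random crop + flip).
    """
    view_a, view_b = augment(image), augment(image)
    pred_a = F.softmax(model(view_a), dim=1)
    with torch.no_grad():                 # treat the second view as the target
        pred_b = F.softmax(model(view_b), dim=1)
    return F.mse_loss(pred_a, pred_b)
```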
“…Nonetheless, despite their effectiveness, unsupervised learning algorithms, including self-supervised learning, are frequently overshadowed by supervised learning algorithms. For example, as demonstrated in the most recent 3D pre-training studies [24], [42], even state-of-the-art self-supervised learning algorithms could not outperform ImageNet pre-trained models. This may be because, in the absence of supervised signals, semantically discriminative representations are difficult to mine with un-/self-supervised learning algorithms.…”
Section: Introduction (mentioning)
Confidence: 99%