2021
DOI: 10.1007/978-3-030-87589-3_44

Self-supervised Mean Teacher for Semi-supervised Chest X-Ray Classification

Abstract: The training of deep learning models generally requires a large amount of annotated data for effective convergence and generalisation. However, obtaining high-quality annotations is a labour-intensive and expensive process because the labelling task requires expert radiologists. The study of semi-supervised learning in medical image analysis is therefore of crucial importance, given that it is much less expensive to obtain unlabelled images than to acquire images labelled by expert radiologists. Essentially, semi-sup…


Citations: Cited by 23 publications (5 citation statements)
References: 37 publications
“…Besides, we choose two existing semisupervised medical image classification methods [30, 41] to compare with our ACCN model. S2MTS2 [30] combines self-supervised mean-teacher pretraining with semisupervised fine-tuning to solve multilabel chest X-ray classification; SRC-MT [41] proposes a sample relation consistency paradigm to effectively exploit unlabeled data by modeling the relationship information among different medical image samples. To compare the ACCN with them, we run their publicly available code on the Messidor dataset with the same settings.…”
Section: Methods
Citation type: mentioning
confidence: 99%
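The statement above summarises the two ingredients of S2MTS2: self-supervised mean-teacher pretraining followed by semi-supervised fine-tuning with a consistency objective. As a rough illustration of the fine-tuning step only, the sketch below shows a generic PyTorch mean-teacher update for multilabel classification. It is an assumed, minimal rendering rather than the authors' released implementation; the function name mean_teacher_step, the loss weighting, and the EMA decay value are all illustrative choices.

    import torch
    import torch.nn.functional as F

    def mean_teacher_step(student, teacher, optimizer,
                          x_labeled, y_labeled, x_unlabeled_a, x_unlabeled_b,
                          consistency_weight=1.0, ema_decay=0.999):
        # Assumed sketch of one mean-teacher fine-tuning step for multilabel
        # chest X-ray classification; not the official S2MTS2 code.
        student.train()

        # Supervised multilabel loss on the labelled batch.
        sup_loss = F.binary_cross_entropy_with_logits(student(x_labeled), y_labeled)

        # Consistency loss: student and teacher see two augmentations of the
        # same unlabelled images; the frozen teacher provides soft targets.
        with torch.no_grad():
            teacher_probs = torch.sigmoid(teacher(x_unlabeled_a))
        student_probs = torch.sigmoid(student(x_unlabeled_b))
        cons_loss = F.mse_loss(student_probs, teacher_probs)

        loss = sup_loss + consistency_weight * cons_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # The teacher tracks the student as an exponential moving average.
        with torch.no_grad():
            for t_p, s_p in zip(teacher.parameters(), student.parameters()):
                t_p.mul_(ema_decay).add_(s_p, alpha=1.0 - ema_decay)

        return float(loss)

In mean-teacher setups of this kind, the EMA teacher is often the model evaluated at test time, since its averaged weights tend to be more stable than the student's.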
“…Pang et al [29] developed a radiomics model based on a semisupervised GAN method to perform data augmentation in breast ultrasound images. Liu et al [30] proposed a self-supervised mean teacher for chest X-ray classification that combines self-supervised mean-teacher pretraining with semisupervised fine-tuning. Bakalo et al [31] designed a deep learning architecture for multiclass classification and localization of abnormalities in medical imaging, illustrated through experiments on mammograms.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
“…To evaluate whether self-supervised CXR pretraining outperforms ImageNet pretraining, we selected seven SSL methods, three that only use CXR images (MedAug [16], S2MTS2 [17], MoCo-CXR [18]) and four that use both chest X-ray images and corresponding radiology reports (CXR-RePaiR-CLIP [19], ConVIRT [20], REFERS [21], GLoRIA [22]). We give further details on these learning algorithms in the Methods section.…”
Section: Methods
Citation type: mentioning
confidence: 99%
“…Teacher-student learning for medical applications - Li et al [156] designed a new SSL approach based on the teacher-student architecture to learn distinguishing representations from gastric X-ray images for a downstream task, gastritis detection. Mean Teacher [157], one of the student-teacher frameworks, was integrated by Liu et al [158] into the pretraining process for semisupervised fine-tuning on thorax disease multilabel classification. Park et al [159] used information distillation between teacher and student frameworks together with a vision transformer model for chest X-ray diagnosis, including tuberculosis, pneumothorax, and COVID-19.…”
Section: Instance-instance Contrastive Learning for Medical Image Ana...
Citation type: mentioning
confidence: 99%
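The self-supervised pretraining stage referenced throughout these statements typically relies on an instance-instance contrastive objective of the kind named in this section heading. Below is a minimal, generic InfoNCE-style loss over two augmented views of the same batch of images, offered only as an illustration of that family of objectives; the function name info_nce_loss and the temperature value are assumptions, and the specific pretext tasks used by the cited papers (for example MoCo-style queues or report-text alignment) differ in their details.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.1):
        # z1, z2: (N, D) projected embeddings of two augmented views of the
        # same N images. Generic sketch of an instance-instance contrastive
        # objective; not the exact loss of any single cited method.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature            # (N, N) cosine similarities
        targets = torch.arange(z1.size(0), device=z1.device)
        # The matching view (diagonal entry) is the positive for each anchor;
        # every other image in the batch serves as a negative.
        return F.cross_entropy(logits, targets)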