2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.02001
BoostMIS: Boosting Medical Image Semi-supervised Learning with Adaptive Pseudo Labeling and Informative Active Annotation

Cited by 54 publications (29 citation statements)
References 38 publications
“…One adds an unsupervised loss term (i.e., a regularizer) into the loss function so the training model is expected to learn the labeled and unlabeled data at the same time [28]- [31]. Another labels the unlabeled data with pseudo labels and the pseudo-labeled data are then used in training with a supervised loss [32]- [35].…”
Section: Semi-supervised Learning (SSL)
Citation type: mentioning (confidence: 99%)
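The pseudo-labeling route described in the statement above can be sketched in a few lines: unlabeled samples whose predicted class probability clears a confidence threshold receive hard labels and re-enter training under the supervised loss. This is a minimal illustrative sketch, not the paper's adaptive scheme; the threshold value and the `pseudo_label` helper are assumptions.

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Keep only unlabeled samples whose max class probability clears
    the confidence threshold; return their indices and hard labels."""
    conf = probs.max(axis=1)                 # per-sample confidence
    keep = np.where(conf >= threshold)[0]    # confident samples only
    return keep, probs[keep].argmax(axis=1)  # hard pseudo-labels

# toy predicted class probabilities for 4 unlabeled samples
probs = np.array([
    [0.98, 0.02],   # confident -> pseudo-labeled as class 0
    [0.60, 0.40],   # uncertain -> discarded this round
    [0.05, 0.95],   # confident -> pseudo-labeled as class 1
    [0.50, 0.50],   # uncertain -> discarded this round
])
idx, labels = pseudo_label(probs, threshold=0.95)
print(idx.tolist(), labels.tolist())  # -> [0, 2] [0, 1]
```

In practice the selected samples would then be mixed into the labeled pool for the next training iteration, with the threshold (fixed here, adaptive in BoostMIS) controlling the precision/recall trade-off of the pseudo-labels.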
“…Consistency regularization. It has been extensively used in SSL [3,20,40] to hold similar output distribution when input was perturbed. Since data augmentation [13,36,39] shows huge superiority in supervised learning, recent works have begun to give more attention to data augmentation perturbation and have achieved significant successes.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
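Consistency regularization, as summarized above, penalizes the model when two augmented views of the same unlabeled input yield different output distributions. A minimal Pi-model-style sketch (the function names and the MSE choice of divergence are assumptions, not the cited papers' exact formulation):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_weak, logits_strong):
    """Mean squared error between the predicted distributions of two
    augmented views of the same unlabeled batch."""
    p, q = softmax(logits_weak), softmax(logits_strong)
    return float(np.mean((p - q) ** 2))

# identical views incur zero penalty; diverging views are penalized
z_weak = np.array([[2.0, 0.5]])
z_strong = np.array([[0.5, 2.0]])
print(consistency_loss(z_weak, z_weak))  # -> 0.0
```

This unsupervised term is added to the supervised loss, so the model is pushed toward stable predictions under the data-augmentation perturbations mentioned in the statement.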
“…Active learning methods select the data samples that can benefit the model most in each iteration and request humans to label them to push the usage of human efforts to the minimum. To determine which data sample to be labeled by humans, some approaches use probability models and prioritize the data samples with high prediction inconsistency [12,14,26,52], some depend on the vectorized representation from deep learning models [16,34,53], and some calculate the low-rank matrix representation for both labeled and unlabeled data to calculate the informativeness [45].…”
Section: Active Learning
Citation type: mentioning (confidence: 99%)
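The probability-based selection strategy described above (prioritizing samples with high prediction inconsistency) is often realized as uncertainty sampling. A hedged sketch using predictive entropy as the uncertainty score; the `budget` parameter and helper names are illustrative assumptions:

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Predictive entropy per sample; higher means more uncertain."""
    return -(probs * np.log(probs + eps)).sum(axis=1)

def select_for_annotation(probs, budget=1):
    """Rank unlabeled samples by entropy and return the indices of the
    `budget` most uncertain ones to hand to a human annotator."""
    return np.argsort(-entropy(probs))[:budget]

# toy class probabilities: sample 0 is maximally uncertain
probs = np.array([
    [0.50, 0.50],
    [0.90, 0.10],
    [0.99, 0.01],
])
picked = select_for_annotation(probs, budget=1)
print(picked.tolist())  # -> [0]
```

Labeling only these high-uncertainty samples per iteration aims to maximize model improvement per unit of human annotation effort, the goal the statement attributes to active learning.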