2020
DOI: 10.1007/978-3-030-58523-5_28
FeatMatch: Feature-Based Augmentation for Semi-supervised Learning

Cited by 78 publications (63 citation statements)
References 18 publications
“…On PACS, we compare our method with the following baselines and the state-of-the-art semi-supervised learning approaches (i.e., FixMatch [25] and FeatMatch [24]) on the classification accuracy of the target domain, using ResNet-18 and ResNet-50. We also report our performance and baseline results on the OfficeHome and miniDomainNet datasets using ResNet-18.…”
Section: B. Comparison With Other Methods
confidence: 99%
“…ReMixMatch [23] further improves MixMatch [22]. FeatMatch [24] applies learned feature-based augmentation within the consistency loss. FixMatch [25] builds on UDA and ReMixMatch, combining pseudo-labeling with consistency regularization, and achieves strong performance on SSL benchmarks.…”
Section: B. Semi-Supervised Learning
confidence: 99%
“…Semi-Supervised Learning (SSL) trains models by leveraging both labeled and unlabeled data; recent works [43,52,57] divide the general idea into two groups, i.e., consistency regularization [5,6,13,20,23,30,36,49] and pseudo-labeling [1,3,15,18,42,44]. The first encourages consistent predictions on different augmented versions of the same image, such as virtual adversarial perturbations [30], image mix-up [6,55], grid-masking [8], and even an ensemble of all popular augmentations [5,10,36].…”
Section: Related Work
confidence: 99%
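The consistency-regularization idea described in the quote, consistent predictions across different augmented versions of the same image, can be sketched as a squared-error penalty between the two views' predicted distributions (a Pi-model-style variant; the function name is illustrative):

```python
import numpy as np

def consistency_loss(probs_a, probs_b):
    """Mean squared error between class-probability predictions on two
    augmented views of the same unlabeled images (Pi-model style).
    probs_a, probs_b: (N, C) softmax outputs for the two views."""
    return float(np.mean((probs_a - probs_b) ** 2))
```

Other consistency losses (e.g. KL divergence, as in UDA) differ in the distance used but share the same structure: no labels are needed, only agreement between views.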
“…Semi-supervised learning. Semi-supervised learning for image classification aims to leverage unlabeled data to improve model performance while making the best use of limited labeled data [5,11,30,52]. The three most explored directions are consistency regularization [32,41], entropy minimization [21], and pseudo-labeling [46].…”
Section: Related Work
confidence: 99%