2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw56347.2022.00454

SaR: Self-adaptive Refinement on Pseudo Labels for Multiclass-Imbalanced Semi-supervised Learning

Cited by 8 publications (7 citation statements) · References 24 publications
“…For instance, seminal works (Cui et al., 2019; Ren et al., 2020) reweight the loss functions according to the sampling frequency of each class. Recent literature (Lai et al., 2022) enhances the robustness of SSL to long-tailed class-imbalanced problems by designing weights in the unsupervised loss based on estimating the learning difficulty of each class. In contrast, several studies (Cao et al., 2019; Menon et al., 2020; Tan et al., 2020) have attempted to adjust the loss margins of each class.…”
Section: Loss Function for Long-Tailed Learning
Mentioning confidence: 99%
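The frequency-based re-weighting described in the quoted passage is easy to illustrate. Below is a minimal sketch, assuming per-class sample counts are available; the function name and the mean-one normalization are our own choices, not taken from any of the cited papers:

```python
import torch
import torch.nn.functional as F

def frequency_weighted_ce(logits, targets, class_counts):
    """Cross-entropy re-weighted by inverse class frequency (sketch)."""
    freq = class_counts.float() / class_counts.sum()
    weights = 1.0 / freq                                   # rarer classes get larger weights
    weights = weights / weights.sum() * len(class_counts)  # normalize to mean weight 1
    return F.cross_entropy(logits, targets, weight=weights)
```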
“…For the unsupervised loss function CEL, the weight parameter λ increases linearly per epoch according to λ = λ_u · epoch / epoch_max, and the confidence threshold τ is set to 0.95. As in previous works (Sohn et al., 2020; Lai et al., 2022), we employ an exponential moving average of the model parameters to report the final performance. We keep the other hyper-parameters the same as in the ImageNet experiments of FixMatch, except for those mentioned above.…”
Section: Implementation Details
Mentioning confidence: 99%
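The quoted implementation details combine three standard ingredients: a linearly ramped unsupervised-loss weight, a fixed confidence threshold on pseudo labels, and an EMA of model parameters for evaluation. A minimal sketch of all three, assuming a FixMatch-style weak/strong augmentation pair; the function names are illustrative and not from the cited paper's code:

```python
import torch
import torch.nn.functional as F

def unsup_weight(epoch, epoch_max, lambda_u=1.0):
    # Linear ramp-up: lambda = lambda_u * epoch / epoch_max.
    return lambda_u * epoch / epoch_max

def masked_pseudo_label_loss(logits_weak, logits_strong, tau=0.95):
    # Keep only pseudo labels whose confidence exceeds tau (FixMatch-style).
    probs = torch.softmax(logits_weak.detach(), dim=-1)
    conf, pseudo = probs.max(dim=-1)
    mask = (conf >= tau).float()
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (loss * mask).mean()

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    # Exponential moving average of parameters, used for final evaluation.
    # (Buffers such as BatchNorm statistics are omitted in this sketch.)
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1 - decay)
```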
“…generate unbiased and accurate pseudo-labels, such as resampling (Lee, Shin, and Kim 2021; Guo and Li 2022), re-weighting (Lai et al. 2022), transfer learning (Fan et al. 2022), and logit adjustment (Wang et al. 2022a). However, these works usually assume a similar class distribution between the labeled and unlabeled sets and show inferior performance when this assumption is violated.…”
Section: Long-Tailed Learning
Mentioning confidence: 99%
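Of the techniques listed above, logit adjustment is the most compact to illustrate. A sketch of the post-hoc variant, assuming estimated class priors are available; the function name and the default τ are illustrative, not from the cited works:

```python
import torch

def adjusted_logits(logits, class_priors, tau=1.0):
    """Post-hoc logit adjustment: subtract tau * log(prior) so that
    head classes are no longer favored at prediction time."""
    return logits - tau * torch.log(class_priors + 1e-12)
```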
“…Baselines. We compare with several LTSSL algorithms published in top conferences and journals in the past few years. These baseline algorithms include DARP (Kim et al. 2020), CReST (Wei et al. 2021) and its variant CReST+ (Wei et al. 2021), ABC (Lee, Shin, and Kim 2021), DASO (Oh, Kim, and Kweon 2022), CoSSL (Fan et al. 2022), SAW (Lai et al. 2022), Adsh (Guo and Li 2022), DePL (Wang et al. 2022a), RDA (Duan et al. 2022), and ACR (Wei and Gan 2023). For a fair comparison, we test these baselines and our CPE algorithm on the widely used USB codebase.…”
Section: Experimental Setting
Mentioning confidence: 99%
“…In recent semi-supervised learning methods, re-weighting is gradually gaining attention. Wei et al. (2021) utilize the unlabeled data with pseudo labels according to an estimated class distribution; Kim et al. (2020) develop a framework to refine the class distribution; Lai et al. (2022) use the effective number defined by Cui et al. (2019) to produce adaptive weights.…”
Section: Related Work
Mentioning confidence: 99%
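The effective-number weighting attributed to Cui et al. (2019) in the passage above has a closed form: E_n = (1 − β^n) / (1 − β) for a class with n samples, with per-class weights proportional to 1/E_n. A minimal sketch; the function name and the mean-one normalization are our own:

```python
import numpy as np

def effective_number_weights(class_counts, beta=0.999):
    """Per-class weights from the effective number of samples
    (Cui et al., 2019): E_n = (1 - beta^n) / (1 - beta)."""
    effective_num = (1.0 - np.power(beta, class_counts)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights / weights.sum() * len(class_counts)  # normalize to mean weight 1
```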