2021
DOI: 10.48550/arxiv.2107.11279
Preprint

Re-distributing Biased Pseudo Labels for Semi-supervised Semantic Segmentation: A Baseline Investigation

Abstract: While self-training has advanced semi-supervised semantic segmentation, it severely suffers from the long-tailed class distribution of real-world semantic segmentation datasets, which biases the pseudo-labeled data toward majority classes. In this paper, we present a simple yet effective Distribution Alignment and Random Sampling (DARS) method to produce unbiased pseudo labels that match the true class distribution estimated from the labeled data. In addition, we contribute a progressive data augmentation…
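The abstract describes distribution alignment only at a high level. As a minimal illustrative sketch, not the paper's implementation, the Python function below shows one way per-class confidence thresholds combined with random sampling could re-distribute pseudo labels to match a class prior estimated from the labeled set. The function name `dars_pseudo_labels`, the `keep_ratio` parameter, and the tie-breaking scheme are assumptions for illustration.

```python
import numpy as np

def dars_pseudo_labels(probs, class_prior, keep_ratio=0.5, ignore_index=255, seed=0):
    """Illustrative sketch: align pseudo-label class frequencies with a prior.

    probs:       (N, C) softmax outputs for N unlabeled pixels.
    class_prior: (C,) class distribution estimated from the labeled set.
    keep_ratio:  overall fraction of pixels to pseudo-label (assumed knob).
    Returns an (N,) array of pseudo labels; dropped pixels get ignore_index.
    """
    rng = np.random.default_rng(seed)
    n, c = probs.shape
    hard = probs.argmax(axis=1)          # initial (majority-biased) pseudo labels
    conf = probs.max(axis=1)             # confidence of each prediction
    labels = np.full(n, ignore_index, dtype=np.int64)

    total_keep = int(keep_ratio * n)
    for cls in range(c):
        idx = np.where(hard == cls)[0]
        # Per-class budget follows the labeled-set prior rather than the
        # biased pseudo-label distribution.
        budget = min(len(idx), int(round(class_prior[cls] * total_keep)))
        if budget == 0:
            continue
        # Distribution alignment: keep the most confident pixels up to the
        # budget; random sampling breaks ties at the threshold boundary so
        # equally confident pixels are not kept deterministically.
        order = idx[np.argsort(-conf[idx])]
        cutoff = conf[order[budget - 1]]
        above = order[conf[order] > cutoff]
        tied = order[conf[order] == cutoff]
        chosen = np.concatenate(
            [above, rng.choice(tied, budget - len(above), replace=False)]
        )
        labels[chosen] = cls
    return labels
```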

Cited by 1 publication (4 citation statements) · References 62 publications
“…Table 3 shows that our approach outperforms the SOTA methods for each architecture and backbone setting. For PSPNet, our approach outperforms DARS [18] by 0.7% mIoU and CCT [34] by 5.19%. In the experiments, our approach outperforms other SOTA approaches by a large margin.…”
Section: Results on Official Labelled Set of PASCAL VOC 2012
confidence: 94%
“…We load the ImageNet pre-trained checkpoint, and the segmentation heads are initialized randomly. Following previous papers [9,18,34], we utilise the polynomial learning-rate decay $(1 - \mathrm{iter}/\mathrm{max\_iter})^{0.9}$. We also test our method on PSPNet [22,34] to show the generalization of our approach.…”
Section: Methods
confidence: 99%
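The decay quoted above is the standard “poly” schedule common in semantic segmentation; a minimal sketch follows, where the example base_lr and max_iter values are hypothetical.

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """Standard 'poly' schedule: base_lr * (1 - cur_iter / max_iter) ** power."""
    return base_lr * (1.0 - cur_iter / max_iter) ** power

# Hypothetical values for illustration: base_lr = 0.01, max_iter = 40_000.
# poly_lr(0.01, 0, 40_000)      -> 0.01       (full rate at the start)
# poly_lr(0.01, 20_000, 40_000) -> ~0.00536   (0.01 * 0.5 ** 0.9)
# poly_lr(0.01, 40_000, 40_000) -> 0.0        (decays to zero at the end)
```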