2021
DOI: 10.48550/arxiv.2106.05095
Preprint

ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation

Abstract: In this paper, we investigate whether we can make self-training, a simple but popular framework, work better for semi-supervised segmentation. Since the core issue in the semi-supervised setting lies in the effective and efficient utilization of unlabeled data, we notice that increasing the diversity and hardness of unlabeled data is crucial to performance improvement. Being aware of this fact, we propose to adopt the most plain self-training scheme coupled with appropriate strong data augmentations on unlabeled data…
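The abstract above describes a plain self-training recipe: a teacher trained on labeled data pseudo-labels the unlabeled images, and a student is supervised on strongly augmented views of them. Below is a minimal PyTorch-style sketch of one such step; the names (student, teacher, strong_augment) and the use of hard pseudo-labels are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of one plain self-training step with strong augmentation on
# unlabeled data. All names here are illustrative assumptions.
import torch
import torch.nn.functional as F

def self_training_step(student, teacher, unlabeled_images, strong_augment):
    """Pseudo-label unlabeled images with a teacher trained on labeled data,
    then supervise the student on a strongly augmented view."""
    with torch.no_grad():
        teacher_logits = teacher(unlabeled_images)      # [B, C, H, W]
        pseudo_label = teacher_logits.argmax(dim=1)     # [B, H, W] hard labels

    # Strong (color-based) augmentation is applied only to the student's input,
    # so the pseudo-labels stay spatially aligned with the augmented image.
    student_logits = student(strong_augment(unlabeled_images))

    # Standard per-pixel cross entropy against the hard pseudo-labels.
    return F.cross_entropy(student_logits, pseudo_label)
```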

Cited by 3 publications (9 citation statements) | References 41 publications
“…Existing supervised approaches rely on large-scale annotated data, which can be too costly to acquire in practice. To alleviate this problem, many attempts [1,4,9,15,21,33,43,48] have been made towards semi-supervised semantic segmentation, which learns a model with only a few labeled samples and numerous unlabeled ones. Under such a setting, how to adequately leverage the unlabeled data becomes critical.…”
Section: IoU Reliable Unreliable (mentioning)
confidence: 99%
“…Concretely, given an unlabeled image, prior arts [27,41] borrow predictions from the model trained on labeled data, and use the pixel-wise prediction as the "ground-truth" to in turn boost the supervised model. To mitigate the problem of confirmation bias [2], where the model may suffer from incorrect pseudo-labels, existing approaches propose to filter the predictions with their confidence scores [42,43,50,51]. In other words, only the highly confident predictions are used as the pseudo-labels, while the ambiguous ones are discarded.…”
Section: IoU Reliable Unreliable (mentioning)
confidence: 99%
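The confidence filtering described in this statement is commonly implemented by marking low-confidence pixels with the loss's ignore index. The sketch below assumes a softmax segmentation head; the 0.95 threshold and the ignore value of 255 are illustrative, not values taken from any specific cited paper.

```python
# Minimal sketch of confidence-based pseudo-label filtering: only pixels whose
# softmax confidence exceeds the threshold keep their pseudo-label; the rest
# are set to an ignore index and excluded from the loss.
import torch
import torch.nn.functional as F

def filtered_pseudo_labels(teacher_logits, threshold=0.95, ignore_index=255):
    probs = F.softmax(teacher_logits, dim=1)        # [B, C, H, W]
    confidence, pseudo_label = probs.max(dim=1)     # both [B, H, W]
    pseudo_label[confidence < threshold] = ignore_index
    return pseudo_label

# Usage with a student network:
#   loss = F.cross_entropy(student_logits,
#                          filtered_pseudo_labels(teacher_logits),
#                          ignore_index=255)
```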
“…Even though the L2 loss is known to be robust, which is advantageous when dealing with the noisy predictions produced by consistency-based methods, it is also known to have poor convergence and to possibly lead to vanishing gradients. Given the reliability of the segmentation predictions produced by our extended MT model, we instead use the more effective cross entropy (CE) loss, constrained to be computed at regions of high-confidence segmentation results, represented by c(ω) in (3), following the strategy applied in self-training approaches [18,39,41].…”
Section: Training With Multiple Perturbations and a Strict Confidence… (mentioning)
confidence: 99%
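Restricting the CE loss to high-confidence regions, as described by the mask c(ω) in this statement, can be implemented with a per-pixel loss and an explicit binary mask. The sketch below is only an illustration: the teacher/student naming and the 0.9 cutoff are assumptions, not values from the cited paper.

```python
# Minimal sketch of cross entropy restricted to high-confidence regions,
# i.e. a binary confidence mask playing the role of c(omega).
import torch
import torch.nn.functional as F

def masked_cross_entropy(student_logits, teacher_logits, conf_threshold=0.9):
    with torch.no_grad():
        teacher_probs = F.softmax(teacher_logits, dim=1)
        confidence, target = teacher_probs.max(dim=1)      # [B, H, W]
        mask = (confidence >= conf_threshold).float()      # c(omega)

    pixel_ce = F.cross_entropy(student_logits, target, reduction="none")
    # Average only over confident pixels; clamp avoids division by zero
    # when no pixel passes the threshold.
    return (pixel_ce * mask).sum() / mask.sum().clamp(min=1.0)
```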