2021
DOI: 10.48550/arxiv.2111.12903
Preprint

Perturbed and Strict Mean Teachers for Semi-supervised Semantic Segmentation

Abstract: Consistency learning using input image, feature, or network perturbations has shown remarkable results in semi-supervised semantic segmentation, but this approach can be seriously affected by inaccurate predictions of unlabelled training images. These inaccurate predictions have two consequences: 1) training based on the "strict" cross-entropy (CE) loss can easily overfit prediction mistakes, leading to confirmation bias; and 2) the perturbations applied to these inaccurate predictions will use pote…
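The failure mode the abstract describes, a strict CE loss overfitting wrong pseudo-labels, is commonly guarded against by masking out low-confidence teacher predictions before computing the loss. The sketch below illustrates that masking arithmetic only; the function name, threshold value, and exact loss form are assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def consistency_ce(student_probs, teacher_probs, threshold=0.9):
    """Cross-entropy on teacher pseudo-labels, masked by confidence.

    Illustrative sketch: pixels whose teacher confidence falls below
    `threshold` are excluded, a common guard against confirmation bias.
    """
    conf = teacher_probs.max(axis=1)            # teacher confidence per pixel
    pseudo = teacher_probs.argmax(axis=1)       # hard pseudo-labels
    mask = conf >= threshold                    # keep only confident pixels
    if not mask.any():
        return 0.0
    picked = student_probs[mask, pseudo[mask]]  # student prob. of pseudo-label
    return float(-np.log(picked + 1e-8).mean())

# Two "pixels": only the first passes the 0.9 confidence threshold.
student = np.array([[0.8, 0.2], [0.5, 0.5]])
teacher = np.array([[0.95, 0.05], [0.6, 0.4]])
print(consistency_ce(student, teacher))
```

Only the confident pixel contributes, so a wrong but uncertain teacher prediction cannot push the student toward the mistake.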

Cited by 2 publications (2 citation statements)
References 40 publications (96 reference statements)
“…In the context of semantic segmentation, CutMix-Seg [29] and PseudoSeg [44] apply perturbations on inputs and hope the decision boundary lies in the low-density region. CPS [13] and GCT [12] enforce consistency between two perturbed networks [45]. These perturbations [46,47,48], however, are either inapplicable or only yield sub-par results in 3D.…”
Section: SSL in 2D
Confidence: 99%
“…The different views of consistency learning methods can be obtained via data augmentation [3] or from the outputs of differently initialized networks [2,4,5]. Mean teacher (MT) [4,6,7,8,9] combines these two perturbations and averages the network parameters during training, yielding reliable pseudo-labels for the unlabelled data. However, the domain-specific transfer [3] of the teacher-student scheme can cause both networks to converge to a quite similar local minimum, reducing the network perturbation effectiveness.…”
Section: Introduction
Confidence: 99%
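The parameter averaging the excerpt attributes to the mean teacher is an exponential moving average (EMA) of the student's weights into the teacher. A minimal sketch, assuming a flat dict of weight arrays and an illustrative decay `alpha` (not the paper's configuration):

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """One mean-teacher step: move teacher weights toward the student.

    teacher_t = alpha * teacher_{t-1} + (1 - alpha) * student_t
    Names and the dict-of-arrays layout are illustrative assumptions.
    """
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k]
            for k in teacher}

student = {"w": np.array([1.0, 2.0])}
teacher = {"w": np.array([0.0, 0.0])}
teacher = ema_update(teacher, student, alpha=0.9)
print(teacher["w"])  # each teacher weight moves 10% toward the student
```

Because the teacher is a slow average rather than a fresh copy of the student, its pseudo-labels are smoother across training steps, which is what makes them usable as targets for the unlabelled data.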