2022
DOI: 10.48550/arxiv.2204.08808
Preprint

SePiCo: Semantic-Guided Pixel Contrast for Domain Adaptive Semantic Segmentation

Abstract: Domain adaptive semantic segmentation attempts to make satisfactory dense predictions on an unlabeled target domain by utilizing the supervised model trained on a labeled source domain. One popular solution is self-training, which retrains the model with pseudo labels on target instances. Plenty of methods tend to alleviate noisy pseudo labels; however, they ignore intrinsic connections among cross-domain pixels with similar semantic concepts. In consequence, they would struggle to deal with the semantic variations…
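
The core idea named in the title, semantic-guided pixel contrast, can be illustrated with a short sketch: pixel embeddings are compared against per-class prototype features through an InfoNCE-style loss, so pixels that share a semantic concept are pulled together across domains while other classes are pushed apart. The snippet below is a minimal sketch of that general idea, not the authors' released implementation; the names pixel_contrast_loss and tau, and the prototype bookkeeping, are illustrative assumptions.

import torch
import torch.nn.functional as F

def pixel_contrast_loss(features, labels, prototypes, tau=0.1, ignore_index=255):
    # features:   (N, C, H, W) pixel embeddings from the segmentation network
    # labels:     (N, H, W) semantic labels (source ground truth or target pseudo-labels)
    # prototypes: (K, C) one running-mean feature vector per class (assumed bookkeeping)
    n, c, h, w = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, c)   # flatten to (N*H*W, C)
    labs = labels.reshape(-1)
    keep = labs != ignore_index                           # drop unlabeled pixels
    feats, labs = feats[keep], labs[keep]

    feats = F.normalize(feats, dim=1)                     # cosine-similarity space
    protos = F.normalize(prototypes, dim=1)
    logits = feats @ protos.t() / tau                     # (P, K) pixel-to-class similarities

    # Cross-entropy over prototypes is InfoNCE with the true-class prototype
    # as the positive and every other class prototype as a negative.
    return F.cross_entropy(logits, labs)

In practice the prototypes would be updated as an exponential moving average of labeled source (and confidently pseudo-labeled target) features, so the loss ties pixels from both domains to the same class anchors.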

Cited by 8 publications (18 citation statements)
References 64 publications (165 reference statements)
“…DART (Shanis et al., 2019) and MT-UDA (Zhao et al., 2021b) combined self-training with adversarial learning in different ways, both achieving promising results. For imbalanced datasets, different denoising methods and sampling strategies have been proposed to improve the quality of pseudo-labels (Zhang et al., 2021; Hoyer et al., 2022; Xie et al., 2022). Similar to Chaitanya et al. (2020), recent self-training approaches (Xie et al., 2022; Zhang et al., 2022) incorporated CL, i.e. unsupervised contrastive domain adaptation, to align cross-domain features by sampling or merging contrastive feature embeddings across categories.…”
Section: Unsupervised Domain Adaptation (citation type: mentioning)
Confidence: 99%
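
One widely used denoising step of the kind mentioned in this snippet is confidence thresholding: a target pixel keeps its pseudo-label only when the model's softmax confidence is high enough, and is ignored otherwise. A hedged sketch follows, assuming a single global threshold; the denoising and sampling strategies cited above are typically more elaborate (e.g., class-wise thresholds or prototype-based correction).

import torch

@torch.no_grad()
def make_pseudo_labels(logits, threshold=0.9, ignore_index=255):
    # logits: (N, K, H, W) raw predictions on unlabeled target images
    probs = torch.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)           # per-pixel confidence and argmax class
    pseudo[conf < threshold] = ignore_index   # low-confidence pixels contribute no loss
    return pseudo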
“…Nowadays, many research works have demonstrated that domain adaptation is a powerful means of addressing the above issues [23,37,44,60,63]. Among them, unsupervised domain adaptation (UDA) [26,28,33,41,69,77] aims to solve this problem by leveraging the knowledge of label-rich data (source data) and transferring it to unlabeled data (target data) [52]. While this avoids the intensive workload of manual annotation, performance still lags far behind fully supervised models [56].…”
Section: SSDA (V2) (citation type: mentioning)
Confidence: 99%
“…Domain adaptation has been thoroughly studied to improve model generalization to unseen domains, e.g., adapting to the real world from synthetic data collections [81], [82]. The two predominant categories of unsupervised domain adaptation are self-training [83], [84], [85], [86], [87], [88] and adversarial learning [89], [90], [91]. Self-training methods usually generate pseudo-labels and gradually adapt through iterative improvement [92], whereas adversarial solutions build on the idea of GANs [93] to conduct image translation [89], [94], or to enforce alignment in layout matching [95], [96] and feature agreement [3], [16].…”
Section: Unsupervised Domain Adaptation (citation type: mentioning)
Confidence: 99%
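
The adversarial branch of this taxonomy can be summarized in a few lines: a small domain discriminator learns to separate source features from target features, while the segmentation network is trained to fool it, aligning the two feature distributions. The sketch below is a generic GAN-style alignment under assumed shapes and names (DomainDiscriminator, adversarial_losses), not any specific cited method.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDiscriminator(nn.Module):
    # Per-location classifier: predicts whether a feature map location
    # comes from the source domain (label 1) or the target domain (label 0).
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 256, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 1, 3, padding=1),
        )

    def forward(self, feats):
        return self.net(feats)  # (N, 1, H, W) domain logits

def adversarial_losses(disc, src_feats, tgt_feats):
    # Discriminator step: features detached so only the discriminator updates.
    src_logits = disc(src_feats.detach())
    tgt_logits = disc(tgt_feats.detach())
    d_loss = (F.binary_cross_entropy_with_logits(src_logits, torch.ones_like(src_logits))
              + F.binary_cross_entropy_with_logits(tgt_logits, torch.zeros_like(tgt_logits)))

    # Segmentation-network step: push target features to look source-like.
    fool_logits = disc(tgt_feats)
    g_loss = F.binary_cross_entropy_with_logits(fool_logits, torch.ones_like(fool_logits))
    return d_loss, g_loss

In a full pipeline, d_loss updates only the discriminator while g_loss is added to the segmentation objective, so the feature extractor gradually produces domain-invariant features.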