2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021
DOI: 10.1109/iccv48922.2021.00840

Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation

Abstract: Domain adaptation for semantic segmentation aims to improve model performance in the presence of a distribution shift between the source and target domains. Leveraging supervision from auxiliary tasks (such as depth estimation) has the potential to heal this shift, because many visual tasks are closely related to each other. However, such supervision is not always available. In this work, we leverage the guidance from self-supervised depth estimation, which is available on both domains, to bridge the domai…


Cited by 95 publications (56 citation statements)
References 44 publications
“…Zhang et al. [25] propose to refine pseudo-labels by making use of prototypes. Wang et al. [26] find that explicitly learning the relationship between the main task and a self-supervised auxiliary task can help improve DA performance. Mei et al. [27] find it beneficial for DA to integrate adversarial alignment with self-training.…”
Section: B. Domain Adaptation (mentioning)
Confidence: 99%
“…In addition, many works opt for a stage-wise training mechanism to avoid training-error amplification in a single-stage model, which relies heavily on a warm-up stage to increase the reliability of the generated pseudo-labels. Beyond this, several methods combine adversarial training with self-training [37], [38] or train with auxiliary tasks [39], [40] to learn discriminative representations from unlabeled target data. Contrastive learning is a related topic, which learns proper visual representations by comparing different unlabeled data [41], [42], [43], [44].…”
Section: Memory Bank (mentioning)
Confidence: 99%
“…Guizilini et al. (2021) utilize multi-task learning of semantic segmentation and SDE to learn a more domain-invariant representation. Instead of applying the view-synthesis loss from SDE directly, Wang et al. (2021) use depth pseudo-labels from an SDE teacher network to learn depth estimation and semantic segmentation in a multi-tasking framework. To better transfer knowledge between both domains and tasks, the correlation of depth and semantic segmentation features is explicitly transferred from the source to the target domain, and the depth adaptation difficulty is transferred to semantic segmentation to weigh the trust in the semantic segmentation pseudo-labels.…”
Section: Auxiliary Depth Estimation For Domain Adaptation (mentioning)
Confidence: 99%