2018
DOI: 10.1007/978-3-030-01228-1_32
DCAN: Dual Channel-Wise Alignment Networks for Unsupervised Scene Adaptation

Abstract: Harvesting dense pixel-level annotations to train deep neural networks for semantic segmentation is extremely expensive and unwieldy at scale. While learning from synthetic data where labels are readily available sounds promising, performance degrades significantly when testing on novel realistic data due to domain discrepancies. We present Dual Channel-wise Alignment Networks (DCAN), a simple yet effective approach to reduce domain shift at both pixel-level and feature-level. Exploring statistics in each chan…
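The abstract is cut off, but the stated idea of exploiting per-channel statistics for pixel- and feature-level alignment can be illustrated with a minimal, assumption-based sketch: re-normalizing a source feature map so its per-channel mean and standard deviation match those of a target feature map (an AdaIN-style operation; the function name and the exact statistics used are assumptions, not the paper's published formulation).

```python
# Assumption-based sketch of channel-wise statistics alignment (AdaIN-style):
# re-normalize a source feature map so each channel's mean/std matches the
# target feature map. Not the paper's exact DCAN formulation.
import torch

def align_channel_stats(source_feat: torch.Tensor,
                        target_feat: torch.Tensor,
                        eps: float = 1e-5) -> torch.Tensor:
    """source_feat, target_feat: (N, C, H, W) tensors taken from the same layer."""
    src_mean = source_feat.mean(dim=(2, 3), keepdim=True)
    src_std = source_feat.std(dim=(2, 3), keepdim=True) + eps
    tgt_mean = target_feat.mean(dim=(2, 3), keepdim=True)
    tgt_std = target_feat.std(dim=(2, 3), keepdim=True) + eps
    return (source_feat - src_mean) / src_std * tgt_std + tgt_mean
```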

Cited by 237 publications (191 citation statements)
References 52 publications (99 reference statements)
“…Recently, domain adaptation for semantic segmentation has made good progress by separating it into two sequential steps. It first translates images from the source domain to the target domain with an image-to-image translation model (e.g., CycleGAN [38]) and then adds a discriminator on top of the features of the segmentation model to further decrease the domain gap [12,36]. When the domain gap is reduced by the former step, the latter is easier to learn and can further decrease the domain shift.…”
Section: Introduction
confidence: 99%
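As a rough illustration of the two-step recipe quoted above (translate source images toward the target style, then adversarially align segmentation features), the sketch below shows only the feature-level half: a small discriminator on top of segmentation features and the adversarial term used to fool it. The architecture, channel sizes, and loss are assumptions for illustration, not the implementation of [12,36]; the image-translation step (e.g., CycleGAN) is treated as a black box.

```python
# Minimal sketch (assumptions, not any cited paper's code): a feature-level
# discriminator placed on top of segmentation features, trained adversarially
# so that target-domain features become indistinguishable from source features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDiscriminator(nn.Module):
    """Predicts, per spatial location, whether a feature map is source or target."""
    def __init__(self, in_channels: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 128, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, kernel_size=3, padding=1),  # domain logit map
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.net(feat)

def adversarial_feature_loss(disc: FeatureDiscriminator,
                             target_feat: torch.Tensor) -> torch.Tensor:
    """Term for the segmentation network: make target features look like source
    (label = 1) to the frozen discriminator."""
    logits = disc(target_feat)
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
```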
“…In this paper, we propose a new bidirectional learning framework for domain adaptation of image semantic segmentation. The system involves two separate modules: an image-to-image translation model and a segmentation adaptation model, similar to [12,36], but the learning process involves two directions (i.e., "translation-to-segmentation" and "segmentation-to-translation"). The whole system forms a closed learning loop.…”
Section: Introduction
confidence: 99%
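A minimal sketch of the closed-loop schedule described in this statement, using hypothetical stand-in modules (single convolutions in place of the real translation and segmentation networks) and assumed loss terms; it only illustrates the alternation between the "translation-to-segmentation" and "segmentation-to-translation" directions, not the actual bidirectional learning method.

```python
# Hypothetical sketch of a closed-loop (bidirectional) training schedule.
# The translator and segmenter are toy stand-ins; loss terms are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

translator = nn.Conv2d(3, 3, kernel_size=1)   # stand-in image-to-image translation model
segmenter = nn.Conv2d(3, 19, kernel_size=1)   # stand-in segmentation adaptation model
opt_t = torch.optim.Adam(translator.parameters(), lr=1e-4)
opt_s = torch.optim.Adam(segmenter.parameters(), lr=1e-4)

source_images = torch.randn(2, 3, 64, 64)          # dummy source batch
source_labels = torch.randint(0, 19, (2, 64, 64))  # dummy pixel labels

for step in range(2):
    # Direction 1 ("translation-to-segmentation"): train the segmenter on
    # source images translated toward the target style.
    with torch.no_grad():
        translated = translator(source_images)
    loss_seg = F.cross_entropy(segmenter(translated), source_labels)
    opt_s.zero_grad()
    loss_seg.backward()
    opt_s.step()

    # Direction 2 ("segmentation-to-translation"): constrain the translator so
    # translated images keep the segmenter's predictions on the originals
    # (a perceptual-style consistency term; an assumption for this sketch).
    with torch.no_grad():
        pred_orig = segmenter(source_images)
    pred_trans = segmenter(translator(source_images))
    loss_trans = F.mse_loss(pred_trans, pred_orig)
    opt_t.zero_grad()
    loss_trans.backward()
    opt_t.step()
```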
“…We report experimental results of the proposed method on two adaptation experiments in Table 1. We compare our proposed method with Curriculum DA [51], CyCADA [14], MCD [39], LSD-seg [41], AdaptSegNet [44], ROAD [5], Conservative Loss [55], DCAN [47], and CBST [57]. In Table 1, Self-Ensembling (SE) denotes the segmentation performance of the network trained on source and target data through self-ensembling, without our data augmentation method.…”
Section: Results
confidence: 99%