2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00107
SSF-DAN: Separated Semantic Feature Based Domain Adaptation Network for Semantic Segmentation

Cited by 169 publications (126 citation statements)
References 24 publications
“…Tables 1 and 2 summarize the semantic segmentation results of our model and compare them with the other UDA methods [9,8,10,18,17]. For a fair comparison, all the segmentation networks shown in the tables are built on ResNet-101.…”
Section: Results and Comparisons
confidence: 99%
“…Adversarial domain alignment has been conducted at the image level [5,6], the feature level [7,8], or the output level [9,10,11]. Other techniques, such as pseudo-label re-training [12,13], curriculum learning [14,15], and source-data selection [16], have also been exploited to reduce the cross-domain gap.…”
Section: Introduction
confidence: 99%
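The excerpt above groups adversarial UDA methods by where alignment is applied. As a rough illustration of the feature-level variant only, the following is a minimal PyTorch sketch, not the SSF-DAN implementation or that of any cited work: a domain discriminator learns to tell source features from target features, and the feature extractor is updated to fool it. All module names, layer sizes, and hyper-parameters are illustrative assumptions.

```python
# Minimal sketch of feature-level adversarial domain alignment (illustrative only;
# not the SSF-DAN code). A discriminator separates source from target features,
# and the feature extractor is trained to make them indistinguishable.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Toy stand-in for a segmentation backbone (the cited works use ResNet-101)."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class DomainDiscriminator(nn.Module):
    """Patch-wise classifier predicting source (1) vs. target (0) from features."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),
        )
    def forward(self, f):
        return self.net(f)

F_net, D_net = FeatureExtractor(), DomainDiscriminator()
opt_F = torch.optim.Adam(F_net.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D_net.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

src_img = torch.randn(2, 3, 64, 64)   # labeled synthetic batch (e.g. GTA5)
tgt_img = torch.randn(2, 3, 64, 64)   # unlabeled real batch (e.g. Cityscapes)

# 1) Discriminator step: learn to separate source features from target features.
f_src, f_tgt = F_net(src_img).detach(), F_net(tgt_img).detach()
logit_src, logit_tgt = D_net(f_src), D_net(f_tgt)
d_loss = bce(logit_src, torch.ones_like(logit_src)) + \
         bce(logit_tgt, torch.zeros_like(logit_tgt))
opt_D.zero_grad(); d_loss.backward(); opt_D.step()

# 2) Extractor step: push target features toward the "source" decision
#    (the supervised segmentation loss on source labels is omitted for brevity).
adv_logit = D_net(F_net(tgt_img))
adv_loss = bce(adv_logit, torch.ones_like(adv_logit))
opt_F.zero_grad(); adv_loss.backward(); opt_F.step()
```

In practice the two steps alternate every iteration alongside the supervised source loss; output-level variants apply the same game to the softmax segmentation maps instead of intermediate features.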
“…Specifically, despite its simplicity, CCM outperforms the previous state-of-the-art adversarial-training (denoted as "AT") based method "SSF-DAN" [13] by +4.5% and +2.9% on GTA5 → Cityscapes and SYNTHIA → Cityscapes, respectively. Compared with methods based on self-training, CCM achieves comparable or even better results.…”
Section: Comparison With the State-of-the-Arts
confidence: 94%
“…Self-training has been exploited in various tasks such as semi-supervised learning [25,21], domain adaptation [38,58], and noisy-label learning [40,35]. [41,44,34,47,24,13] adopted adversarial training at the feature level to learn domain-invariant features and reduce the discrepancy across domains. [18,8,27] applied adversarial training at the image level to make features invariant to illumination, color, and other style factors.…”
Section: Related Work
confidence: 99%
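This excerpt also mentions self-training with pseudo-labels as a complementary strategy to adversarial alignment. As a hedged illustration, assuming a simple confidence-thresholded recipe rather than any cited paper's exact pipeline, the sketch below keeps only high-confidence target predictions as pseudo-labels and retrains on them; the 0.9 threshold, toy model, and class count are assumptions.

```python
# Minimal sketch of pseudo-label self-training for segmentation (illustrative only;
# the threshold, toy model, and class count are assumptions, not a cited recipe).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, IGNORE = 19, 255                     # e.g. the 19 Cityscapes classes
model = nn.Conv2d(3, NUM_CLASSES, 3, padding=1)   # toy stand-in for a segmentation net
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

tgt_img = torch.randn(2, 3, 64, 64)               # unlabeled target-domain batch

# 1) Generate pseudo-labels with the current model; keep only confident pixels.
with torch.no_grad():
    prob = F.softmax(model(tgt_img), dim=1)
    conf, pseudo = prob.max(dim=1)                # per-pixel confidence and class
    pseudo[conf < 0.9] = IGNORE                   # 0.9 is an assumed threshold

# 2) Retrain on the target batch using the pseudo-labels as supervision.
loss = F.cross_entropy(model(tgt_img), pseudo, ignore_index=IGNORE)
opt.zero_grad(); loss.backward(); opt.step()
```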