2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00404
Coarse-to-Fine Domain Adaptive Semantic Segmentation with Photometric Alignment and Category-Center Regularization

Cited by 52 publications (59 citation statements)
References 23 publications
“…To be specific, GTA5 and SYNTHIA share 19 and 16 common categories with Cityscapes, respectively. On SYNTHIA→Cityscapes, following [32], we consider two different testing protocols: applying all 16 common categories or just a subset consisting of 13 categories for evaluations. Note that we train the model on the…”
Section: Methods
Mentioning, confidence: 99%
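For context, the 16-class and 13-class protocols mentioned above amount to averaging per-class IoU over different subsets of the categories shared with Cityscapes. Below is a minimal sketch of that computation; the helper names and the example class indices are illustrative, not taken from the paper.

```python
import numpy as np

def per_class_iou(conf_matrix):
    """IoU per class from a confusion matrix (rows: ground truth, cols: prediction)."""
    tp = np.diag(conf_matrix)
    fp = conf_matrix.sum(axis=0) - tp
    fn = conf_matrix.sum(axis=1) - tp
    return tp / np.maximum(tp + fp + fn, 1)

def miou_over_subset(conf_matrix, class_ids):
    """Mean IoU restricted to a chosen subset of class indices."""
    return per_class_iou(conf_matrix)[class_ids].mean()

# Illustrative usage: a random confusion matrix over the 16 common classes;
# the 13-class protocol simply averages over a smaller subset of them.
rng = np.random.default_rng(0)
cm = rng.integers(0, 100, size=(16, 16)).astype(np.float64)
miou_16 = miou_over_subset(cm, list(range(16)))
miou_13 = miou_over_subset(cm, list(range(13)))  # placeholder subset; the real protocol drops three specific classes
print(f"16-class mIoU: {miou_16:.3f}, 13-class mIoU: {miou_13:.3f}")
```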
“…Given the past success of Convolutional Neural Networks (CNNs) on many computer vision (CV) tasks [18,29,36], plenty of works [1,6,15,20,21,31,32,43,47,51,54,56] resort to CNNs as the semantic segmentation function f_θ. Although these conventional CNN-based backbones obtained decent performance on various benchmarks, recent works (e.g.…”
Section: Introduction
Mentioning, confidence: 99%
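To make the notation concrete, f_θ above denotes a network that maps an image to per-pixel class scores. The following is a toy, hedged sketch of such a fully-convolutional segmentation function in PyTorch; the architecture is a stand-in for illustration, not that of any cited work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Toy fully-convolutional f_theta: image -> per-pixel class logits."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        h = self.backbone(x)
        logits = self.classifier(h)
        # Upsample back to the input resolution so every pixel gets a class score.
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

x = torch.randn(1, 3, 256, 512)   # dummy image batch
print(TinySegNet()(x).shape)      # torch.Size([1, 19, 256, 512])
```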
“…UDA approaches [3,6,12,17,20,21,23,30,33,41,44] aim at learning domain invariant representations by aligning the distributions of the two domains at feature/output level or at image level. Based on the observation that the source and the target domain share a similar semantic layout, [30,32] rely on adversarial training to align the raw output and entropy distributions respectively.…”
Section: Related Work
Mentioning, confidence: 99%
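As a rough illustration of the output- and entropy-level alignment mentioned above, the sketch below computes a per-pixel entropy map from segmentation logits; in the adversarial variants a discriminator is then trained to distinguish source maps from target maps. The function names are illustrative and this is only a sketch of the general idea, not the procedure of any particular cited work.

```python
import numpy as np

def softmax(logits, axis=0):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def entropy_map(logits):
    """Pixel-wise Shannon entropy of the predicted class distribution.

    logits: array of shape (C, H, W). Returns an (H, W) map that an
    output-level discriminator could try to tell apart between domains.
    """
    p = softmax(logits, axis=0)
    return -(p * np.log(p + 1e-12)).sum(axis=0)

# Illustrative usage on random logits for 19 classes.
logits = np.random.randn(19, 64, 128)
ent = entropy_map(logits)
print(ent.shape, ent.mean())   # (64, 128) and the average pixel entropy
```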
“…[6,12,17,38] rely on Cycle-GAN [43] to translate source domain images to the style of the target domain. Two recent works [20,39] bypass the need for training an image translation network by relying on simple Fourier transform and global photometric alignment respectively.…”
Section: Related Work
Mentioning, confidence: 99%
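The Fourier-based alternative referenced above can be pictured as swapping the low-frequency amplitude spectrum of a source image with that of a target image while keeping the source phase. The following is a simplified sketch under that reading; the beta parameter, clipping, and function name are assumptions, not the exact recipe of the cited works.

```python
import numpy as np

def fourier_style_transfer(src, tgt, beta=0.01):
    """Replace the low-frequency amplitude of a source image with the target's.

    src, tgt: float arrays of shape (H, W, C) in [0, 1]. beta controls the
    size of the swapped low-frequency square. Simplified illustration only.
    """
    src_fft = np.fft.fft2(src, axes=(0, 1))
    tgt_fft = np.fft.fft2(tgt, axes=(0, 1))
    src_amp, src_pha = np.abs(src_fft), np.angle(src_fft)
    tgt_amp = np.abs(tgt_fft)

    # Centre the spectra so the low frequencies sit in the middle.
    src_amp = np.fft.fftshift(src_amp, axes=(0, 1))
    tgt_amp = np.fft.fftshift(tgt_amp, axes=(0, 1))

    h, w = src.shape[:2]
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    src_amp[ch - b:ch + b + 1, cw - b:cw + b + 1] = tgt_amp[ch - b:ch + b + 1, cw - b:cw + b + 1]

    # Recombine the mixed amplitude with the original source phase.
    src_amp = np.fft.ifftshift(src_amp, axes=(0, 1))
    mixed = src_amp * np.exp(1j * src_pha)
    out = np.real(np.fft.ifft2(mixed, axes=(0, 1)))
    return np.clip(out, 0.0, 1.0)

# Illustrative usage with random "images".
src = np.random.rand(128, 256, 3)
tgt = np.random.rand(128, 256, 3)
print(fourier_style_transfer(src, tgt).shape)
```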
“…UDA aims at transferring the knowledge from a labeled source domain to an unlabeled target domain. Existing UDA approaches can be roughly divided into two categories, i.e., aligning domain distributions through adversarial learning [34,36,24,8] and self-training on the target domain [43,39,25,41]. Domain generalization (DG) focuses on training a robust model with synthetic data, which can generalize well on unseen real-world target data.…”
Section: Related Work
Mentioning, confidence: 99%
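The self-training line of work mentioned above typically assigns pseudo-labels to confidently predicted target-domain pixels and retrains on them. Below is a minimal sketch of that pseudo-labeling step; the 0.9 threshold and the ignore_index convention are illustrative assumptions rather than settings from any cited paper.

```python
import numpy as np

def pseudo_labels(target_probs, threshold=0.9, ignore_index=255):
    """Generate pseudo-labels for self-training on an unlabeled target image.

    target_probs: softmax predictions of shape (C, H, W). Pixels whose most
    confident class falls below the threshold are marked with ignore_index
    and excluded from the self-training loss.
    """
    conf = target_probs.max(axis=0)
    labels = target_probs.argmax(axis=0)
    labels[conf < threshold] = ignore_index
    return labels

# Illustrative usage on random predictions over 19 classes.
probs = np.random.dirichlet(np.ones(19), size=(64, 128)).transpose(2, 0, 1)
pl = pseudo_labels(probs)
kept = (pl != 255).mean()
print(f"fraction of pixels kept for self-training: {kept:.2%}")
```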