2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01141
RobustNet: Improving Domain Generalization in Urban-Scene Segmentation via Instance Selective Whitening

Cited by 179 publications (139 citation statements)
References 56 publications
“…Instead of enforcing the entire features to be invariant, disentangled feature learning approaches (Chattopadhyay et al (2020); Piratla et al (2020)) decouple the features into domain-specific and domain-invariant parts and learn their representations simultaneously. In addition, normalization-based methods (Pan et al (2018); Choi et al (2021)) can also be used to remove the style information to obtain invariant representations.…”
Section: Invariant Representation Learning (mentioning)
confidence: 99%
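As a rough illustration of the normalization-based idea in the statement above, here is a minimal sketch (assumed PyTorch-style code, not the implementation of Pan et al or Choi et al) of removing per-image channel statistics, which act as a coarse proxy for style:

```python
# Hypothetical sketch of normalization-based style removal; shapes and names
# are illustrative assumptions, not the cited authors' code.
import torch


def instance_standardize(feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Remove per-image, per-channel statistics (a coarse proxy for 'style').

    feat: feature map of shape (N, C, H, W).
    """
    mean = feat.mean(dim=(2, 3), keepdim=True)                 # per-image channel mean
    std = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()   # per-image channel std
    return (feat - mean) / std                                 # style-normalized features


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    y = instance_standardize(x)
    print(y.mean(dim=(2, 3)).abs().max())  # ~0: channel-wise style statistics removed
```

Whitening-based variants such as the instance selective whitening of the RobustNet paper go further by also suppressing style-sensitive cross-channel correlations; the sketch above only standardizes each channel independently.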
“…This assumption, however, does not hold in many real-world applications. For instance, when employing segmentation models trained on sunny days in rainy and foggy environments (Choi et al, 2021), or recognizing art paintings with models trained on photographs (Li et al, 2017), an inevitable performance drop can often be observed in such out-of-distribution deployment scenarios. Therefore, the problem of domain generalization, aiming to improve the robustness of the network on various unseen testing domains, becomes quite important.…”
Section: Introduction (mentioning)
confidence: 99%
“…The aim of DG methods [25], [26], [27], [28] is to improve DNN performance in an unknown target domain using data from (several different) source domains. For semantic segmentation, several approaches have been proposed [25], [14], [29], e.g., Yue et al [14] mix the style of synthetic images with real images, using auxiliary source domain datasets, thereby learning more domain-invariant features. While our CBNA method for continual source-free UDA is applied after pre-training and using only the pre-trained model and target domain data, DG is applied during pre-training on source data (usually with labels) and without target data.…”
Section: A. Domain Generalization (DG) (mentioning)
confidence: 99%
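For intuition on the style-mixing strategy attributed to Yue et al [14] above, the following is a hedged sketch of feature-level style mixing via interpolated channel statistics; it is an AdaIN-style stand-in for illustration, not the cited method, which transfers styles at the image level using auxiliary datasets:

```python
# Hypothetical sketch of feature-level style mixing by interpolating channel-wise
# statistics toward a style source; an illustrative assumption only.
import torch


def mix_style(content: torch.Tensor, style: torch.Tensor,
              alpha: float = 0.5, eps: float = 1e-5) -> torch.Tensor:
    """Re-normalize `content` features with statistics shifted toward `style`.

    Both tensors have shape (N, C, H, W); alpha=0 keeps the original style.
    """
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.var(dim=(2, 3), keepdim=True).add(eps).sqrt()

    mixed_mean = (1 - alpha) * c_mean + alpha * s_mean
    mixed_std = (1 - alpha) * c_std + alpha * s_std
    return (content - c_mean) / c_std * mixed_std + mixed_mean


if __name__ == "__main__":
    synthetic_feat, real_feat = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
    print(mix_style(synthetic_feat, real_feat, alpha=0.5).shape)  # (2, 64, 32, 32)
```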
“…To reduce the large gap between synthetic data and real-world data, DG methods usually augment the synthetic samples [38,12] with the styles of ImageNet [7] or conditionally align the outputs [2,3] between the segmentation model and the ImageNet pre-trained model. On the other hand, some works propose to learn domain-invariant features by removing domain-specific information [5,27] or by feature augmentation [32]. Recently, Liu et al [22] …”
Section: Related Work (mentioning)
confidence: 99%
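The alignment with an ImageNet pre-trained model mentioned above can be pictured with the sketch below; the loss form, the pooled feature layer, and the ResNet-18 backbone are illustrative assumptions rather than the cited papers' implementations:

```python
# Hypothetical sketch of aligning a trainable segmentation backbone with a frozen
# ImageNet-pretrained reference as a regularizer; illustrative assumption only.
import torch
import torch.nn.functional as F
import torchvision


def alignment_loss(trainable_backbone: torch.nn.Module,
                   frozen_reference: torch.nn.Module,
                   images: torch.Tensor) -> torch.Tensor:
    """Penalize drift of the trainable backbone's features from the frozen reference."""
    with torch.no_grad():
        ref_feat = frozen_reference(images)   # reference features, no gradients
    feat = trainable_backbone(images)
    return F.mse_loss(feat, ref_feat)


if __name__ == "__main__":
    # weights=None keeps this example offline (recent torchvision API); in practice
    # both copies would start from ImageNet weights and only the reference stays frozen.
    reference = torchvision.models.resnet18(weights=None)
    backbone = torchvision.models.resnet18(weights=None)
    reference.fc = torch.nn.Identity()
    backbone.fc = torch.nn.Identity()
    for p in reference.parameters():
        p.requires_grad_(False)

    x = torch.randn(2, 3, 224, 224)
    print(alignment_loss(backbone, reference, x).item())
```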