2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00858
Reducing Domain Gap by Reducing Style Bias

Cited by 197 publications (104 citation statements)
References 24 publications
“…Compared with the state-of-the-art methods, MIRO achieves the best performance on all benchmarks except PACS. In particular, MIRO markedly outperforms the previous state of the art: +1.3pp on OfficeHome (mDSDI [59]; 69.2% → 70.5%) and +1.8pp on TerraIncognita (SagNet [29]; 48.6% → 50.4%). Considering the experimental setup with 5 datasets and 22 target domains, these results demonstrate the effectiveness of MIRO across diverse visual data types.…”
Section: Results
confidence: 89%
“…The model trained on multiple source domains is evaluated on an unseen domain (e.g., art painting) to measure robustness against distribution shifts. Existing DG approaches have tried to learn invariant features across multiple domains by minimizing feature divergences between the source domains [10-16], normalizing domain-specific gradients based on meta-learning [17-21], robust optimization [22-25], or augmenting source-domain examples [26-32]. However, recent studies [33, 34] have shown that, under a fair hyperparameter-selection protocol, simple baselines that do not learn invariant features are comparable to or even outperform existing DG methods on diverse DG benchmarks once the model becomes larger (e.g., from ResNet-18 to ResNet-50 [35]).…”
Section: Introduction
confidence: 99%
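The feature-divergence family named in this excerpt is straightforward to illustrate. Below is a minimal PyTorch sketch of a CORAL-style alignment penalty between features from two source domains; the helper name coral_penalty and the pairwise weighting scheme are illustrative assumptions, not taken from the cited papers.

    import torch

    def coral_penalty(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Illustrative CORAL-style divergence: match the first and second
        # moments of features from two source domains (feat_*: [N, D]).
        mean_diff = (feat_a.mean(0) - feat_b.mean(0)).pow(2).mean()
        cov_diff = (torch.cov(feat_a.T) - torch.cov(feat_b.T)).pow(2).mean()
        return mean_diff + cov_diff

    # Sketch of a training objective over n_domains source domains:
    # loss = task_loss + lambda_align * sum(
    #     coral_penalty(feats[i], feats[j])
    #     for i in range(n_domains) for j in range(i + 1, n_domains))

Minimizing such a penalty pushes per-domain feature statistics together, which is the "invariant features" objective that the quoted passage contrasts with simple, well-tuned baselines.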
“…The main idea in DB is to utilize known biases (or identify unknown biases) in the data distribution, model these biases in the training pipeline, and use this knowledge to train robust classifiers (Clark et al., 2019; Bhargava et al., 2021). In the image classification literature, there is growing consensus on enforcing consistency across different views (or augmentations) of an image in order to achieve debiasing (Hendrycks et al., 2020c; Xu et al., 2020; Chai et al., 2021; Nam et al., 2021). Unlike DF, model de-biasing does not directly alter the training distribution, but instead allows the model to learn which biases to ignore.…”
Section: Categorization of Domain Generalization Methods
confidence: 99%
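The augmentation-consistency idea described in this excerpt can be sketched in a few lines. The following PyTorch example penalizes disagreement between predictions on two augmented views of the same batch; a symmetric KL divergence is one common choice, though the cited works differ in the exact consistency term they use.

    import torch
    import torch.nn.functional as F

    def consistency_loss(logits_v1: torch.Tensor, logits_v2: torch.Tensor) -> torch.Tensor:
        # Symmetric KL between the predictive distributions on two augmented
        # views, encouraging the model to ignore view-specific (style) cues.
        log_p1 = F.log_softmax(logits_v1, dim=1)
        log_p2 = F.log_softmax(logits_v2, dim=1)
        kl_12 = F.kl_div(log_p1, log_p2, reduction="batchmean", log_target=True)
        kl_21 = F.kl_div(log_p2, log_p1, reduction="batchmean", log_target=True)
        return 0.5 * (kl_12 + kl_21)

    # total = cross_entropy(logits_v1, labels) + beta * consistency_loss(logits_v1, logits_v2)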
“…
• SagNet: Style Agnostic Networks by Nam et al. [36]
• ARM: Adaptive Risk Minimization by Zhang et al. [37]
• VREx: Variance Risk Extrapolation by Krueger et al. [15]
• RSC: Representation Self-Challenging by Huang et al. [38]
• SD: Spectral Decoupling by Pezeshki et al. [39]
• AND-mask: Learning Explanations that are Hard to Vary by Parascandolo et al. [12]

Employed Architecture: In Table 5, we detail the architecture used for experimentation. For the MLP architecture, its depth and width are defined as hyperparameters included in the hyperparameter search.…”
Section: Condition
confidence: 99%
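Since the excerpt notes that the MLP's depth and width are searched as hyperparameters, a small sketch of such a configurable network may help. The helper build_mlp and the sampled ranges below are hypothetical; the quoted paper's Table 5 (not reproduced here) defines the actual architecture.

    import torch.nn as nn

    def build_mlp(in_dim: int, n_classes: int, width: int, depth: int) -> nn.Sequential:
        # Hypothetical MLP whose width and depth are tunable hyperparameters.
        layers, dim = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(dim, width), nn.ReLU()]
            dim = width
        layers.append(nn.Linear(dim, n_classes))
        return nn.Sequential(*layers)

    # e.g., a search might sample width from {64, 256, 1024} and depth from {1, 2, 3}:
    model = build_mlp(in_dim=2048, n_classes=7, width=256, depth=2)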