2021
DOI: 10.48550/arxiv.2109.01902
Preprint

Barycentric-alignment and invertibility for domain generalization

Abstract: For the Domain Generalization (DG) problem, where hypotheses are composed of a common representation function followed by a labeling function, we point out a shortcoming of existing approaches: they fail to explicitly optimize a representation-dependent term that appears in a well-known and widely adopted upper bound on the risk of the unseen domain. To this end, we first derive a novel upper bound to the prediction risk. We show that imposing a mild assumption on the represen…
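For context, the "well-known and widely adopted upper bound" referenced in the abstract is presumably the classic Ben-David et al. domain adaptation bound, sketched below in standard notation; this is background, not the paper's new bound. Here ε_S and ε_T are the source- and target-domain risks of a hypothesis h, d_{HΔH} is the HΔH-divergence between the two domain distributions, and λ is the risk of the ideal joint hypothesis:

```latex
% Classic Ben-David-style bound (background sketch, not the paper's refined bound):
\epsilon_T(h) \;\le\; \epsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}\!\left(\mathcal{D}_S, \mathcal{D}_T\right)
  \;+\; \lambda,
\qquad
\lambda \;=\; \min_{h' \in \mathcal{H}} \big[\, \epsilon_S(h') + \epsilon_T(h') \,\big].
```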

Cited by 1 publication (1 citation statement) · References 25 publications
“…(1) Domain Alignment: Aligning domain distributions and finding invariance between domains has often been studied, with both empirical results and theoretical proofs [20,28]. Specifically, researchers explicitly align feature distributions using the maximum mean discrepancy (MMD) [56,79,82], second-order correlation [75,76,59], moment matching [58], Wasserstein distance [95,50], etc. Besides aligning distributions in feature space, Arjovsky et al. [3] propose IRM to learn an ideal invariant classifier on top of the representation space.…”
Section: DomainBed Results (mentioning)
confidence: 99%
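To make the MMD-based alignment mentioned in the citation statement concrete, here is a minimal sketch (not any cited paper's exact implementation) of the biased squared-MMD estimator with an RBF kernel between feature batches from two domains; in practice this quantity is added to the task loss as an alignment penalty. All function names and the bandwidth gamma are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of X and Y,
    # mapped through the Gaussian (RBF) kernel exp(-gamma * ||x - y||^2).
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    # Biased estimator of squared MMD between samples X ~ P and Y ~ Q:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# Toy usage: feature batches from two domains drawn from shifted Gaussians.
rng = np.random.default_rng(0)
feats_a = rng.normal(0.0, 1.0, size=(128, 16))
feats_b = rng.normal(0.5, 1.0, size=(128, 16))
print(f"MMD^2 between domains: {mmd2(feats_a, feats_b):.4f}")
```

A shift in either domain's feature distribution increases the estimate, which is why minimizing it alongside the task loss pushes the learned representation toward domain invariance.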