Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/628

Generalizing to Unseen Domains: A Survey on Domain Generalization

Abstract: Domain generalization (DG), i.e., out-of-distribution generalization, has attracted increasing interest in recent years. Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain. Great progress has been achieved over the years. This paper presents the first review of recent advances in domain generalization. First, we provide a formal definition of domain generalization and d…

Cited by 195 publications (50 citation statements)
References 123 publications
“…Even though meta-learning is a general learning framework, it has recently been applied to Domain Generalization tasks [10], [25], [26]. For more details, we refer the reader to the recent surveys on Domain Generalization in [27] and [28].…”
Section: A. Domain Generalization
confidence: 99%
“…Theorem 1 (Average risk estimation error bound for binary classification [10], [12]). Assume that the loss function is L-Lipschitz in its first argument and is bounded by B.…”
Section: Error Bounds for MDG
confidence: 99%
“…The simplest solution to this problem is the vanilla empirical risk minimization (ERM) [11], which minimizes an average loss computed on data pooled together from all available source domains. The inability of this approach to exploit statistical discrepancies between domains has motivated the design of multi-domain learning techniques [12]. However, recently Gulrajani et al [13] reported that a powerful feature extractor coupled with effective model selection can make ERMs highly competitive on standard benchmarks.…”
Section: Introduction
confidence: 99%
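The vanilla ERM baseline quoted above simply pools the examples from all available source domains and minimizes one average loss over the pooled set. A minimal sketch of that idea, using a logistic-regression model trained by gradient descent (the toy domains, model, and hyperparameters here are illustrative assumptions, not taken from the cited works):

```python
import numpy as np

def erm_train(domains, lr=0.1, epochs=200):
    """Vanilla ERM for domain generalization: pool all source domains
    and minimize the average logistic loss with one linear model."""
    # Pool data from every source domain into a single training set.
    X = np.vstack([Xd for Xd, _ in domains])
    y = np.concatenate([yd for _, yd in domains])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)         # gradient of the average loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two toy "source domains": the labeling rule (x0 > x1) is shared,
# but the input distribution is shifted between domains.
rng = np.random.default_rng(0)
def make_domain(shift, n=200):
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] > X[:, 1]).astype(float)
    return X, y

domains = [make_domain(0.0), make_domain(3.0)]
w, b = erm_train(domains)
X_all = np.vstack([d[0] for d in domains])
y_all = np.concatenate([d[1] for d in domains])
acc = np.mean(((X_all @ w + b) > 0) == y_all.astype(bool))
print(f"pooled training accuracy: {acc:.2f}")
```

As the quoted passage notes, this baseline ignores the domain labels entirely; its surprising competitiveness under careful model selection is exactly the observation attributed to Gulrajani et al. [13].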
“…Researchers have proposed many domain adaptation methods to address the above-mentioned problem [19,20,21,22,23]. CycleGAN [13] is a representative type of pixel-level domain adaptation method, which transforms the source domain data in the original pixel space into a style in the target domain, capturing pixel-level and low-level domain shifts.…”
Section: Introduction
confidence: 99%
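The pixel-level translation described in this excerpt is trained in CycleGAN with, among other terms, a cycle-consistency loss: mapping an image to the other domain and back should recover the original. A sketch of just that loss term, with trivial stand-in "generators" instead of real networks (the generator definitions and shapes below are illustrative assumptions):

```python
import numpy as np

def cycle_consistency_loss(G, F, x_batch, y_batch):
    """L1 cycle-consistency loss, CycleGAN-style: with generators
    G: X -> Y and F: Y -> X, both round trips should be identity-like."""
    forward = np.mean(np.abs(F(G(x_batch)) - x_batch))   # x -> G(x) -> F(G(x))
    backward = np.mean(np.abs(G(F(y_batch)) - y_batch))  # y -> F(y) -> G(F(y))
    return forward + backward

# Toy "generators": style transfer modeled as an invertible brightness shift.
G = lambda x: x + 0.5   # source style -> target style
F = lambda y: y - 0.5   # target style -> source style

rng = np.random.default_rng(1)
x = rng.random((4, 8, 8))  # stand-in source-domain images
y = rng.random((4, 8, 8))  # stand-in target-domain images
loss = cycle_consistency_loss(G, F, x, y)
print(f"cycle loss: {loss:.2e}")  # ~0: generators are inverses up to float rounding
```

In actual CycleGAN training this term is combined with adversarial losses on both domains; here only the cycle term is shown to make the "translate and recover" constraint concrete.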