2020
DOI: 10.1016/j.patcog.2019.107124

Correlation-aware adversarial domain adaptation and generalization

Abstract: Domain adaptation (DA) and domain generalization (DG) have emerged as solutions to the domain shift problem, where the distributions of the source and target data differ. The task of DG is more challenging than DA because the target data is entirely unseen during the training phase in DG scenarios. The current state of the art employs adversarial techniques; however, these are rarely considered for the DG problem. Furthermore, these approaches do not consider correlation alignment, which has been proven highly b…
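To ground the abstract's central ingredient: correlation alignment (CORAL) matches the second-order statistics of source and target features. Below is a minimal PyTorch sketch of a CORAL-style loss, assuming plain (batch, dim) feature matrices; it illustrates the general technique, not this paper's actual implementation.

```python
import torch

def coral_loss(source, target):
    """Correlation alignment (CORAL) loss: squared Frobenius distance
    between the covariance matrices of source and target features.
    source, target: (batch, d) feature matrices."""
    d = source.size(1)

    def covariance(x):
        x = x - x.mean(dim=0, keepdim=True)   # center each feature dimension
        return (x.t() @ x) / (x.size(0) - 1)  # (d, d) sample covariance

    cs, ct = covariance(source), covariance(target)
    return ((cs - ct) ** 2).sum() / (4 * d * d)
```

Minimizing this term encourages the feature extractor to produce source and target representations with matching correlations, which is the alignment signal the abstract refers to.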

Cited by 110 publications (38 citation statements); references 31 publications (100 reference statements).

Citation statements, ordered by relevance:
“…Kernel methods [20,47,48,21,49,50,51,52,53,22]; Explicit feature alignment [54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71]; Domain adversarial learning [72,73,59,74,75,76,77,41,78,79,23,80,81,82]; Invariant risk minimization [83]; Feature disentanglement…”
Section: Domain-invariant Representation Learning (mentioning, confidence: 99%)
“…Another line of work optimizes for features that confuse a domain discriminator model (Albuquerque et al. 2019; Shao et al. 2019; Rahman et al. 2020; Deng et al. 2020), and includes DANN (Ganin et al. 2016) and its class-conditional extension C-DANN (Li et al. 2018c). Other works additionally involve the classifier in the representation alignment, either by optimizing for an embedding space such that the optimal linear classifier on top of it is the same across different domains (IRM) (Arjovsky et al. 2019), or by passing a domain-specific mean embedding to the classifier as a second argument (MTL) (Blanchard et al. 2017)…”
Section: Domain Generalization (mentioning, confidence: 99%)
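To make the "confuse a domain discriminator" idea concrete: DANN-style training inserts a gradient reversal layer between the feature extractor and a domain classifier, so one backward pass trains the discriminator normally while pushing the extractor to fool it. A minimal PyTorch sketch follows; the layer sizes and discriminator architecture are illustrative assumptions, not taken from any of the cited papers.

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; flips (and scales) the gradient on the
    backward pass, so the feature extractor is trained to fool the domain
    discriminator while the discriminator itself is trained normally."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Illustrative discriminator: predicts source (0) vs. target (1) from features.
discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
```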
“…SUDA aims to learn a model that performs well on the target domain, given a labeled source domain and an unlabeled target domain. This is usually achieved by discrepancy-based methods [6,19,20], adversarial learning [8,21,22,23,24], and self-training methods [25]. Tzeng et al. [6] first employed Maximum Mean Discrepancy (MMD) to measure the distance between domain distributions…”
Section: Related Work (mentioning, confidence: 99%)
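For the discrepancy-based methods mentioned in this statement, MMD compares the means of kernel embeddings of the two feature distributions. Here is a minimal sketch with a single Gaussian kernel; real implementations typically use a multi-kernel variant with tuned bandwidths, and the `sigma` hyperparameter here is an assumption for illustration.

```python
import torch

def mmd_loss(source, target, sigma=1.0):
    """Biased estimate of squared MMD between two feature batches under a
    Gaussian (RBF) kernel. source: (n, d), target: (m, d)."""
    def rbf(a, b):
        dist2 = torch.cdist(a, b) ** 2              # pairwise squared distances
        return torch.exp(-dist2 / (2 * sigma ** 2))

    return (rbf(source, source).mean()
            + rbf(target, target).mean()
            - 2 * rbf(source, target).mean())
```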
“…Zuo et al. [21] applied different strategies to easy and hard examples to improve domain adaptation performance. Rahman et al. [22] combined correlation alignment with adversarial learning to tackle the domain adaptation and domain generalization problems. Liang et al. [25] proposed to leverage the uncertainty of pseudo-labels to achieve an optimal feature transformation…”
Section: Related Work (mentioning, confidence: 99%)
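Putting the pieces together: the combination credited to Rahman et al. [22] amounts to jointly minimizing a classification loss, an adversarial domain-confusion loss, and a correlation-alignment loss. The sketch below reuses the `coral_loss`, `grad_reverse`, and `discriminator` sketches above; the backbone architecture, trade-off weights `lam_adv` and `lam_coral`, and optimizer settings are all illustrative assumptions, not the authors' actual configuration.

```python
import torch
from torch import nn

# Illustrative backbone and classifier (e.g. 32x32 RGB inputs, 10 classes).
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
classifier = nn.Linear(256, 10)

params = (list(feature_extractor.parameters()) + list(classifier.parameters())
          + list(discriminator.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-3)

def train_step(x_s, y_s, x_t, lam_adv=0.1, lam_coral=1.0):
    """One joint update on a labeled source batch (x_s, y_s) and an
    unlabeled target batch x_t."""
    f_s, f_t = feature_extractor(x_s), feature_extractor(x_t)

    # 1) Supervised classification on the labeled source batch.
    loss_cls = nn.functional.cross_entropy(classifier(f_s), y_s)

    # 2) Adversarial term: the reversed gradient pushes the extractor to
    #    produce features the discriminator cannot tell apart.
    dom_logits = discriminator(grad_reverse(torch.cat([f_s, f_t]), lam=1.0))
    dom_labels = torch.cat([torch.zeros(len(f_s)), torch.ones(len(f_t))]).long()
    loss_adv = nn.functional.cross_entropy(dom_logits, dom_labels)

    # 3) Correlation alignment between source and target feature statistics.
    loss = loss_cls + lam_adv * loss_adv + lam_coral * coral_loss(f_s, f_t)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```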