2017
DOI: 10.48550/arxiv.1707.01217
Preprint

Wasserstein Distance Guided Representation Learning for Domain Adaptation

Abstract: Domain adaptation aims at generalizing a high-performance learner on a target domain via utilizing the knowledge distilled from a source domain which has a different but related data distribution. One solution to domain adaptation is to learn domain invariant feature representations while the learned representations should also be discriminative in prediction. To learn such representations, domain adaptation frameworks usually include a domain invariant representation learning approach to measure and reduce th…
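As a rough illustration of the objective the abstract describes, below is a minimal PyTorch sketch of Wasserstein-guided representation learning: a domain critic estimates the empirical Wasserstein distance between source and target features, and the feature extractor minimizes the source classification loss plus that estimate. All module names, dimensions, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of Wasserstein-guided domain-invariant representation
# learning. Architectures and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
classifier = nn.Linear(256, 10)
critic = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

ce = nn.CrossEntropyLoss()
opt_fc = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(classifier.parameters()), lr=1e-4)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-4)

def wasserstein_estimate(h_s, h_t):
    # Empirical Wasserstein-1 estimate: difference of mean critic scores
    # on source vs. target features.
    return critic(h_s).mean() - critic(h_t).mean()

def train_step(x_s, y_s, x_t, lam=1.0, critic_steps=5):
    # 1) Train the critic to tighten the Wasserstein estimate
    #    (features detached so only the critic updates).
    for _ in range(critic_steps):
        h_s = feature_extractor(x_s).detach()
        h_t = feature_extractor(x_t).detach()
        loss_c = -wasserstein_estimate(h_s, h_t)  # maximize the estimate
        opt_critic.zero_grad(); loss_c.backward(); opt_critic.step()
        # (A paper-style variant adds a gradient penalty here to keep the
        #  critic approximately 1-Lipschitz; omitted for brevity.)
    # 2) Train extractor + classifier: stay discriminative on source labels
    #    while shrinking the estimated distance between the two domains.
    h_s = feature_extractor(x_s)
    h_t = feature_extractor(x_t)
    loss = ce(classifier(h_s), y_s) + lam * wasserstein_estimate(h_s, h_t)
    opt_fc.zero_grad(); loss.backward(); opt_fc.step()
    return loss.item()
```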

Cited by 123 publications (68 citation statements)
References 2 publications (3 reference statements)
“…In practice, transfer distances are often given by f-divergences [24], such as KL-divergence or the Hellinger distance, Wasserstein distances [25], and maximum mean discrepancy [18], [26], [27]. Others use generative adversarial networks, a deep learning distribution modeling technique, to estimate divergence [28], [29].…”
Section: B. Behavioral Considerations
confidence: 99%
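As a concrete instance of one discrepancy measure named in the statement above, here is a minimal sketch of the (biased) empirical maximum mean discrepancy with an RBF kernel; the function name and bandwidth are assumptions for illustration, not taken from the cited works.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Biased empirical MMD^2 between samples x (n, d) and y (m, d)
    with a Gaussian (RBF) kernel; sigma is an assumed bandwidth."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances, then the Gaussian kernel.
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()
```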
“…Transfer distance is usually referred to informally, e.g., to describe near or far transfer. It is implicit in the use of Wasserstein distance [37], maximum mean discrepancy [38], [39], generative adversarial networks [40], [41], and others, to calculate distributional-divergence-based components of loss functions in transfer learning algorithms. We consider transfer distance explicitly, in a way that may not necessarily be useful in calculating loss functions, but is interpretable to system designers and operators.…”
Section: Methods
confidence: 99%
“…of source and target features. After the Generative Adversarial Network [13] was proposed, more works [57,6,15,58,45,59] leverage a domain discriminator to encourage domain confusion through an adversarial objective. Recently, image translation methods [60,14] have been adopted to further improve domain adaptation by performing domain alignment at the pixel level [15,11,16,61,62,17,63].…”
Section: Appendix A: Additional Dataset Details
confidence: 99%
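The adversarial domain-confusion idea in the statement above can be sketched as follows; the discriminator architecture and the inverted-label trick are illustrative assumptions (a gradient reversal layer is a common alternative), and the feature dimension matches the earlier sketch.

```python
# Hypothetical sketch of adversarial domain confusion: a discriminator
# learns to tell source from target features, while the feature extractor
# is trained against inverted domain labels to fool it.
import torch
import torch.nn as nn

domain_disc = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def adversarial_losses(h_s, h_t):
    ones = torch.ones(h_s.size(0), 1)    # "source" domain label
    zeros = torch.zeros(h_t.size(0), 1)  # "target" domain label
    # Discriminator loss: classify which domain each feature came from
    # (features detached so only the discriminator updates on this loss).
    d_loss = bce(domain_disc(h_s.detach()), ones) + \
             bce(domain_disc(h_t.detach()), zeros)
    # Confusion loss for the extractor: make target features look like
    # source by training against inverted domain labels.
    g_loss = bce(domain_disc(h_t), ones)
    return d_loss, g_loss
```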