2019
DOI: 10.48550/arxiv.1901.00282
Preprint
On Minimum Discrepancy Estimation for Deep Domain Adaptation

Cited by 1 publication (1 citation statement). References 0 publications.
“…The method adds different distance loss functions to the artificial neural network. The most widely used metric schemas include Maximum Mean Discrepancy (MMD) [8][9][10], KL (Kullback-Leibler) divergence [11], JS (Jensen-Shannon) divergence [12], Wasserstein distance [13][14][15], CORAL (CORrelation ALignment) [16,17], etc.…”
Section: Introduction
Confidence: 99%
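The citing statement above lists Maximum Mean Discrepancy (MMD) as one of the distance losses added to a neural network for domain adaptation. As a minimal sketch of that idea, the following NumPy code computes a biased estimate of squared MMD between source and target samples with a Gaussian kernel; the bandwidth `sigma=1.0` and the toy Gaussian data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of x and y.
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared MMD: ||mean embedding of x - mean embedding of y||^2.
    k_xx = gaussian_kernel(x, x, sigma).mean()
    k_yy = gaussian_kernel(y, y, sigma).mean()
    k_xy = gaussian_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy

rng = np.random.default_rng(0)
# Matched distributions: MMD^2 should be near zero.
same = mmd2(rng.normal(0.0, 1.0, (200, 2)), rng.normal(0.0, 1.0, (200, 2)))
# Shifted target distribution (simulated domain shift): MMD^2 grows.
shifted = mmd2(rng.normal(0.0, 1.0, (200, 2)), rng.normal(3.0, 1.0, (200, 2)))
```

In deep domain adaptation, this quantity is typically computed on feature-layer activations and added to the task loss, so minimizing it pulls the source and target feature distributions together.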