2020
DOI: 10.1007/s00138-020-01093-2
Multi-source domain adaptation for image classification

Cited by 21 publications (5 citation statements)
References 23 publications
“…The scope of transfer learning is very wide, including homogeneous and heterogeneous transfer learning [19], visual domain adaptation [15, 36, 40] and cross-dataset recognition [50], which is applicable in various areas such as object recognition [8], text classification [16], speech recognition [12] and face recognition [49].…”
Section: Related Work (confidence: 99%)
“…Kernel-based methods, like maximum mean discrepancy [23], try to align feature distributions of the involved domains by using a cost function on a shared feature extractor that should minimize the distance between distributions. This is applied for example in [24,25,26,27,28,29]. Approaches using adversarial methods [30,31] are mostly based on a regressor or a full domain classifier that is additionally attached to the shared feature extractor.…”
Section: Related Work (confidence: 99%)
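The maximum mean discrepancy mentioned in the statement above measures the distance between two feature distributions in a kernel space; minimizing it over a shared feature extractor aligns source and target. A minimal sketch with a Gaussian (RBF) kernel follows — the function names and the bandwidth `sigma` are illustrative choices, not taken from the cited papers:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel matrix between the rows of x and the rows of y.
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2.0 * x @ y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(source, target, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy:
    # mean kernel within source + mean within target - 2 * mean across.
    k_ss = gaussian_kernel(source, source, sigma)
    k_tt = gaussian_kernel(target, target, sigma)
    k_st = gaussian_kernel(source, target, sigma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()
```

For identical distributions the estimate is near zero, and it grows as the two feature distributions drift apart, which is why it can serve as a training loss on a shared extractor.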
“…Similar to SDA, a straightforward approach for multi-source domain adaptation (MDA) to deal with multi-source data is also to merge all sources into one domain [39], which leads to insufficient variance elimination in MSST [36]. In order to fully exploit the data distributions of multiple subjects, some MDA methods began exploring feature representation approaches and combinations of pre-learned classifiers [39][40][41][42][43]. The former approaches try to align the latent spaces of different domains by optimizing a discrepancy loss, such as Rényi divergence [48] or L2 distance [49], or align the features through adversarial objectives, such as a GAN loss [57] or the Wasserstein distance [58], [69].…”
Section: Introduction (confidence: 99%)
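The combination of pre-learned classifiers described in the statement above can be sketched as a weighted average of per-source predictions, where each source's weight reflects its relevance to the target (e.g. the inverse of a domain discrepancy). The function name and the weighting scheme here are hypothetical illustrations, not the method of any cited paper:

```python
import numpy as np

def combine_source_predictions(probs_per_source, weights):
    # probs_per_source: list of (n_samples, n_classes) probability arrays,
    #                   one per pre-learned source classifier.
    # weights: per-source relevance scores (e.g. inverse domain discrepancy);
    #          normalized here so the result stays a probability distribution.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stacked = np.stack(probs_per_source)     # (n_sources, n_samples, n_classes)
    return np.tensordot(w, stacked, axes=1)  # weighted average over sources
```

With equal weights this reduces to a plain ensemble average; skewing the weights toward sources that are closer to the target domain is the simplest way to exploit multiple subjects' data distributions without merging all sources into one domain.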