2017 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2017.150

Balanced Distribution Adaptation for Transfer Learning

Abstract: Transfer learning has achieved promising results by leveraging knowledge from the source domain to annotate the target domain, which has few or no labels. Existing methods often seek to minimize the distribution divergence between domains, such as the marginal distribution, the conditional distribution, or both. However, these two distances are often treated equally in existing algorithms, which results in poor performance in real applications. Moreover, existing methods usually assume that the dataset is …
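
To make the abstract's weighting idea concrete, below is a minimal sketch of the balanced distance that BDA builds on, assuming a linear-kernel MMD estimate and classifier-produced pseudo-labels on the target. The names (Xs, ys, Xt, yt_pseudo, mu) are illustrative, not from the paper's code, and the full method additionally learns a projection that minimizes this quantity.

import numpy as np

def linear_mmd(A, B):
    # Squared MMD with a linear kernel: ||mean(A) - mean(B)||^2.
    return float(np.sum((A.mean(axis=0) - B.mean(axis=0)) ** 2))

def balanced_distance(Xs, ys, Xt, yt_pseudo, mu=0.5):
    # (1 - mu) * marginal MMD + mu * average per-class (conditional) MMD.
    # mu = 0 recovers purely marginal adaptation, mu = 1 purely conditional;
    # target labels are pseudo-labels from a classifier trained on the source.
    marginal = linear_mmd(Xs, Xt)
    classes = np.intersect1d(np.unique(ys), np.unique(yt_pseudo))
    conditional = np.mean(
        [linear_mmd(Xs[ys == c], Xt[yt_pseudo == c]) for c in classes]
    ) if len(classes) else 0.0
    return (1.0 - mu) * marginal + mu * conditional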

Cited by 421 publications (261 citation statements: 2 supporting, 259 mentioning, 0 contrasting) · References 20 publications · Citing publications span 2018–2023.
“…4. It can be seen that the classification accuracy varies with different values of ω, which indicates the necessity of considering the different effects of the two distributions; this can be verified not only in BDA [8] and MEDA [7], but also in adversarial transfer learning. Moreover, we find that the optimal value of ω varies across tasks, and even for the same task ω may have several optimal values.…”
Section: E. Analysis of the Importance of the Dynamic Adversarial Factor (mentioning)
confidence: 99%
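
Since the statement above reports that the optimal factor is task-dependent, a common practical recourse is a simple grid search. A minimal sketch, where train_and_eval is a hypothetical helper that runs adaptation with a given factor and returns target accuracy (e.g. on a held-out or pseudo-labeled split):

import numpy as np

def search_balance_factor(train_and_eval, grid=np.linspace(0.0, 1.0, 11)):
    # Evaluate each candidate factor and keep the best-scoring one.
    scores = {float(w): train_and_eval(float(w)) for w in grid}
    best = max(scores, key=scores.get)
    return best, scores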
“…1), the local distribution should be given more attention. Two more recent works, Balanced Distribution Adaptation (BDA) [8] and Manifold Embedded Distribution Alignment (MEDA) [7], proposed to adaptively align these two distributions, but they are based on kernel methods with high computational cost. In addition, MEDA is incapable of handling large-scale data.…”
Section: Introduction (mentioning)
confidence: 99%
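
The computational-cost remark can be made concrete: MMD-based kernel methods of this family build an (n_s + n_t) × (n_s + n_t) Gram matrix and typically solve an eigenproblem over it. A small sketch of the O(n²)-memory step, where gamma is an assumed RBF bandwidth:

import numpy as np

def rbf_gram(X, gamma=1.0):
    # Pairwise squared distances, then the RBF kernel; the resulting (n, n)
    # matrix alone takes 8 * n^2 bytes in float64, and the eigendecomposition
    # run on it is roughly O(n^3) time, which is why large n is problematic.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))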
“…Others extended JDA by adding regularization [39], sparse representation [67], structural consistency [32], domain-invariant clustering [55], and label propagation [74]. Balanced Distribution Adaptation (BDA) [62] was the first work to propose manually weighting the two distributions. The main differences between DDA (MDDA) and these methods are: 1) these works treat the two distributions equally.…”
Section: Distribution Alignment (mentioning)
confidence: 99%
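
For reference, the manual weighting this statement credits to BDA can be written, up to the projection and regularization the method also learns, as a convex combination of marginal and class-conditional distances with a balance factor μ:

D(\mathcal{D}_s, \mathcal{D}_t) \approx (1 - \mu)\, D\big(P(\mathbf{x}_s), P(\mathbf{x}_t)\big) + \mu \sum_{c=1}^{C} D\big(P(\mathbf{x}_s \mid y_s = c),\, P(\mathbf{x}_t \mid y_t = c)\big), \quad \mu \in [0, 1]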
“…To cope with the difference in distributions between domains, existing works can be summarized into two main categories: (a) instance reweighting [16, 68], which reuses samples from the source domain according to some weighting technique; and (b) feature matching, which either performs subspace learning by exploiting the subspace's geometrical structure [20, 25, 52, 64] or aligns distributions to reduce the marginal or conditional divergence between domains [40, 62, 74]. Recently, the success of deep learning has dramatically improved transfer learning performance, either via deep representation learning [8, 31, 41, 66, 70, 78] or adversarial learning [22, 23, 38, 51, 75].…”
Section: Introduction (mentioning)
confidence: 99%
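
To illustrate category (a) from the statement above, here is a sketch of instance reweighting via the standard density-ratio trick with a domain classifier; this is an illustrative choice, not the specific estimator of any one cited work.

import numpy as np
from sklearn.linear_model import LogisticRegression

def source_instance_weights(Xs, Xt):
    # Train a domain classifier (0 = source, 1 = target) and convert its
    # probabilities into density-ratio weights w(x) = p_target(x) / p_source(x),
    # so target-like source samples count more during training.
    X = np.vstack([Xs, Xt])
    d = np.r_[np.zeros(len(Xs)), np.ones(len(Xt))]
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_t = clf.predict_proba(Xs)[:, 1]
    w = p_t / np.clip(1.0 - p_t, 1e-6, None)
    return w * (len(w) / w.sum())  # normalize to mean weight 1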