Proceedings of the 2013 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611972832.60
Discriminative Transfer Learning on Manifold

Abstract: Collective matrix factorization has achieved remarkable success in document classification in the transfer learning literature. However, the learned latent factors still suffer from the divergence between different domains and are thus usually not discriminative enough for an appropriate assignment of category labels. Based on these observations, we impose a discriminative regression model over the latent factors to enhance the capability of label prediction. Moreover, we propose to minimize the Maximum Mean Discrepancy …
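The Maximum Mean Discrepancy (MMD) mentioned in the abstract is a kernel-based distance between two sample distributions; minimizing it pulls the source and target latent factors together. The sketch below is not the paper's algorithm, only a minimal illustration of the biased empirical MMD estimate under an assumed RBF kernel; all function names and parameters are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    # between the rows of X and the rows of Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(Xs, Xt, gamma=1.0):
    # Biased empirical estimate of the squared MMD between a source
    # sample Xs and a target sample Xt; it is the squared distance
    # between the kernel mean embeddings of the two samples.
    Kss = rbf_kernel(Xs, Xs, gamma)
    Ktt = rbf_kernel(Xt, Xt, gamma)
    Kst = rbf_kernel(Xs, Xt, gamma)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()

rng = np.random.default_rng(0)
# Two samples from the same distribution give a small MMD;
# a mean-shifted sample gives a noticeably larger one.
same = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(0, 1, (200, 5)))
shifted = mmd2(rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5)))
print(same < shifted)
```

In a transfer-learning objective this quantity would typically appear as a regularization term over the learned latent factors, so that a representation scoring well on the source labels cannot drift arbitrarily far from the target distribution.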

Cited by 6 publications (5 citation statements). References 20 publications.
“…Next, from the form of the constraint gradients ∇_t g_j in (28), and the constraint that the columns of T must have unit norm at a local solution, it is easy to observe that the set of the equality constraint gradients {∇_t g_j, j = 1, …”
Section: Appendix B, Convergence Analysis of the SQP Method
confidence: 99%
“…Another domain adaptation solution consists of learning a transformation or a projection that aligns the source and the target data [2], [3], [27], [28], [29], [30], [31]. In fact, the idea of aligning the source and the target domains by mapping them to an intermediate space through a transformation has been at the core of many domain adaptation algorithms, some of which can also be applied to problems where the source and the target samples reside in different ambient spaces [32].…”
Section: Related Work
confidence: 99%
“…A quite prevalent approach in the literature is to align the source and the target domains using a transformation or a projection [3,4,5,22,23,24,25,26,27,28]. In the assessment of the accuracy of aligning the two distributions, some widely used metrics are the maximum mean discrepancy [29,30,31,6,32,7]; Wasserstein distance [33]; and distribution divergence measures [34].…”
Section: Related Work
confidence: 99%
“…A classifier is trained on the training data, which is then used to estimate the unknown class labels of the test data. On the other hand, domain adaptation methods consider a setting where the distribution of the test data is different from that of the training data [1][2][3][4]. Given many labeled samples in a source domain and much fewer labeled samples in a target domain, domain adaptation algorithms exploit the information available in both domains in order to improve the performance of classification in the target domain.…”
Section: Introduction
confidence: 99%