Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/454
Learning Discriminative Correlation Subspace for Heterogeneous Domain Adaptation

Abstract: Domain adaptation aims to reduce the effort of collecting and annotating target data by leveraging knowledge from a different source domain. The domain adaptation problem becomes extremely challenging when the feature spaces of the source and target domains are different, which is also known as the heterogeneous domain adaptation (HDA) problem. In this paper, we propose a novel HDA method to find the optimal discriminative correlation subspace for the source and target data. The discriminative correlation …
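The machinery the abstract refers to builds on canonical correlation analysis (CCA) between source and target representations. Below is a minimal Python sketch of that general idea: project both domains into a shared correlation subspace, then train a classifier on the projected source data. The random data, the assumption that source/target samples come paired, and the plain LinearSVC classifier are all illustrative stand-ins, not the paper's actual formulation, which additionally optimizes the subspace for class discriminability.

```python
# Sketch: CCA-based shared subspace for heterogeneous domains (illustrative).
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_pairs, d_src, d_tgt, k = 200, 50, 30, 10    # hypothetical sizes

Xs = rng.normal(size=(n_pairs, d_src))        # paired source samples
Xt = rng.normal(size=(n_pairs, d_tgt))        # paired target samples
ys = rng.integers(0, 2, size=n_pairs)         # source labels

cca = CCA(n_components=k)
cca.fit(Xs, Xt)                               # learn maximally correlated projections
Zs, Zt = cca.transform(Xs, Xt)                # both domains now share one subspace

clf = LinearSVC().fit(Zs, ys)                 # classifier trained on projected source
pred_t = clf.predict(Zt)                      # applied to projected target data
```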

Cited by 45 publications (18 citation statements). References 16 publications.
“…Cross-Domain Landmark Selection (CDLS) [30] identifies the representative landmarks by matching the cross-domain data distributions and reducing the domain discrepancy. Yan et al [36] introduced Discriminative Correlation Analysis (DCA), which jointly optimizes the canonical correlation subspace and the discriminative ability of the classifier. Progressive Alignment (PA) [18] iteratively optimizes the latent feature space by dictionary-sharing sparse coding and reduces the cross-domain distribution discrepancy.…”
Section: Related Work
confidence: 99%
“…Additionally, the number of metrics used differs across datasets. For example, AEEEM adopts 69 metrics while NASA [13] has 38. The features included in different datasets are not identical.…”
Section: A Motivation
confidence: 99%
“…Yan et al [37] proposed a semi-supervised algorithm for heterogeneous domain adaptation by exploiting the theory of optimal transport. The method can also be used to find the optimal discriminative correlation subspace for the source and target data [38].…”
Section: B Heterogeneous Transfer Learning On Software Defect Prediction
confidence: 99%
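For context, the optimal-transport step such a method builds on can be sketched with the POT library as below. The shared 10-dimensional embedding and the squared-Euclidean cost matrix are placeholder assumptions; the cited semi-supervised algorithm defines its own cross-domain cost, which this sketch does not reproduce.

```python
# Sketch: entropic optimal transport between source and target samples (illustrative).
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
ns, nt = 100, 80
Zs = rng.normal(size=(ns, 10))            # source samples in an assumed shared space
Zt = rng.normal(size=(nt, 10))            # target samples in the same space

a = np.full(ns, 1.0 / ns)                 # uniform source weights
b = np.full(nt, 1.0 / nt)                 # uniform target weights
M = ot.dist(Zs, Zt)                       # squared-Euclidean cost matrix

G = ot.sinkhorn(a, b, M, reg=1e-1)        # entropic OT coupling between domains
Zs_mapped = (G / a[:, None]) @ Zt         # barycentric mapping of source onto target
```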
“…Most feature-representation-transfer learning methods can be used in homogeneous transfer learning [21], [22]. For heterogeneous transfer learning, Yan et al [23] proposed to find the discriminative feature subspace inherited from the canonical correlation space between the source and target data. To apply this method to online transfer learning, Yan [24] proposed to mine offline knowledge and online knowledge in different domains using a hedge ensemble.…”
Section: B Feature-representation-transfer Learning
confidence: 99%
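A Hedge-style ensemble of the kind referenced above maintains multiplicative weights over experts and penalizes those that err on each incoming sample. The sketch below combines two hypothetical experts (say, an offline source-domain model and an online target-domain model) with that update rule; it is a simplified illustration, not the exact algorithm of [24].

```python
# Sketch: Hedge (multiplicative weights) over expert predictions (illustrative).
import numpy as np

def hedge_ensemble(expert_preds, labels, eta=0.5):
    """expert_preds: (T, K) per-round predictions from K experts in {0, 1};
    labels: (T,) true labels. Returns the ensemble's per-round predictions."""
    T, K = expert_preds.shape
    w = np.ones(K) / K                    # start with uniform expert weights
    out = np.empty(T, dtype=int)
    for t in range(T):
        out[t] = int(w @ expert_preds[t] >= 0.5)          # weighted-majority vote
        losses = (expert_preds[t] != labels[t]).astype(float)
        w *= np.exp(-eta * losses)        # multiplicatively penalize erring experts
        w /= w.sum()                      # renormalize weights
    return out
```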