2012 IEEE 12th International Conference on Data Mining
DOI: 10.1109/icdm.2012.85

Adapting Component Analysis

Cited by 15 publications (8 citation statements)
References 24 publications (34 reference statements)

“…The Hilbert-Schmidt independence (HSI), which is formulated with the Hilbert-Schmidt norm in an RKHS, is used to measure the dependency between two random variables [24]. To preserve the important features of the data, Dorri et al. regarded the original and transformed data features as two random variables, and maximized their dependency during probability distribution alignment [39,40]. Yan et al. minimized the dependency between the projected and domain features (e.g.…”
Section: B. Hilbert Schmidt Independence
confidence: 99%
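
As background for the quote above: the (biased) empirical HSIC between two samples is usually computed as trace(K H L H)/(n−1)², with centered kernel matrices. A minimal NumPy sketch, assuming Gaussian kernels and an arbitrary bandwidth (both my choices, not taken from the cited papers):

import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian kernel matrix over the rows of X
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    # Biased empirical HSIC: trace(K H L H) / (n - 1)^2,
    # where H = I - (1/n) 11^T centers the kernel matrices.
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

Maximizing hsic(X_original, X_transformed) over the transformation is the dependency-preservation idea attributed to Dorri et al. in the quote.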
“…Different from [24,39–42], we utilize the HSI with its uncentered variant for consistency with the JMMD, and combine the JMMD and HSI in a neat form with a novel MMD matrix, i.e., $(M_j - \delta M_h)$. Thus, the first discovery from the unified JMMD to improve its feature discriminability is finalized as,…”
Section: A. A Novel MMD Matrix
confidence: 99%
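
For context, MMD-matrix formulations express the squared MMD as tr(K M) over the joint kernel matrix K of source and target samples. Below is a sketch of the standard construction, assuming n_s source and n_t target points stacked source-first; the combination $(M_j - \delta M_h)$ from the quote is indicated only schematically, since the exact M_j and M_h of the citing paper are not given here:

import numpy as np

def mmd_matrix(ns, nt):
    # Standard MMD matrix M with MMD^2 = tr(K M):
    # M[i, j] = 1/ns^2 if i, j both source, 1/nt^2 if both
    # target, and -1/(ns * nt) otherwise.
    e = np.concatenate([np.ones(ns) / ns, -np.ones(nt) / nt])
    return np.outer(e, e)

# Schematic combination from the quote (M_j, M_h, and the
# trade-off delta are placeholders for the paper's matrices):
# M = M_j - delta * M_h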
“…[8] extended MMDE with Transfer Component Analysis (TCA), which learns a kernel in an RKHS. [32] adopted a similar idea. [33] learns a target predictive function with low variance.…”
Section: B. Transfer Learning
confidence: 99%
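
TCA's kernel-learning step reduces to an eigenproblem in the original paper: minimize tr(Wᵀ K L K W) + μ tr(Wᵀ W) subject to Wᵀ K H K W = I, solved by the leading eigenvectors of (K L K + μI)⁻¹ K H K, where L is the MMD matrix and H the centering matrix. A sketch under those definitions; the parameter defaults and the dense eigensolver are my simplifications:

import numpy as np

def tca_embedding(K, ns, nt, mu=1.0, dim=10):
    # Transfer components W are the leading eigenvectors of
    # (K L K + mu I)^{-1} K H K; the shared embedding is K W.
    n = ns + nt
    e = np.concatenate([np.ones(ns) / ns, -np.ones(nt) / nt])
    L = np.outer(e, e)                      # MMD matrix
    H = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(A)
    top = np.argsort(-vals.real)[:dim]
    W = vecs[:, top].real
    return K @ W  # (ns + nt) x dim representation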
“…Distribution alignment, feature selection, and subspace learning are three frequently used approaches in traditional domain adaptation. Many methods address different kinds of distribution alignment, from marginal distribution alignment [32,6,27,18] to conditional distribution alignment [12,47], and finally joint alignment of the two distributions [46,48]. Feature selection methods aim to find the features shared between the source and target domains [1,27].…”
Section: Related Work
confidence: 99%
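
To make the marginal-versus-conditional distinction in the quote concrete: conditional alignment methods in the JDA style build one MMD matrix per class, using source labels and target pseudo-labels. A small sketch; the pseudo-labels (e.g. from a classifier trained on the source) and the source-first sample ordering are assumptions of the illustration:

import numpy as np

def class_mmd_matrix(ys, yt_pseudo, c):
    # Conditional (class-wise) MMD matrix for class c, built
    # from source labels ys and target pseudo-labels yt_pseudo.
    s = (ys == c).astype(float)
    t = (yt_pseudo == c).astype(float)
    e = np.concatenate([s / max(s.sum(), 1.0),
                        -t / max(t.sum(), 1.0)])
    return np.outer(e, e)

# Summing tr(K M_c) over classes aligns the class-conditional
# distributions; the marginal MMD matrix uses all samples at once.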