Domain adaption based on source dictionary regularized RKHS subspace learning
2021 | DOI: 10.1007/s10044-021-01002-x

Cited by 4 publications (5 citation statements) | References 32 publications
“…Here, we compare our MSEpRKHS-DA model with seven methods on four real datasets to evaluate its performance. These methods are TCA [12], SSTCA [12], IGLDA [14], TIT [16], CDSPP [20], SDRKHS-DA [19], and LPJT [18], and the kNN (k = 1) classifier is used.…”
Section: B. Compare MSEpRKHS-DA Model With Some State-of-the-Art Methods
Mentioning confidence: 99%
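The quoted protocol — fit a 1-nearest-neighbour classifier on adapted source features and score it on adapted target features — is a standard way to compare domain adaptation methods. Below is a minimal sketch of that protocol only; the matrices Xs_adapted/Xt_adapted and their labels are hypothetical placeholders, not features produced by any of the cited methods.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical placeholders for features after subspace adaptation;
# in the cited experiments these would come from TCA, SDRKHS-DA, etc.
rng = np.random.default_rng(0)
Xs_adapted = rng.normal(size=(100, 20))   # source samples in the shared subspace
ys = rng.integers(0, 4, size=100)         # source labels (4 classes)
Xt_adapted = rng.normal(size=(80, 20))    # target samples in the shared subspace
yt = rng.integers(0, 4, size=80)          # target labels, used only for scoring

# kNN with k = 1, as stated in the quoted evaluation protocol.
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(Xs_adapted, ys)
accuracy = clf.score(Xt_adapted, yt)
print(f"1-NN target accuracy: {accuracy:.3f}")
```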
“…What's more, as in TIT, LPJT also utilizes two different transformations for subspace learning, one for each domain, so LPJT can be applied in HDA too. The SDRKHS-DA method [19] likewise aligns the distributions with the MMD criterion and, at the same time, introduces dictionary learning into the RKHS subspace learning framework. That is, SDRKHS-DA uses the source domain data as a dictionary to code the target domain data while keeping the coding as sparse as possible, which draws source and target samples of the same class close to each other in the subspace and improves domain adaptation performance.…”
Section: Related Work
Mentioning confidence: 99%
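To make the source-as-dictionary idea concrete, here is a minimal Euclidean sketch: each target sample is sparsely coded over the source samples treated as dictionary atoms. This is not the authors' RKHS formulation — sparse_encode works in the input space, and the lasso weight alpha is an arbitrary placeholder — it only illustrates the mechanism the quote describes.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)
Xs = rng.normal(size=(50, 30))   # source samples: 50 dictionary atoms of dimension 30
Xt = rng.normal(size=(10, 30))   # target samples to be coded

# Sparse codes for each target sample over the source dictionary;
# the lasso penalty keeps the coding sparse, echoing the quoted description.
codes = sparse_encode(Xt, dictionary=Xs, algorithm="lasso_lars", alpha=0.5)

# Each target sample is approximated as a sparse combination of source samples.
Xt_reconstructed = codes @ Xs
print("nonzero coefficients per target sample:",
      (np.abs(codes) > 1e-12).sum(axis=1))
```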
“…, that is defined in (17), also grows as W grows and L increases from L_i to L_{i+1} = L_i + M. Matrix U is used to find vector h in (20) for SA and can be updated as follows (refer to Appendix A.1):…”
Section: Adding Data Vectors
Mentioning confidence: 99%
“…where v_m^T = u_m^T W_i Λ or, equivalently, v_m = Λ W_i^T u_m. Note that v_m is defined similarly to equation (31), but with elements m set to zero, and C_i W_i Λ is defined similarly to U_i in (17), with columns m equal to zero; thus Û equals U_i + u_m α_m v_m^T, with columns m set to zero. Equation (38) can be rewritten as…”
Section: Removing Data Vectors
Mentioning confidence: 99%
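Read literally, the quoted update is a rank-one correction followed by zeroing the affected entries. The following is only a schematic NumPy rendering under assumed shapes: m is taken as a single removed index, Λ as a diagonal matrix, and u_m and the scalar α_m are placeholders for quantities defined in the citing paper's derivation, not values from it.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 6, 4
U_i = rng.normal(size=(d, k))        # U_i, as in the citing paper's (17); placeholder
W_i = rng.normal(size=(d, k))        # current transformation; placeholder
Lam = np.diag(rng.uniform(1, 2, k))  # Λ, assumed diagonal
u_m = rng.normal(size=d)             # vector u_m from the derivation; placeholder
alpha_m = 0.7                        # scalar α_m; placeholder value
m = 2                                # index of the removed data vector (assumed single)

# v_m = Λ W_i^T u_m, with element m set to zero, as the quote specifies.
v_m = Lam @ W_i.T @ u_m
v_m[m] = 0.0

# Û = U_i + u_m α_m v_m^T, then column m set to zero.
U_hat = U_i + alpha_m * np.outer(u_m, v_m)
U_hat[:, m] = 0.0
```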