2016
DOI: 10.1007/s11063-016-9532-z
Cost Sensitive Semi-Supervised Canonical Correlation Analysis for Multi-view Dimensionality Reduction

Cited by 10 publications (3 citation statements)
References: 33 publications
“…And to our best knowledge, the performance of the CNN model was always relevant to the size of the training set. So in the future, some semi-supervised learning (SSL) (Wan et al 2016) or even unsupervised learning (Guo et al 2023) strategies will be attempted into the CAD method on the purpose of improving the behavior of our CAD method without increasing the number of labels.…”
Section: Discussion (mentioning)
confidence: 99%
“…Now turn to the solution of problem (15). Similarly to that of MCCA, it can also be transformed into a generalized eigenvalue decomposition problem
$\begin{pmatrix} 0 & C^{(12)} & \cdots & C^{(1m)} \\ C^{(21)} & 0^{(2)} & & \\ & & \ddots & \end{pmatrix}$ …”
Section: B. LRMCCA Solution (mentioning)
confidence: 99%
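The statement above refers to multi-view CCA being reduced to a generalized eigenvalue problem over the cross-covariance blocks C^(ij), with zero diagonal blocks on the left-hand side. Below is a minimal sketch of that construction, not the cited paper's LRMCCA algorithm: it assumes mean-centered views, a small hypothetical ridge term `reg` for numerical stability, and SciPy's `eigh` for the generalized symmetric eigenproblem.

```python
import numpy as np
from scipy.linalg import eigh

def mcca_sketch(views, d, reg=1e-6):
    """Multi-view CCA sketch: stack the cross-covariance blocks C^(ij) into a
    generalized symmetric eigenproblem A w = lambda B w and keep the top-d
    eigenvectors. `reg` is an assumed ridge term, not from the cited paper."""
    Xs = [X - X.mean(axis=0) for X in views]                # mean-normalize each view
    n = Xs[0].shape[0]
    C = [[Xi.T @ Xj / (n - 1) for Xj in Xs] for Xi in Xs]   # covariance blocks C^(ij)

    dims = [X.shape[1] for X in Xs]
    offs = np.concatenate(([0], np.cumsum(dims)))
    D = offs[-1]
    A = np.zeros((D, D))   # zero diagonal blocks, cross-covariances off the diagonal
    B = np.zeros((D, D))   # block-diagonal within-view covariances
    for i in range(len(Xs)):
        for j in range(len(Xs)):
            r, c = slice(offs[i], offs[i + 1]), slice(offs[j], offs[j + 1])
            if i == j:
                B[r, c] = C[i][j] + reg * np.eye(dims[i])
            else:
                A[r, c] = C[i][j]

    vals, vecs = eigh(A, B)                                 # generalized eigendecomposition
    W = vecs[:, np.argsort(vals)[::-1][:d]]
    # split the stacked eigenvectors into per-view projection matrices P^(i)
    return [W[offs[i]:offs[i + 1], :] for i in range(len(Xs))]


# toy usage: three random views sharing a common latent signal
rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 4))
views = [Z @ rng.normal(size=(4, D)) + 0.1 * rng.normal(size=(300, D)) for D in (10, 12, 8)]
P = mcca_sketch(views, d=4)
print([Pi.shape for Pi in P])   # [(10, 4), (12, 4), (8, 4)]
```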
“…We hope to find m suitable projection matrices {P^{(i)} ∈ R^{D_i × d}}_{i=1}^{m} (d < D_i) to reduce the dimensions of multi-view data into a lower common dimension by {P^{(i)T} X^{(i)}}_{i=1}^{m}. Among recent methods [16]–[20], the most representative is to apply the canonical correlation analysis (CCA) [16], [21], [22] to two views of data. The idea of CCA is to extract the canonical variables P^{(1)T} X^{(1)} and P^{(2)T} X^{(2)} from the mean-normalized two-view data X^{(1)} = [x^{(1)}_1, …”
Section: Introduction (mentioning)
confidence: 99%
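The excerpt above describes classical two-view CCA: extracting the canonical variables P^(1)T X^(1) and P^(2)T X^(2) from mean-normalized data. A minimal sketch follows, assuming synthetic toy data and scikit-learn's CCA estimator rather than the cited paper's cost-sensitive semi-supervised variant.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical two-view toy data: n samples, D1 and D2 features sharing a latent signal.
rng = np.random.default_rng(0)
n, D1, D2, d = 200, 20, 15, 5
latent = rng.normal(size=(n, d))
X1 = latent @ rng.normal(size=(d, D1)) + 0.1 * rng.normal(size=(n, D1))
X2 = latent @ rng.normal(size=(d, D2)) + 0.1 * rng.normal(size=(n, D2))

# CCA centers the data internally; the fitted weights play the role of P^(1) and P^(2),
# and the returned scores are the canonical variables P^(1)T X^(1) and P^(2)T X^(2).
cca = CCA(n_components=d)
Z1, Z2 = cca.fit_transform(X1, X2)

# Correlation between each pair of canonical variables (high on this toy data).
print([round(float(np.corrcoef(Z1[:, k], Z2[:, k])[0, 1]), 3) for k in range(d)])
```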