Neural Networks and Statistical Learning 2013
DOI: 10.1007/978-1-4471-5571-3_12

Principal Component Analysis

Cited by 8 publications (6 citation statements) | References 134 publications
“…We have selected PCA as the competing method to evaluate the performance of the proposed EPA approach. PCA is a classical linear transformation which transforms the original features into principal components (PCs), and hence achieves effective dimension reduction (Du and Swamy, 2014).…”
Section: Competing Methods and Discussion
confidence: 99%
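The statement above describes classical PCA: center the features, diagonalize their covariance, and keep the leading principal components for dimension reduction. A minimal sketch of that transformation, assuming NumPy (the function name `pca_transform` is hypothetical, not from the cited work):

```python
import numpy as np

def pca_transform(X, k):
    """Project data onto the top-k principal components (classical linear PCA)."""
    X_centered = X - X.mean(axis=0)
    # Covariance matrix of the centered features
    cov = np.cov(X_centered, rowvar=False)
    # eigh returns eigenvalues in ascending order; take the k largest
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return X_centered @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 samples, 5 original features
Z = pca_transform(X, 2)         # reduced to 2 principal components
print(Z.shape)                  # (100, 2)
```

The resulting component scores are mutually uncorrelated, which is what makes the reduced representation an effective summary of the original features.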
“…Locality Sensitive Hashing (LSH) and its variants [22,23,24] are representative unsupervised hashing methods which generate hash functions in a random manner. Based on the data distribution of images, Principal Component Hashing (PCH) [25] trains hash functions by principal component analysis [26] and utilizes the top-K principal components of the covariance matrix to construct its hashing projections. Iterative Quantization Hashing (ITQ) [12] learns the optimal rotation matrix for the data after principal component analysis by minimizing the quantization error.…”
Section: Hashing Methods With Single Hash Table
confidence: 99%
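The PCH idea sketched above, projecting onto the top-K principal components of the covariance matrix and binarizing each projection, can be illustrated as follows. This is a hedged toy sketch of the general PCA-then-binarize scheme, not the published PCH algorithm; the function name `pca_hash` is hypothetical:

```python
import numpy as np

def pca_hash(X, k):
    """Toy PCA-based hashing sketch: project data onto the top-k principal
    components of the covariance matrix, then binarize by sign to obtain
    k-bit codes. (Illustrative only; not the exact PCH method of [25].)"""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Top-k eigenvectors serve as the hashing projections
    W = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return (Xc @ W > 0).astype(np.uint8)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))   # 50 samples, 8 features
codes = pca_hash(X, 4)         # 4-bit binary codes
print(codes.shape)             # (50, 4)
```

ITQ refines exactly this kind of code by additionally rotating the PCA-projected data before thresholding, so that the sign quantization loses less information.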
“…Finally, the visualization method presented here is in fact a supervised learning algorithm, like supervised PCA (Koren and Carmel, 2004; Yu et al., 2006; Du et al., 2015), for instance. The difference is that our approach explicitly incorporates the loss function and the definition of similarity that we want to obtain at the end of the process.…”
Section: Related Work
confidence: 99%