2019
DOI: 10.48550/arxiv.1906.03148
Preprint

Unsupervised and Supervised Principal Component Analysis: Tutorial

Abstract: This is a detailed tutorial paper which explains the Principal Component Analysis (PCA), Supervised PCA (SPCA), kernel PCA, and kernel SPCA. We start with projection, PCA with eigendecomposition, PCA with one and multiple projection directions, properties of the projection matrix, reconstruction error minimization, and we connect to auto-encoder. Then, PCA with singular value decomposition, dual PCA, and kernel PCA are covered. SPCA using both scoring and Hilbert-Schmidt independence criterion are explained. K…
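As a concrete companion to the eigendecomposition view of PCA summarized in the abstract, here is a minimal NumPy sketch (the toy data and variable names are ours, not the paper's): it centers the data, eigendecomposes the sample covariance, keeps the leading projection directions, and reports the squared reconstruction error that PCA minimizes.

```python
import numpy as np

# Toy data: n samples of dimension d (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # rows are samples

# Center the data (PCA works on zero-mean data).
mu = X.mean(axis=0)
Xc = X - mu

# Eigendecomposition of the sample covariance (d x d).
S = Xc.T @ Xc / Xc.shape[0]
eigvals, eigvecs = np.linalg.eigh(S)   # eigh returns ascending eigenvalues

# Keep the p leading eigenvectors as the projection matrix U (d x p).
p = 2
U = eigvecs[:, ::-1][:, :p]

# Project and reconstruct; PCA minimizes this squared reconstruction error.
Z = Xc @ U                             # projected data (n x p)
X_hat = Z @ U.T + mu                   # reconstruction in the original space
err = np.mean(np.sum((X - X_hat) ** 2, axis=1))
print(err)
```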

Cited by 12 publications (37 citation statements)
References 30 publications

“…Relevant Component Analysis (RCA) (Shental et al., 2002) is a metric learning method. In this method, we first apply Principal Component Analysis (PCA) (Ghojogh & Crowley, 2019) on the data using the total scatter of the data. Let the projection matrix of PCA be denoted by U.…”
Section: Relevant Component Analysis (RCA)
Mentioning confidence: 99%
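For reference, the total scatter mentioned in this excerpt and the resulting PCA projection matrix can be written (in our notation, following the tutorial's eigendecomposition formulation) as

$$
S_t = \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^\top, \qquad S_t\, u_j = \lambda_j\, u_j, \qquad U = [\,u_1, \dots, u_p\,],
$$

where the columns of $U$ are the eigenvectors of $S_t$ associated with the $p$ largest eigenvalues $\lambda_1 \ge \dots \ge \lambda_p$.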
“…Inspired by Fisher discriminant analysis (Fisher, 1936; Ghojogh et al., 2019b), we maximize the inter-class variances of the projected data, $\sum_{r=1}^{v} \operatorname{tr}(U^\top S_b^{(r)} U)$, to discriminate the classes after projection. Also, inspired by principal component analysis (Ghojogh & Crowley, 2019), we maximize the total scatter of the projected data, $\sum_{r=1}^{v} \operatorname{tr}(U^\top S_t^{(r)} U)$, for expressiveness. Moreover, we maximize the dependence of the projected data in all views because various views of a point should be related.…”
Section: Regularization By Locally Linear
Mentioning confidence: 99%
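The trace maximizations in this excerpt are instances of a standard result, restated here in our notation rather than as the citing paper's exact derivation: for a symmetric positive semi-definite matrix $S$,

$$
\max_{U^\top U = I}\ \operatorname{tr}(U^\top S U)
$$

is attained when the columns of $U$ are the eigenvectors of $S$ with the largest eigenvalues, and a sum of such traces over views, e.g. $\sum_{r=1}^{v} \operatorname{tr}(U^\top S_b^{(r)} U) = \operatorname{tr}\big(U^\top \big(\sum_{r=1}^{v} S_b^{(r)}\big) U\big)$, simply replaces $S$ by the sum of the per-view scatter matrices.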
“…Convergence analysis of Laplacian eigenmap methods can be found in several papers (Belkin & Niyogi, 2006; Singer, 2006). Inspired by eigenfaces (Turk & Pentland, 1991) and Fisherfaces (Belhumeur et al., 1997), which have been proposed based on principal component analysis (Ghojogh & Crowley, 2019) and Fisher discriminant analysis (Ghojogh et al., 2019b), respectively, Laplacianfaces (He et al., 2005) has been proposed for face recognition using Laplacian eigenmaps. Finally, a recent survey on Laplacian eigenmaps is (Li et al., 2019).…”
Section: Other Improvements Over Laplacian Eigenmap
Mentioning confidence: 99%
“…A series of supervised (single-task) learning methods were proposed which rely on PCA [7, 41, 53, 17]: the central idea is to project the available data onto a shared low-dimensional space, thus ignoring individual data variations. These algorithms are generically coined supervised principal component analysis (SPCA).…”
Section: Related Work
Mentioning confidence: 99%
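As a rough sketch of the HSIC-based supervised PCA covered in the tutorial (the function name, the one-hot targets, and the linear target kernel below are our illustrative assumptions, not taken from the cited works), the shared projection directions can be obtained from an eigendecomposition that couples the data with the targets:

```python
import numpy as np

def spca_hsic(X, Y, p):
    """HSIC-flavoured supervised PCA sketch.

    X: (d, n) data matrix with samples in columns.
    Y: (c, n) target matrix (e.g. one-hot labels), samples in columns.
    p: number of projection directions.
    """
    n = X.shape[1]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K_y = Y.T @ Y                         # linear kernel over the targets
    # Maximize tr(U^T X H K_y H X^T U) subject to U^T U = I:
    M = X @ H @ K_y @ H @ X.T
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, ::-1][:, :p]        # top-p eigenvectors as U

# Toy usage (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 100))
labels = rng.integers(0, 3, size=100)
Y = np.eye(3)[labels].T                   # one-hot targets, shape (3, 100)
U = spca_hsic(X, Y, p=2)
Z = U.T @ X                               # projected data, shape (2, 100)
```

A linear kernel over one-hot labels is only one possible choice for the target kernel; any kernel over the targets can be substituted for K_y in this sketch.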