2012
DOI: 10.5402/2012/847305

Neural Network Implementations for PCA and Its Extensions

Abstract: Many information processing problems can be transformed into some form of eigenvalue or singular value problems. Eigenvalue decomposition (EVD) and singular value decomposition (SVD) are usually used for solving these problems. In this paper, we give an introduction to various neural network implementations and algorithms for principal component analysis (PCA) and its various extensions. PCA is a statistical method that is directly related to EVD and SVD. Minor component analysis (MCA) is a variant of PCA, whi…
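As background for the abstract's statement that PCA is directly related to EVD and SVD, here is a minimal batch-PCA sketch (not taken from the paper; the NumPy routines and variable names are my own): the leading eigenvectors of the sample covariance give the principal directions, and the same subspace falls out of the SVD of the centered data matrix. A minor component, as in MCA, would correspond to the smallest eigenvalue instead.

```python
import numpy as np

def pca_evd(X, k):
    """Batch PCA via eigendecomposition of the sample covariance matrix."""
    Xc = X - X.mean(axis=0)                  # center the data
    C = Xc.T @ Xc / (len(Xc) - 1)            # sample covariance
    evals, evecs = np.linalg.eigh(C)         # eigenvalues in ascending order
    order = np.argsort(evals)[::-1][:k]      # indices of the k largest
    return evecs[:, order], evals[order]

def pca_svd(X, k):
    """The same principal directions from the SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T, s[:k] ** 2 / (len(Xc) - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.2])
    W_evd, _ = pca_evd(X, 2)
    W_svd, _ = pca_svd(X, 2)
    # Both spans agree up to sign: |W_evd^T W_svd| is close to the identity.
    print(np.round(np.abs(W_evd.T @ W_svd), 3))
```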

Cited by 31 publications (29 citation statements). References 141 publications (250 reference statements).

“…A more plausible approach is based on “competitive learning” as described in Hertz et al. (1991), which employs Oja’s rule (modified Hebbian learning with multiplicative normalization). This model was shown to perform PCA, constituting a powerful, biologically plausible alternative for feature extraction and dimensionality reduction (Oja, 1982); see also Qiu et al. (2012) for a detailed review of neural networks implementing PCA or non-linear extensions of PCA. In a recent study, competitive unsupervised learning applied to the lower layers of a feedforward neural network was shown to lead to good generalization performance on the MNIST dataset (Krotov & Hopfield, 2019).…”
Section: Discussion (mentioning)
confidence: 99%
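The Oja's rule mentioned in the statement above is Hebbian learning with a multiplicative normalization term that keeps the weight vector bounded; for zero-mean inputs the weight converges to the first principal eigenvector. The single-neuron sketch below is my own illustration of the rule (the learning rate, epoch count, and the NumPy check are assumptions, not code from any of the cited works):

```python
import numpy as np

def oja_first_pc(X, lr=0.005, epochs=20, seed=0):
    """Estimate the first principal component of zero-mean data X with
    Oja's rule  w <- w + lr * y * (x - y * w),  where y = w . x.
    The subtractive term -lr * y**2 * w is the multiplicative normalization
    that keeps ||w|| near 1 instead of letting pure Hebbian growth diverge."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in rng.permutation(X):          # one stochastic pass per epoch
            y = w @ x
            w += lr * y * (x - y * w)
    return w / np.linalg.norm(w)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 5)) * np.array([2.0, 1.0, 0.5, 0.3, 0.2])
    X -= X.mean(axis=0)
    w = oja_first_pc(X)
    evals, evecs = np.linalg.eigh(np.cov(X.T))
    print(abs(w @ evecs[:, -1]))              # ~1.0 if aligned with the leading PC
```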
“…If we take $\Lambda = I$, the dynamical system (8) reduces to that of Pehlevan, Sengupta, and Chklovskii [3, Theorem 1]. They proved that, at any stable fixed point $(M_{\mathrm{FP}}, W_{\mathrm{FP}})$, the matrix $M_{\mathrm{FP}}^{-1} W_{\mathrm{FP}}$ has orthogonal rows spanning the principal subspace of $X$.…”
Section: A. Stability of Dynamical Systems (mentioning)
confidence: 99%
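For context, the cited result concerns Hebbian/anti-Hebbian similarity-matching networks whose output at each step is $y = M^{-1} W x$, with a Hebbian update of the feedforward weights $W$ and an anti-Hebbian update of the lateral weights $M$; at stable fixed points $M^{-1} W$ has orthogonal rows spanning the principal subspace. The sketch below follows that general scheme with my own learning rates, initialization, and convergence check; it is not the exact dynamical system (8) of the citing paper.

```python
import numpy as np

def similarity_matching_subspace(X, k, lr=0.01, epochs=20, seed=0):
    """Hebbian/anti-Hebbian similarity-matching sketch: output y = M^{-1} W x,
    Hebbian update of the feedforward weights W (toward y x^T) and
    anti-Hebbian update of the lateral weights M (toward y y^T).
    At convergence the rows of M^{-1} W span the principal subspace."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(k, d))
    M = np.eye(k)
    for _ in range(epochs):
        for x in rng.permutation(X):
            y = np.linalg.solve(M, W @ x)     # output; needs the matrix inverse
            W += lr * (np.outer(y, x) - W)    # Hebbian
            M += lr * (np.outer(y, y) - M)    # anti-Hebbian
    return np.linalg.solve(M, W)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.normal(size=(3000, 6)) * np.array([3.0, 2.0, 1.0, 0.5, 0.3, 0.2])
    X -= X.mean(axis=0)
    F = similarity_matching_subspace(X, k=2)
    # Compare the learned row space with the true top-2 principal subspace:
    # the singular values below are cosines of principal angles (~1 means aligned).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Q, _ = np.linalg.qr(F.T)
    print(np.linalg.svd(Vt[:2] @ Q, compute_uv=False))
```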
“…The dynamics (8) and (9) seem to require the matrix inverse $M^{-1}(s)$, which is cumbersome for a biologically plausible neural network and required iterations in the previous similarity matching networks. However, Theorems 1 and 2 show that at fixed points $M_{\mathrm{FP}}$ is diagonal, which forms the basis of our approach for iteration-free dynamics.…”
Section: B. Avoiding Matrix Inversion (mentioning)
confidence: 99%
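To make the computational point concrete: when the lateral matrix is diagonal, the output $y = M^{-1} W x$ reduces to an element-wise division, so no matrix inversion or recurrent iteration is needed. The toy comparison below is purely illustrative (all sizes and matrices are made up) and is not the citing paper's iteration-free dynamics:

```python
import numpy as np

rng = np.random.default_rng(3)
k, d = 4, 10
W = rng.normal(size=(k, d))                      # feedforward weights (made up)
x = rng.normal(size=d)                           # one input sample

# Generic lateral matrix: the output needs a full linear solve
# (or, in a network, recurrent iterations that implement it).
M_full = np.eye(k) + 0.3 * rng.normal(size=(k, k))
M_full = 0.5 * (M_full + M_full.T)               # keep it symmetric
y_full = np.linalg.solve(M_full, W @ x)

# Diagonal lateral matrix (the fixed-point situation of Theorems 1 and 2):
# the same output is an element-wise division, i.e. iteration-free.
M_diag = np.diag(np.abs(rng.normal(size=k)) + 0.5)
y_diag = (W @ x) / np.diag(M_diag)
print(np.allclose(y_diag, np.linalg.solve(M_diag, W @ x)))   # True
```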
“…forward direction links), consisting of three layers: the input, hidden, and output layers [15]. This kind of network is called the autoassociative network [16]. The performance of an MLP depends on its generalization capability, i.e., the quality of its data representation.…”
Section: Methods (mentioning)
confidence: 99%
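The autoassociative network described in this statement is the classic three-layer MLP trained to reproduce its own input; with linear units and a k-unit bottleneck, minimizing the reconstruction error drives the hidden layer to span the k-dimensional principal subspace of the centered data. The sketch below is my own minimal NumPy version (the layer sizes, learning rate, and full-batch gradient loop are assumptions, not taken from the cited work):

```python
import numpy as np

def linear_autoencoder(X, k, lr=0.05, epochs=2000, seed=0):
    """Three-layer autoassociative network x -> h -> x_hat with linear units,
    trained by full-batch gradient descent on the reconstruction error
    ||x_hat - x||^2. With a k-unit bottleneck the hidden layer ends up
    spanning the k-dimensional principal subspace of the centered data."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W1 = rng.normal(scale=0.1, size=(k, d))    # encoder: input -> hidden
    W2 = rng.normal(scale=0.1, size=(d, k))    # decoder: hidden -> output
    Xc = X - X.mean(axis=0)
    n = len(Xc)
    for _ in range(epochs):
        H = Xc @ W1.T                          # hidden activations
        E = H @ W2.T - Xc                      # reconstruction error
        gW2 = E.T @ H / n                      # gradient w.r.t. decoder weights
        gW1 = (E @ W2).T @ Xc / n              # gradient w.r.t. encoder weights
        W2 -= lr * gW2
        W1 -= lr * gW1
    return W1, W2

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = rng.normal(size=(1000, 6)) * np.array([3.0, 2.0, 1.0, 0.5, 0.3, 0.2])
    W1, _ = linear_autoencoder(X, k=2)
    # The encoder's row space matches the top-2 principal subspace:
    # the singular values below are cosines of principal angles (~1 means aligned).
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    Q, _ = np.linalg.qr(W1.T)
    print(np.linalg.svd(Vt[:2] @ Q, compute_uv=False))
```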