Generalizations of principal component analysis, optimization problems, and neural networks
Year: 1995
DOI: 10.1016/0893-6080(94)00098-7

Cited by 230 publications (127 citation statements)
References 16 publications
“…with U ∈ St(n, r) that spans the subspace of interest [13,14]. Note that (16) remains invariant by the transformation U → U O with O ∈ O(r).…”
Section: Connection With Subspace Learning (mentioning)
confidence: 99%
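The invariance noted in this excerpt is easy to verify numerically. Below is a minimal sketch, assuming a projection-error objective of the form f(U) = ||X − U Uᵀ X||²_F (the excerpt's equation (16) is not reproduced here, so this particular objective is an illustrative assumption). Any such function that depends on U only through U Uᵀ is unchanged by U → U O with O orthogonal, since (U O)(U O)ᵀ = U Uᵀ.

# Numerical check (illustrative, not from the cited paper) that a
# subspace objective f(U) = ||X - U U^T X||_F^2, with U in the Stiefel
# manifold St(n, r), is unchanged under U -> U O for orthogonal O.
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 8, 3, 50

X = rng.standard_normal((n, m))                    # data, columns are samples
U, _ = np.linalg.qr(rng.standard_normal((n, r)))   # U in St(n, r)
O, _ = np.linalg.qr(rng.standard_normal((r, r)))   # O in O(r)

def f(U):
    """Reconstruction error of projecting X onto the column span of U."""
    return np.linalg.norm(X - U @ U.T @ X) ** 2

print(f(U), f(U @ O))  # identical up to floating-point rounding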
“…PCA provides an optimal solution to several signal representation problems, making it useful for feature extraction and dimensionality reduction in a wide range of applications [8], [7]. Its main drawback is that it assumes a Gaussian distribution of the data.…”
Section: An Affine Invariant Function Using PCA Bases (mentioning)
confidence: 99%
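As a concrete illustration of the dimensionality-reduction use this excerpt describes, here is a minimal SVD-based PCA sketch; the function name pca_reduce, the choice of r, and the toy data are illustrative assumptions, not taken from any of the cited papers.

# Minimal linear PCA via the SVD: project samples onto the top-r
# principal axes of the (centered) data.
import numpy as np

def pca_reduce(X, r):
    """Project rows of X (samples x features) onto the top-r principal axes."""
    Xc = X - X.mean(axis=0)                          # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:r].T                                     # principal directions (features x r)
    return Xc @ W                                    # reduced representation (samples x r)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
Z = pca_reduce(X, r=2)
print(Z.shape)  # (100, 2)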
“…The extension to nonlinear PCA (NLPCA) is not unique, due to the lack of a unified mathematical structure and of an efficient and reliable algorithm, and in some cases due to excessive freedom in the selection of representative basis functions [51,28]. Several methods have been proposed for nonlinear PCA, such as the five-layer feedforward associative network [41] and kernel PCA [66].…”
Section: Nonlinear PCA and Principal Manifolds (mentioning)
confidence: 99%
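Of the NLPCA variants this excerpt lists, kernel PCA is the most compact to sketch. The following is a minimal RBF-kernel PCA under illustrative assumptions (the gamma value, the name rbf_kernel_pca, and the toy data are not from the cited papers): it double-centers the kernel matrix in feature space and projects onto the leading eigenvectors.

# Minimal RBF kernel PCA: center the kernel matrix in feature space,
# take the top-r eigenvectors, and return the component scores.
import numpy as np

def rbf_kernel_pca(X, r, gamma=1.0):
    """Return the top-r kernel principal component scores of the rows of X."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    m = K.shape[0]
    ones = np.ones((m, m)) / m
    Kc = K - ones @ K - K @ ones + ones @ K @ ones   # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                  # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:r]                 # indices of the top-r eigenvalues
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                               # scores on the r components

rng = np.random.default_rng(2)
X = rng.standard_normal((60, 5))
print(rbf_kernel_pca(X, r=2).shape)  # (60, 2)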