2001
DOI: 10.1034/j.1600-0870.2001.00251.x

Nonlinear principal component analysis by neural networks

Abstract: With very noisy data, overfitting is a serious problem in pattern recognition. For nonlinear regression, having plentiful data eliminates overfitting, but for nonlinear principal component analysis (NLPCA), overfitting persists even with plentiful data. Thus simply minimizing mean square error is not a sufficient criterion for NLPCA to find good solutions in noisy data. A new index is proposed which measures the disparity between the nonlinear principal components u and ũ for a data point x and its near…
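The abstract is truncated before the index is defined, but the idea it describes can be sketched: compute the NLPC u of every data point and the NLPC ũ of that point's nearest neighbor, then measure how much the two disagree. Below is a minimal numpy sketch under that reading; the function name inconsistency_index, the brute-force neighbor search, and the use of 1 - correlation(u, ũ) as the disparity measure are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def inconsistency_index(X, nlpc):
    """Disparity between the NLPC of each point and that of its nearest neighbor.

    X    : (n, d) data matrix.
    nlpc : function mapping an (n, d) array to (n,) nonlinear principal components.
    """
    u = nlpc(X)                                   # NLPC of each data point
    # Brute-force nearest neighbor search (excluding the point itself).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    u_tilde = u[d2.argmin(axis=1)]                # NLPC of each nearest neighbor
    # One simple disparity measure: 1 - correlation(u, u~).
    return 1.0 - np.corrcoef(u, u_tilde)[0, 1]

# Toy check with a stand-in "NLPC": projection onto the leading PCA axis.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.5])
_, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
print(inconsistency_index(X, lambda Z: (Z - Z.mean(0)) @ Vt[0]))
```

A smooth, self-consistent component assigns similar values to neighboring points and drives the index toward zero; an overfitted zigzag solution does not.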

Cited by 90 publications (107 citation statements)
References 3 publications
“…There is an important difference between the PCA and rotated PCA methods, as it is generally impossible for a single solution both to explain the maximum global variance of the data and to approach pattern recognition. NLPCA can give both types of information; thus the nonlinearity in NLPCA unifies the PCA and rotated PCA approaches (Hsieh 2001). In this paper, in terms of variance, the first rotated PCA mode explained 31% of the total variance (not shown), versus 36% for the first PCA mode.…”
Section: Discussion (mentioning)
Confidence: 90%
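For context on the percentages quoted above: the variance explained by a PCA mode is the corresponding covariance eigenvalue divided by the sum of all eigenvalues. A minimal numpy sketch of that computation, with synthetic data standing in for the paper's field (the rotated-PCA figure would require an additional rotation step not shown here):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))  # synthetic data
Xc = X - X.mean(axis=0)                       # center each variable

# Eigenvalues of the covariance matrix give the variance along each PCA mode.
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
explained = eigvals / eigvals.sum()
print(f"first PCA mode explains {100 * explained[0]:.0f}% of total variance")
```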
“…That the classical PCA is indeed a linear version of this NLPCA can be readily seen by replacing all of the transfer functions with the identity function, thereby removing the nonlinear modeling capability of NLPCA (Hsieh 2001). The forward map to u then involves only a linear combination of the original variables, as in PCA.…”
Section: NLPCA (mentioning)
Confidence: 99%
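To make the reduction concrete, the following minimal sketch (an illustration, not the paper's implementation; the weights are random and untrained) writes the NLPCA forward map x → u through one hidden layer. Swapping the tanh transfer function for the identity collapses u to a fixed linear combination of the inputs, which is the PCA situation the quote describes.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hidden = 3, 4
W1, b1 = rng.normal(size=(n_hidden, n_in)), rng.normal(size=n_hidden)
w2, b2 = rng.normal(size=n_hidden), rng.normal()

def forward(x, transfer):
    """Forward map from input x to the (1-D) principal component u."""
    return w2 @ transfer(W1 @ x + b1) + b2

x = rng.normal(size=n_in)
u_nonlinear = forward(x, np.tanh)        # NLPCA: u is nonlinear in x
u_linear    = forward(x, lambda h: h)    # identity transfer: u is linear in x

# With the identity transfer, u is just (w2 @ W1) x + const, i.e. a fixed
# linear combination of the original variables, as in PCA.
assert np.isclose(u_linear, (w2 @ W1) @ x + (w2 @ b1 + b2))
print(u_nonlinear, u_linear)
```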