Proceedings of ICNN'95 - International Conference on Neural Networks
DOI: 10.1109/icnn.1995.488090
On the development of a neural network based orthogonal nonlinear principal component algorithm for process data analysis

Cited by 5 publications (7 citation statements)
References 7 publications
“…The O-NLPCA algorithm develops orthogonal components directly from an autoassociative neural network using the Gram–Schmidt orthogonalization process. O-NLPCA adopts the cascade control strategy to incorporate the orthogonalization procedure into NLPCA.…”
Section: Methods
confidence: 99%
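The statement above describes orthogonalizing the scores produced by an autoassociative network via Gram–Schmidt. A minimal sketch of that orthogonalization step, applied to a generic score matrix (this is an illustration of the Gram–Schmidt process itself, not the paper's exact cascade training procedure; the function name and interface are assumptions):

```python
import numpy as np

def gram_schmidt_scores(T):
    """Orthonormalize score vectors (columns of T) by classical Gram-Schmidt.

    Each column is stripped of its projections onto the previously
    orthonormalized columns, then normalized to unit length.
    """
    T = np.asarray(T, dtype=float)
    Q = np.zeros_like(T)
    for j in range(T.shape[1]):
        v = T[:, j].copy()
        for i in range(j):
            # subtract the projection of the original column onto q_i
            v -= (Q[:, i] @ T[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q
```

In the O-NLPCA setting the columns of `T` would correspond to the nonlinear scores emitted by the bottleneck layer; after this step `Q.T @ Q` is the identity, i.e. the components are mutually orthogonal.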
“…Using the cumulative percent variance method, NCA picks five uncorrelated components for process monitoring; PCA also picks five principal components. For O-NLPCA, the number of principal components is determined by the method in ref, taking 4.…”
Section: Simulation Study of NCA
confidence: 99%
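The cumulative percent variance (CPV) criterion mentioned above retains the smallest number of components whose eigenvalues account for a chosen fraction of total variance. A minimal sketch for the linear case (the threshold value and function name are assumptions for illustration):

```python
import numpy as np

def n_components_cpv(X, threshold=0.95):
    """Smallest k such that the top-k covariance eigenvalues
    explain at least `threshold` of the total variance."""
    Xc = X - X.mean(axis=0)                       # center the data
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
    cpv = np.cumsum(eigvals) / eigvals.sum()      # cumulative percent variance
    return int(np.searchsorted(cpv, threshold) + 1)
```

With this criterion, data whose variance is concentrated in one direction yields a single retained component, while more evenly spread variance keeps more.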
“…Another important characteristic of linear PCA is that the first principal component always captures the highest variance of the input data, followed by the second and so on. In NLPCA, the data information tends to be evenly distributed among the principal components (Chessari, Barton et al 1995). In view of this drawback, a training algorithm using the Gram-Schmidt process for NLPCA was proposed, in which the nonlinear scores produced were orthogonal at the end of the training session (Chessari, Barton et al 1995).…”
Section: Nonlinear PCA
confidence: 99%
“…In NLPCA, the data information tends to be evenly distributed among the principal components (Chessari, Barton et al 1995). In view of this drawback, a training algorithm using the Gram-Schmidt process for NLPCA was proposed, in which the nonlinear scores produced were orthogonal at the end of the training session (Chessari, Barton et al 1995). Although the Gram-Schmidt scheme can conceptually provide some meaningful remedies for orthogonality, in practice it suffers from a trade-off between the main objective (overall convergence) and the secondary objective (orthogonal principal components).…”
Section: Nonlinear PCA
confidence: 99%
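The trade-off described above arises whenever the orthogonality requirement is balanced against the reconstruction objective during training. One common way to express such a balance, shown here purely as an illustration (the weighting form and the name `combined_loss` are assumptions, not the cited papers' exact formulation), is a reconstruction loss plus a penalty on the off-diagonal score correlations:

```python
import numpy as np

def combined_loss(X, X_hat, T, alpha=0.1):
    """Illustrative combined objective: reconstruction error plus a
    penalty on off-diagonal entries of the score Gram matrix T'T.
    `alpha` weights the secondary (orthogonality) objective against
    the primary (reconstruction) objective."""
    recon = np.mean((X - X_hat) ** 2)
    C = T.T @ T
    off_diag = C - np.diag(np.diag(C))   # nonzero only if scores correlate
    ortho_pen = np.sum(off_diag ** 2)
    return recon + alpha * ortho_pen
```

A large `alpha` drives the scores toward orthogonality at the possible expense of reconstruction accuracy; a small `alpha` favors convergence of the main objective, which is exactly the tension the quoted passage identifies.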