2019 Chinese Control and Decision Conference (CCDC)
DOI: 10.1109/ccdc.2019.8832617
Efficient recursive kernel principal component analysis for nonlinear time-varying processes monitoring

Cited by 3 publications (2 citation statements); References 34 publications.
“…Nonlinear principal component analysis (NLPCA) is a nonlinear generalization of standard PCA using the principal curve technique, which involves moving from straight lines to curves. To implement the NLPCA method, the authors of [12,13] developed an auto-associative neural network and simulated the dynamics of continuous chemical reactors using linear and nonlinear principal component methods. The nonlinear principal components are determined by a feedforward neural network with one hidden layer.…”
Section: Multivariate Statistical Process Control (MSPC), mentioning
Confidence: 99%
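The auto-associative (bottleneck) network described in the citation statement above can be sketched as follows. This is a minimal illustration under assumptions not taken from the cited papers: the layer sizes, the tanh activation, the learning rate, and the toy parabolic data set are all illustrative choices, and the training loop is plain gradient descent rather than whatever optimizer the original authors used.

```python
import numpy as np

rng = np.random.default_rng(0)

def nlpca_fit(X, n_components=1, n_hidden=8, lr=0.05, epochs=2000):
    """Train x -> hidden -> scores -> hidden -> x_hat (bottleneck autoencoder).

    The middle (narrow) layer holds the nonlinear principal scores.
    All hyperparameters here are illustrative assumptions.
    """
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, n_hidden))              # input  -> hidden (encode)
    W2 = rng.normal(0, 0.1, (n_hidden, n_components))   # hidden -> scores (bottleneck)
    W3 = rng.normal(0, 0.1, (n_components, n_hidden))   # scores -> hidden (decode)
    W4 = rng.normal(0, 0.1, (n_hidden, d))              # hidden -> reconstruction
    for _ in range(epochs):
        H1 = np.tanh(X @ W1)
        T = H1 @ W2                                     # nonlinear principal scores
        H2 = np.tanh(T @ W3)
        Xh = H2 @ W4                                    # reconstruction
        E = Xh - X                                      # reconstruction error
        # Backpropagation through the four layers (tanh' = 1 - tanh^2).
        gW4 = H2.T @ E / n
        dH2 = (E @ W4.T) * (1 - H2 ** 2)
        gW3 = T.T @ dH2 / n
        dT = dH2 @ W3.T
        gW2 = H1.T @ dT / n
        dH1 = (dT @ W2.T) * (1 - H1 ** 2)
        gW1 = X.T @ dH1 / n
        W1 -= lr * gW1; W2 -= lr * gW2; W3 -= lr * gW3; W4 -= lr * gW4
    return W1, W2, W3, W4

# Toy curved data: points near a parabola, which linear PCA cannot
# summarize with a single straight-line component.
t = rng.uniform(-1, 1, 200)
X = np.column_stack([t, t ** 2]) + rng.normal(0, 0.02, (200, 2))
X -= X.mean(axis=0)

W1, W2, W3, W4 = nlpca_fit(X)
Xh = np.tanh(np.tanh(X @ W1) @ W2 @ W3) @ W4
err = np.mean((X - Xh) ** 2)
print("mean reconstruction error:", err)
```

With one nonlinear component the network can follow the curve, so the reconstruction error falls well below the data variance; a single linear component could not do this for data lying on a parabola.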
“…Other related works that utilize the moving window concept can be found in [190, 207–209, 238, 293]. A different adaptive approach is to use multivariate EWMA to update any part of the model, such as the kernel matrix, its eigen-decomposition, or the statistical indices [116, 132, 179, 224, 253, 281, 283, 292]. Finally, for the dictionary learning approach by Fezai et al. [246, 247] (see Section 4.8), the Woodbury matrix identity is required to update the inverse of the kernel matrix, thereby updating the dictionary of kernel features as well.…”
Section: Time-Varying Behavior and Adaptive Kernel Computation, mentioning
Confidence: 99%
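The kernel-matrix-inverse update mentioned in the statement above can be sketched with the block (Schur-complement) form of the Woodbury identity: when one sample is appended to the dictionary, the inverse of the enlarged kernel matrix is obtained from the previous inverse instead of re-inverting from scratch. This is a generic sketch, not the exact recursion of the cited works; the Gaussian (RBF) kernel and its width are illustrative assumptions.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    """Gaussian kernel; gamma is an illustrative choice."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def grow_inverse(K_inv, D, x_new, gamma=0.5):
    """Inverse of the (n+1)x(n+1) kernel matrix after adding x_new to dictionary D.

    Uses the block-inverse (Schur complement) form, O(n^2) per update
    instead of O(n^3) for a full re-inversion.
    """
    k = np.array([rbf(d, x_new, gamma) for d in D])  # cross-kernel vector
    knn = rbf(x_new, x_new, gamma)                   # self-similarity (1 for RBF)
    v = K_inv @ k
    s = knn - k @ v                                  # Schur complement (scalar)
    n = K_inv.shape[0]
    out = np.empty((n + 1, n + 1))
    out[:n, :n] = K_inv + np.outer(v, v) / s
    out[:n, n] = -v / s
    out[n, :n] = -v / s
    out[n, n] = 1.0 / s
    return out

# Check the recursive update against a direct inverse on toy data.
rng = np.random.default_rng(1)
D = [rng.normal(size=3) for _ in range(5)]
K_inv = np.linalg.inv(np.array([[rbf(a, b) for b in D] for a in D]))

x_new = rng.normal(size=3)
K_inv2 = grow_inverse(K_inv, D, x_new)

D2 = D + [x_new]
K2 = np.array([[rbf(a, b) for b in D2] for a in D2])
print("update matches direct inverse:", np.allclose(K_inv2, np.linalg.inv(K2)))
```

The same Schur-complement machinery runs in reverse to remove a sample from the dictionary, which is what makes moving-window and dictionary-pruning schemes cheap to maintain online.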