2016
DOI: 10.1002/aic.15062

A systematic comparison of PCA‐based Statistical Process Monitoring methods for high‐dimensional, time‐dependent Processes

Abstract: High‐dimensional and time‐dependent data pose significant challenges to Statistical Process Monitoring. Most of the high‐dimensional methodologies developed to cope with these challenges rely on some form of Principal Component Analysis (PCA) model and are usually classified as nonadaptive or adaptive. Nonadaptive methods include the static PCA approach and Dynamic Principal Component Analysis (DPCA) for data with autocorrelation. Methods such as DPCA with Decorrelated Residuals extend DPCA to further reduce the effects of …
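To make the abstract's setting concrete, here is a minimal, illustrative sketch of a static PCA monitoring scheme with the two usual charts: Hotelling's T² for variation inside the model subspace and the Q‐statistic (SPE) for the residual subspace. This is not the paper's exact procedure; the data, function names, and the number of retained components are placeholder assumptions.

```python
# Minimal sketch of static PCA-based monitoring with Hotelling's T^2 and
# the Q-statistic (SPE). Data and names are illustrative, not the paper's.
import numpy as np

def fit_pca_monitor(X_train, n_components):
    """Fit a PCA model on in-control data; return the pieces
    needed to compute T^2 and Q for new samples."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0, ddof=1)
    Xs = (X_train - mu) / sigma                 # autoscale
    cov = np.cov(Xs, rowvar=False)              # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]           # sort eigenvalues descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    P = eigvecs[:, :n_components]               # retained loadings
    lam = eigvals[:n_components]                # retained eigenvalues
    return mu, sigma, P, lam

def monitoring_statistics(x_new, mu, sigma, P, lam):
    """T^2 (model subspace) and Q/SPE (residual subspace) for one sample."""
    xs = (x_new - mu) / sigma
    t = P.T @ xs                                # scores
    T2 = np.sum(t**2 / lam)
    residual = xs - P @ t
    Q = residual @ residual
    return T2, Q

# Illustrative usage on synthetic in-control data
rng = np.random.default_rng(0)
X_train = rng.standard_normal((500, 10))
mu, sigma, P, lam = fit_pca_monitor(X_train, n_components=3)
T2, Q = monitoring_statistics(rng.standard_normal(10), mu, sigma, P, lam)
print(f"T2 = {T2:.3f}, Q = {Q:.3f}")
```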

Cited by 92 publications (50 citation statements) · References 32 publications
“…Because in this case the Q‐statistic is based on a scarcely labeled data set, this limit is not an optimal choice. On the other hand, the popular limit given by Jackson and Mudholkar assumes that the residual eigenvalues are small, which is not always true. Hence, the limit proposed by Box is considered in this case, which is also the preferred limit in other works aimed at industrial applications…”
Section: Methods (mentioning; confidence: 99%)
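The two control limits contrasted in this excerpt can both be written in terms of the moments of the residual eigenvalues, θ_i = Σ λ_j^i summed over the discarded components. A hedged sketch follows: Box's weighted chi‐squared approximation matches the first two moments of the Q‐statistic (g = θ₂/θ₁, h = θ₁²/θ₂), while the Jackson‐Mudholkar limit uses a normalizing transformation that assumes the residual eigenvalues are small. The eigenvalues below are illustrative placeholders.

```python
# Sketch of two Q-statistic control limits built from the residual
# eigenvalues: Box's weighted chi-squared approximation and the
# Jackson-Mudholkar limit. Eigenvalues are illustrative placeholders.
import numpy as np
from scipy.stats import chi2, norm

def q_limit_box(residual_eigvals, alpha=0.01):
    """Box-type approximation: Q ~ g * chi2(h), moment-matched so that
    g*h = theta1 (mean) and 2*g^2*h = 2*theta2 (variance)."""
    th1 = residual_eigvals.sum()
    th2 = (residual_eigvals**2).sum()
    g = th2 / th1
    h = th1**2 / th2
    return g * chi2.ppf(1.0 - alpha, h)

def q_limit_jackson_mudholkar(residual_eigvals, alpha=0.01):
    """Jackson & Mudholkar (1979) limit; assumes small residual eigenvalues."""
    th1 = residual_eigvals.sum()
    th2 = (residual_eigvals**2).sum()
    th3 = (residual_eigvals**3).sum()
    h0 = 1.0 - 2.0 * th1 * th3 / (3.0 * th2**2)
    c = norm.ppf(1.0 - alpha)                    # standard normal quantile
    term = (c * np.sqrt(2.0 * th2 * h0**2) / th1
            + 1.0 + th2 * h0 * (h0 - 1.0) / th1**2)
    return th1 * term**(1.0 / h0)

# Illustrative residual eigenvalues (discarded principal components)
lam_res = np.array([0.9, 0.5, 0.2, 0.05])
print(q_limit_box(lam_res), q_limit_jackson_mudholkar(lam_res))
```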
“…One common particularity of industrial systems is that, because of the high‐frequency nature of the data, the predictors' data collected in time are serially dependent (autocorrelated). This phenomenon can lead to various problems in modelling. [18][19][20] In the case of labeled data, autocorrelation should not pose a problem, since the data are collected at distant time intervals, usually because of the high cost of sampling and inspection. Autocorrelation can be a problem in the case of unlabeled data, as the data are sampled at a high frequency.…”
Section: Methods (mentioning; confidence: 99%)
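The remedy named in the paper's abstract for exactly this autocorrelation problem, DPCA, works by augmenting each observation with its l previous samples before fitting PCA, so that the model captures the serial dependence. A minimal sketch of that time‐lagged matrix construction follows; the lag order and data are illustrative assumptions.

```python
# Sketch of the time-lagged data matrix used by DPCA to absorb
# autocorrelation: each row is augmented with its l previous samples.
# The lag order and the data below are illustrative.
import numpy as np

def lagged_matrix(X, lags):
    """Stack [x_t, x_{t-1}, ..., x_{t-lags}] row-wise.
    X has shape (n, m); result has shape (n - lags, m * (lags + 1))."""
    n, m = X.shape
    blocks = [X[lags - j : n - j] for j in range(lags + 1)]
    return np.hstack(blocks)

X = np.arange(12, dtype=float).reshape(6, 2)   # 6 samples, 2 variables
print(lagged_matrix(X, lags=2).shape)          # (4, 6)
```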
“…The historical data‐based FDD methods, also known as data‐driven FDD methods, can be classified into statistical methods, shallow machine learning methods, and deep learning methods. The statistical methods include principal component analysis (PCA), partial least squares (PLS), independent component analysis (ICA), Fisher discriminant analysis (FDA), and Bayesian theory. Shallow machine learning methods refer to FDD methods based on traditional machine learning models other than deep neural networks, including shallow artificial neural networks (ANN), support vector machines (SVM), artificial immune systems (AIS), k‐nearest neighbours (KNN), and Gaussian mixture models (GMM).…”
Section: Related Work (mentioning; confidence: 99%)