Process monitoring is crucial for maintaining favorable operating conditions and has received considerable attention in recent decades. A modern plant-wide process generally consists of multiple operational units and a large number of measured variables. The correlations among these variables and units are complex, which makes monitoring such plant-wide processes imperative but challenging. With the rapid advancement of industrial sensing techniques, process data containing meaningful process information can now be collected, and data-driven multivariate statistical plant-wide process monitoring (DMSPPM) has become popular. The key idea of DMSPPM is to first decompose a plant-wide process into multiple subprocesses and then establish a data-driven model for monitoring each of them, in which process variable decomposition is important for guaranteeing the monitoring performance. In this review, we first introduce the basics of multivariate statistical process monitoring and highlight the necessity of designing a distributed monitoring scheme. State-of-the-art DMSPPM methods are then revisited. Finally, opportunities and challenges for DMSPPM methods are discussed.
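As an illustration of the decompose-then-monitor idea, the sketch below partitions the measured variables into hypothetical blocks and fits a separate PCA model with a Hotelling T² statistic per block. The block boundaries, data, and component counts are all illustrative assumptions, not taken from any specific reviewed method.

```python
import numpy as np

def fit_block_pca(X, n_pc):
    """Fit a PCA model on normal operating data X (samples x variables)."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sigma
    # Eigendecomposition of the sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_pc]   # keep top n_pc components
    return {"mu": mu, "sigma": sigma,
            "P": eigvecs[:, order], "lam": eigvals[order]}

def t2_statistic(model, x):
    """Hotelling's T^2 of one sample in the principal component subspace."""
    xs = (x - model["mu"]) / model["sigma"]
    t = model["P"].T @ xs
    return float(np.sum(t ** 2 / model["lam"]))

# Hypothetical plant-wide data: 9 variables split into three 3-variable blocks.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 9))
blocks = [slice(0, 3), slice(3, 6), slice(6, 9)]
models = [fit_block_pca(X[:, b], n_pc=2) for b in blocks]

x_new = rng.normal(size=9)
t2_per_block = [t2_statistic(m, x_new[b]) for m, b in zip(models, blocks)]
# A fault would be flagged if any block's T^2 exceeds its control limit.
```

In a real distributed scheme the variable blocks come from process knowledge or a data-driven decomposition rather than fixed slices.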
Multivariate statistical process monitoring (MSPM) performs dimensionality reduction on process variables to obtain low-dimensional representations that capture most of the information in the original data space. However, most MSPM models are developed in unsupervised settings, so any information discarded during dimensionality reduction may degrade the process monitoring performance. To address both issues (i.e., dimensionality reduction and information preservation), this paper proposes a distributed statistical process monitoring scheme. The proposed method employs principal component analysis to derive four distinct and interpretable subspaces from the original process variables according to their relevance or irrelevance to the principal component subspace and the residual subspace. Each subspace serves as a low-dimensional representation of the original data space, thereby preserving the information of the original data space without loss. A squared Mahalanobis distance, introduced as the monitoring statistic, is calculated directly in each subspace for fault detection. Bayesian inference is then introduced as the decision fusion strategy to obtain a final, unique probability index. The feasibility and superiority of the proposed method were investigated through a case study on the well-known Tennessee Eastman process.
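A minimal sketch of the per-subspace statistic and the fusion step, assuming a single generic subspace and one common Bayesian fusion heuristic from the distributed-monitoring literature; the exponential likelihood forms, the prior, and the chi-square control limit are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def mahalanobis_sq(x, mu, cov):
    """Squared Mahalanobis distance of sample x from (mu, cov)."""
    d = x - mu
    return float(d @ np.linalg.solve(cov, d))

def fault_probability(stat, limit, prior_fault=0.01):
    """One common Bayesian-inference fusion heuristic: conditional
    likelihoods decay exponentially around the control limit."""
    p_n, p_f = 1.0 - prior_fault, prior_fault
    like_n = np.exp(-stat / limit)      # likelihood under normal operation
    like_f = np.exp(-limit / stat)      # likelihood under a fault
    return like_f * p_f / (like_n * p_n + like_f * p_f)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))           # training data for one subspace
mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)

stats = [mahalanobis_sq(x, mu, cov)
         for x in (rng.normal(size=4),          # normal sample
                   rng.normal(size=4) + 5.0)]   # shifted (faulty) sample
limit = 9.49  # chi-square 95% limit for 4 degrees of freedom (illustrative)
probs = [fault_probability(s, limit) for s in stats]
# The shifted sample yields a markedly higher fault probability.
```

In the full scheme, one such probability is computed per subspace and the fused index is compared against a single threshold, giving one decision for the whole process.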
Sensitive principal component analysis (SPCA) is proposed to improve the performance of principal component analysis (PCA)-based chemical process monitoring by solving the information loss problem and reducing the nondetection rate of the T² statistic. Generally, the selection of principal components (PCs) in PCA-based process monitoring is subjective, which can lead to information loss and poor monitoring performance. The SPCA method first builds a conventional PCA model from normal samples, then indexes the PCs that reflect the dominant variation of abnormal observations, and uses these sensitive PCs (SPCs) to monitor the process. Moreover, a novel fault diagnosis approach based on SPCA is also proposed, exploiting the ability of the SPCs to represent the main characteristics of the fault. Case studies on the Tennessee Eastman process demonstrate the effectiveness of SPCA for online monitoring, showing that its performance is significantly better than that of classical PCA methods.
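The sensitive-PC selection step might be sketched as follows; ranking PCs by how strongly an abnormal window excites them relative to their normal-operation variance is an assumed stand-in for the paper's exact indexing rule, and the fault data are synthetic.

```python
import numpy as np

def fit_pca(X):
    """Conventional PCA on normal data: mean, loadings, eigenvalues."""
    mu = X.mean(axis=0)
    lam, P = np.linalg.eigh(np.cov(X - mu, rowvar=False))
    order = np.argsort(lam)[::-1]
    return mu, P[:, order], lam[order]

def sensitive_pcs(mu, P, lam, X_abnormal, k):
    """Rank PCs by how strongly the abnormal window excites them
    relative to their normal-operation variance, keep the top k."""
    scores = (X_abnormal - mu) @ P             # scores of abnormal samples
    excitation = (scores ** 2).mean(axis=0) / lam
    return np.argsort(excitation)[::-1][:k]

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5))                  # normal operating data
mu, P, lam = fit_pca(X)

# Hypothetical fault: a mean shift along the 3rd measured variable.
X_f = rng.normal(size=(50, 5))
X_f[:, 2] += 4.0
spcs = sensitive_pcs(mu, P, lam, X_f, k=2)
# A T^2 statistic computed only over these sensitive PCs then
# monitors the process, avoiding dilution by insensitive components.
```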
The performance of the differential evolution (DE) algorithm is significantly affected by the choice of mutation strategies and control parameters. Maintaining the search capability of various control parameter combinations throughout the entire evolution process is also a key issue. This paper proposes a self-adaptive DE algorithm with zoning evolution of control parameters and adaptive mutation strategies. In the proposed algorithm, the mutation strategies are adjusted automatically as the population evolves, and the control parameters evolve within their own zones to self-adapt and autonomously discover near-optimal values. The proposed algorithm is compared with five state-of-the-art DE variants on a set of benchmark test functions, and seven nonparametric statistical tests are applied to analyze the experimental results. The results indicate that the overall performance of the proposed algorithm is better than that of the five existing improved algorithms.
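A compact sketch of self-adapting control parameters in DE, using the well-known jDE-style random regeneration of per-individual F and CR as a simplified stand-in for the paper's zoning evolution; the population size, regeneration probability, and parameter ranges are illustrative.

```python
import numpy as np

def self_adaptive_de(f, bounds, pop_size=30, gens=200, seed=0):
    """Minimize f with DE/rand/1/bin, where each individual carries its
    own F and CR that are randomly regenerated with small probability."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    F = rng.uniform(0.1, 0.9, pop_size)    # per-individual scale factors
    CR = rng.uniform(0.0, 1.0, pop_size)   # per-individual crossover rates
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Self-adapt control parameters (probability 0.1, as in jDE)
            if rng.random() < 0.1:
                F[i] = 0.1 + 0.8 * rng.random()
            if rng.random() < 0.1:
                CR[i] = rng.random()
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            # DE/rand/1 mutation and binomial crossover
            mutant = np.clip(pop[a] + F[i] * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(dim) < CR[i]
            cross[rng.integers(dim)] = True    # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                   # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

best_x, best_f = self_adaptive_de(lambda x: np.sum(x ** 2),
                                  bounds=[(-5, 5)] * 3)
```

Because good parameter values survive with their individuals while poor ones are replaced, the population gradually concentrates on effective F/CR combinations without any user tuning.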