Root cause analysis of rare but catastrophic events in the chemical process industry must contend with data scarcity, which can lead to inaccurate diagnosis. Previously, Bayesian models (BMs) have been combined with fault trees to account for data scarcity; however, BMs do not account for source-to-source variability in the collected data. To address this limitation, this work proposes a new framework that simultaneously handles data scarcity and source-to-source variability. For computational efficiency, the framework first identifies key process variables (KPVs) for rare events using a sequential combination of relative information gain and the Pearson correlation coefficient. It then performs root cause analysis of KPV deviations using a hierarchical Bayesian model with an informative prior constructed from process data to handle source-to-source variability. Finally, the performance of the proposed framework is demonstrated through a case study of the Tennessee Eastman process.
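The KPV screening step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the exact definition of "relative" information gain, the thresholds (`gain_thresh`, `corr_thresh`), and all function names are assumptions for the sketch.

```python
import numpy as np

def relative_info_gain(x, y, bins=10):
    """Information gain of the binary event label y from a discretized
    process variable x, normalized by the label entropy (one plausible
    reading of 'relative' information gain -- an assumption here)."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    h_y = entropy(y)
    h_y_given_x = 0.0
    for v in np.unique(xd):
        mask = xd == v
        h_y_given_x += mask.mean() * entropy(y[mask])
    return (h_y - h_y_given_x) / h_y if h_y > 0 else 0.0

def select_kpvs(X, y, gain_thresh=0.1, corr_thresh=0.9):
    """Rank variables by relative information gain, then drop any candidate
    whose |Pearson r| with an already-selected variable exceeds corr_thresh,
    so the retained KPV set is informative but not redundant."""
    gains = [relative_info_gain(X[:, j], y) for j in range(X.shape[1])]
    order = np.argsort(gains)[::-1]
    kept = []
    for j in order:
        if gains[j] < gain_thresh:
            break  # remaining candidates carry too little information
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < corr_thresh
               for k in kept):
            kept.append(j)
    return kept
```

On synthetic data where one variable drives the event, a near-duplicate of it is redundant, and a third is pure noise, the screen keeps exactly one of the correlated pair and rejects the noise.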
Energy-efficient, on-target product purity operation of a high-purity three-product benzene−toluene−xylene ternary Petlyuk column is studied. The basic regulatory control system consists of four temperature inferential control loops plus a fixed prefractionator vapor-to-fresh-feed ratio. An economic control system on top of the regulatory layer adjusts these five set points: three product purity controllers adjust three temperature set points, while a reboiler duty reduction controller adjusts the remaining two free set points in the regulatory layer. The latter makes these adjustments to prevent the downward curvature of the prefractionator and main-column middle-section temperature profiles from becoming too large. Closed-loop results for large feed composition changes show that significant energy savings (up to 15%) are realized via temperature profile curvature control compared with constant set point column operation. The case study highlights the need for innovative control strategies to realize the sustainability benefit of the integrated complex Petlyuk column during actual operation.
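The curvature measure behind the reboiler duty reduction controller can be illustrated with a discrete second difference of the tray temperature profile. This is only a sketch under stated assumptions: the abstract does not specify how curvature is quantified, and the second-difference metric, the target value, the proportional gain, and both function names are hypothetical.

```python
import numpy as np

def max_downward_curvature(temps):
    """Largest downward (concave) curvature of a tray temperature profile,
    approximated by the most negative discrete second difference
    T[i+1] - 2*T[i] + T[i-1] over the interior trays."""
    d2 = np.diff(temps, n=2)
    return -d2.min() if d2.min() < 0 else 0.0

def curvature_setpoint_bias(temps, target=2.0, gain=0.05):
    """Hypothetical proportional correction: when the profile's downward
    curvature exceeds the target, return a bias for the free regulatory-layer
    set points intended to flatten the profile (sign convention assumed)."""
    return gain * (max_downward_curvature(temps) - target)
```

A monotone (ramp-like) profile has zero downward curvature and produces no correction; a strongly bowed profile triggers a positive bias.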
■ INTRODUCTION

In the process industry, distillation remains the most preferred and widely used unit operation for separating liquid mixtures into constituent pure (pseudo)components.1 The basic idea is to exploit the difference in volatility of the mixture components to purify the mixture by repeated flashing. This is accomplished via countercurrent vapor−liquid contact on the trays of a simple distillation column, with the reboiler providing the vapor stream to the bottom and the condenser providing refluxed liquid to the top of the column. The process is naturally energy intensive, with the reboiler heat driving the separation, so that distillation alone can contribute up to 53% of plant energy costs.2 Thus, innovations toward energy-efficient distillation configurations for a given separation task have traditionally been of interest to the process industry. The volatility of energy prices in recent years has renewed interest in the synthesis, design, operation, and control of complex column configurations that can be significantly more energy efficient than a conventional light-out-first (direct sequence) or heavy-out-first (indirect sequence) train of simple distillation columns.

In pioneering work, Petlyuk et al.3 suggested a complex configuration consisting of a prefractionator followed by a main column with a side draw for separating a ternary ideal mixture into its constituent pure components (Figure 1a). Compared to a conventional two-column direct or indirect sequence, the prefractionator in the Petlyuk configuration mitigates remixing of the middle boiler, which distributes itself between the prefractionator top and bottom products. This reduces the inherent process irreversibility, leading to potentially significant energy savings. Literature reports (see, e.g., Triantafyllou and Smith4) indicate impressive energy savings of up to 40% for a Petlyuk configuration over a conventional two-column...
In chemical processes, Bayesian network (BN)-based approaches have been extensively applied to process fault diagnosis. Generally, a BN is learned using score-and-search algorithms, in which a search algorithm proposes candidate networks whose fit to the data is measured by a score. However, existing approaches cannot exploit knowledge of cyclic loops while learning the BN. Since cyclic loops are prevalent in chemical processes, failing to account for them yields an inaccurate BN and reduces diagnosis accuracy. Therefore, for accurate diagnosis, we propose a direct transfer entropy (DTE)-based multiblock BN that discovers cyclic loops while learning the BN. First, the process is segmented into multiple blocks. Next, block-level BNs are learned using a DTE-based score and greedy search. By eliminating the effect of common source variables, DTE finds the correct causality between process variables and yields accurate block-level BNs, which are then fused to discover significant cyclic loops that could not be found when learning a single BN. The performance of the developed methodology is demonstrated on a benchmark process.
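The causality measure underlying the score can be illustrated with a plain pairwise lag-1 transfer entropy estimated from histograms. Note this sketch is not the paper's DTE: direct transfer entropy additionally conditions on other candidate sources to remove common-cause effects, and the discretization, lag, and function name here are assumptions.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Lag-1 transfer entropy TE(x -> y) from histogram estimates:
    TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ],
    where y1 = y[t], y0 = y[t-1], x0 = x[t-1]."""
    def disc(s):
        return np.digitize(s, np.histogram_bin_edges(s, bins=bins)[1:-1])

    xd, yd = disc(x), disc(y)
    trip = list(zip(yd[1:], yd[:-1], xd[:-1]))
    n = len(trip)
    # joint and marginal counts over the discretized states
    p3 = Counter(trip)
    p_y1y0 = Counter((a, b) for a, b, _ in trip)
    p_y0x0 = Counter((b, c) for _, b, c in trip)
    p_y0 = Counter(b for _, b, _ in trip)
    te = 0.0
    for (y1, y0, x0), c in p3.items():
        p_joint = c / n
        cond_with_x = c / p_y0x0[(y0, x0)]        # p(y1 | y0, x0)
        cond_without_x = p_y1y0[(y1, y0)] / p_y0[y0]  # p(y1 | y0)
        te += p_joint * np.log2(cond_with_x / cond_without_x)
    return te
```

When x drives y with a one-step lag, the estimate in the causal direction clearly exceeds the reverse direction, which is what lets a TE-based score orient edges between process variables.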