“…This approach to optimization has also been applied to real-world problems, in particular in the chemical industry: for example, in real-time optimization of hydrocracking [19], in batch bio-process optimization for finding alternatives to fossil-based materials [20], in batch optimization of bioreactors for the food industry [21], in real-time detection of pollution risk due to wastewater [22], and in the analysis of material qualities such as the hardness of aluminum alloys [23]. It has also been applied in other domains such as health care, for melanoma gene regulation [24] or protein-folding problems in the fight against hereditary diseases [25], and in the field of energy, as in [26] to manage the electric power in a building or a small city, or in [27] to maximize electrical energy generation with acceptable emission levels.…”
Numerical optimization solves problems that are analytically intractable, at the cost of arriving at a sufficiently good but rarely optimal solution. To maximize the result, optimization algorithms are run under the guidance and supervision of a human, usually an expert in the problem. Recent advances in deep reinforcement learning motivate interest in an artificial agent capable of learning to do the expert’s task. Specifically, we present a proximal policy optimization agent that learns to optimize in a real case study: the modeling of the photo-Fenton disinfection process, which involves a number of parameters that have to be adjusted to minimize the error of the model with respect to the experimental data collected in several trials. The expert spends an average of 4 h to find a suitable set of parameters. In contrast, the agent we present does not require a human expert to guide or validate the optimization procedure and achieves similar results in 2.5× less time.
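The calibration task described in the abstract can be sketched in miniature. This is a hypothetical illustration, not the paper's method: a simple first-order inactivation model N(t) = N0·exp(−k·t) stands in for the (much richer) photo-Fenton model, the "experimental" data are synthetic, and the crude grid search stands in for the optimizer that the expert (or the RL agent) would steer.

```python
import numpy as np

# Hypothetical stand-in for the disinfection model: first-order decay
# of a microbial population, with rate constant k to be calibrated.
def model(t, n0, k):
    return n0 * np.exp(-k * t)

# Sum of squared errors between model prediction and measured data.
def sse(params, t, data):
    n0, k = params
    return float(np.sum((model(t, n0, k) - data) ** 2))

# Synthetic "experimental" data with 2% multiplicative noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 13)            # minutes
true_n0, true_k = 1e6, 0.12
data = model(t, true_n0, true_k) * (1 + 0.02 * rng.standard_normal(t.size))

# Crude grid search over k; a real study would use a proper optimizer
# guided by the expert or, as the paper proposes, by an RL agent.
ks = np.linspace(0.01, 0.5, 500)
errors = [sse((true_n0, k), t, data) for k in ks]
best_k = ks[int(np.argmin(errors))]
print(f"calibrated k ≈ {best_k:.3f}")
```

The point of the sketch is only the loop structure: propose parameters, evaluate model error against trial data, repeat until the error is acceptable.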
“…A variety of data-driven models have been tested for this process, such as artificial neural networks (ANN) for product yield prediction [8] and convolutional neural networks (CNN) trained for similar purposes [9]. Other efforts include reinforcement learning [10], fuzzy theory [11], and deep belief networks [12] for optimization and quality prediction.…”
Hydrocracking is an energy-intensive process, and its control system aims to hold products at stable specifications. When the main product is diesel, the quality measure is usually 95% of the true boiling point. Constant diesel quality is hard to achieve when the feed characteristics vary and feedback control has a long response time. This work suggests a feedforward model predictive control structure for an industrial hydrocracker. A state-space model, an autoregressive exogenous model, a support vector machine regression model, and a deep neural network model are tested in this structure. The resulting reactor temperature decisions and final diesel product quality values are compared against each other and against the actual measurements. The results show the importance of the feed characteristic measurements. Significant improvements are shown in terms of product quality as well as energy savings through decreasing the heat duty of the preheating furnace.
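Of the model families listed in the abstract, the autoregressive exogenous (ARX) model is the simplest to sketch. The following is a hedged illustration with invented data, not the paper's model: a first-order ARX relation y[t] = a·y[t−1] + b·u[t−1], where u stands in for a measured feed characteristic and y for a diesel-quality proxy, identified by ordinary least squares.

```python
import numpy as np

# Simulate a "true" first-order ARX process (coefficients chosen
# arbitrarily for illustration): y[t] = 0.8*y[t-1] + 0.5*u[t-1] + noise.
rng = np.random.default_rng(1)
n = 200
u = rng.standard_normal(n)          # exogenous input (feed property)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.01 * rng.standard_normal()

# Identify the coefficients by least squares on lagged regressors.
X = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(f"a ≈ {a_hat:.2f}, b ≈ {b_hat:.2f}")
```

In a feedforward MPC structure such as the one the abstract describes, a model of this kind predicts the effect of measured feed variations so the controller can adjust reactor temperature before the quality measurement reacts.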
“…Lawrence Ricker implemented, in 2015, an estimate of the plant's operating cost in his MATLAB/Simulink model. This estimate considers the consumption of the variables XMEAS (10, 19, 20, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39) and XMV(8) (Figure 3.5).…”
Luz, Eric M. L.; Caarls, Wouter (Advisor). Study of reinforcement learning techniques applied to the control of chemical processes. Rio de Janeiro, 2021. 91 p.