Model-free learning control of neutralization processes using reinforcement learning
2007
DOI: 10.1016/j.engappai.2006.10.009

Cited by 59 publications (22 citation statements)
References 21 publications (25 reference statements)
“…Since MFLC allows a tolerance error of the process whenever the pH is within the control band, the control signal is very smooth when the pH is close to or within the control band, even if some exploration is carried out. The detailed discussion of the application of MFLC to pH control is given in Syafiie et al (2007a).…”
Section: B) Online Experimental Results and Discussion
confidence: 99%
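The control-band idea quoted above can be sketched with a minimal tabular Q-learning loop. This is an illustrative sketch only, not the authors' MFLC implementation: the setpoint, band width, action set, state discretization, and all constants are assumptions. The key point it shows is that the reward imposes no penalty while the pH stays inside the tolerance band, so the learned policy has no incentive to move the control signal aggressively near the setpoint.

```python
import random

SETPOINT = 7.0
BAND = 0.5            # tolerance band half-width around the setpoint (assumed)
ACTIONS = [-1.0, -0.1, 0.0, 0.1, 1.0]   # changes to reagent flow (assumed units)

def reward(ph):
    """Zero penalty inside the control band, negative error outside it."""
    error = abs(ph - SETPOINT)
    return 0.0 if error <= BAND else -error

def discretize(ph):
    """Coarse state index from the pH reading (0.5-pH resolution)."""
    return int(round(ph * 2))

Q = {}  # Q[(state, action)] -> estimated value

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy selection: explore occasionally, otherwise act greedily."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, r, next_state, alpha=0.1, gamma=0.95):
    """Standard one-step Q-learning update."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (r + gamma * best_next - old)
```

Because `reward` is flat inside the band, state-action values there differ little across actions, which is one way to obtain the smooth in-band control signal the excerpt describes even while epsilon-greedy exploration continues.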
“…A fuzzy control algorithm combined with a Takagi-Sugeno (T-S) fuzzy model (Pishvaie & Shahrokhi, 2006), a fuzzy gain scheduled control scheme (Regunath & Kadirkamanathan, 2001), a fuzzy self-tuning PI control (Babuska, Oosterhoff, Oudshoorn, & Bruijn, 2002), fuzzy sliding mode control (Shahraz & Boozarjomehry, 2009), fuzzy internal model control (Edgar & Postlethwaite, 2000; Han, Han, & Guo, 2006), neural networks and adaptive controller (Krishnapura & Jutan, 2000), PID controller using linearization through neural networks (Chen & Huang, 2004), a linear internal model controller optimized by a genetic algorithm (Mwembeshi, Kent, & Salhi, 2004), a multiple model predictive controller based on a T-S fuzzy model (He, Cai, & Li, 2005), iterative nonlinear model predictive control (Cueli & Bordons, 2008) and a model-free learning control based on reinforcement learning algorithms (Syafiie, Tadeo, & Martinez, 2007) have been reported for pH control. Some other works in this category can be found in the literature (Karr & Gentry, 1993; Loh, Looi, & Fong, 1995; Qin & Borders, 1994).…”
Section: Article In Press
confidence: 99%
“…Reinforcement learning (RL) is a popular algorithm for determining a policy that optimizes a user‐defined cost function through interactions with an environment.1‐7 There are several applications of RL in chemical process engineering, such as a model‐free learning controller for pH control,8 a policy gradient approach for batch bioprocess systems,9 approximate dynamic programming (ADP) for control of fed‐batch bioreactors,10 control of proppant concentrations inside a fracture for hydraulic fracturing,11 and control of alkali‐surfactant‐polymer flooding for oil recovery.12 However, compared with the computer science field, which can set aside stability and focus mainly on optimality, such as winning games or maximizing rewards,13 RL applications in chemical processes have not yet become prevalent.…”
Section: Introduction
confidence: 99%