2021
DOI: 10.48550/arxiv.2112.15187
Preprint
Stability-Preserving Automatic Tuning of PID Control with Reinforcement Learning

Abstract: PID control has been the dominant control strategy in the process industry due to its simplicity in design and effectiveness in controlling a wide range of processes. However, traditional methods for PID tuning often require extensive domain knowledge and field experience. To address this issue, this work proposes an automatic PID tuning framework based on reinforcement learning (RL), particularly the deterministic policy gradient (DPG) method. Different from existing studies on using RL for PID tuning, in this …

Cited by 2 publications (5 citation statements) · References 14 publications
“…PID parameters can be optimized using two main methods: classical tuning methods and optimization methods [8,9]. Classical methods often rely on heuristic tuning or model construction to address specific engineering challenges [10]. However, these methods, such as the Ziegler-Nichols method [11] and the Cohen-Coon method [12], make assumptions about the underlying model or attempt to estimate the best approximation of the PID parameters [13].…”
Section: Motivation
Mentioning confidence: 99%
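The Ziegler-Nichols method cited in the statement above can be sketched as a short conversion from closed-loop oscillation measurements to PID gains. This is a minimal illustration of the classic tuning table, not an implementation from any of the cited works; the function name and the parallel-form gain convention are assumptions.

```python
def ziegler_nichols_pid(ku: float, tu: float) -> tuple[float, float, float]:
    """Classic Ziegler-Nichols closed-loop PID rules (hypothetical helper).

    ku: ultimate gain -- the proportional gain at which the closed loop
        first sustains a constant-amplitude oscillation.
    tu: ultimate period -- the period of that oscillation.

    Returns parallel-form gains (kp, ki, kd) using the classic table:
    Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8.
    """
    kp = 0.6 * ku
    ki = kp / (tu / 2.0)   # integral gain = Kp / Ti
    kd = kp * (tu / 8.0)   # derivative gain = Kp * Td
    return kp, ki, kd
```

Note that, as the quoted statement points out, this rule assumes the oscillation experiment characterizes the plant well; it gives a starting point rather than an optimal setting.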
“…Ref. [10] proposed an RL-based stability-preserving PID adaptive tuning framework that guarantees the controller's stability throughout the closed-loop process, and found that this approach reduced the MSE tracking error by more than 10% compared with other methods. However, this method requires a known set of PID parameters as reference values to guide the agent's exploration.…”
Section: Related Work
Mentioning confidence: 99%
“…Due to their simplicity and ease of implementation, proportional-integral-derivative (PID) controllers have been extensively used in industrial processes over the past few decades, and they constitute almost 90% of controllers in industrial control loops (Al-Bargothi et al. 2019). They capture the system's historical behavior through the integral part and forecast its future behavior via the derivative part (Lakhani et al. 2021). At the same time, the advancement of modern science and technology has increased the complexity of industrial processes, leading to potential problems such as unstable control loops (Shamsuzzoha 2018), which can reduce the effectiveness of PID tuning.…”
Section: Introduction
Mentioning confidence: 99%
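The integral/derivative interpretation in the statement above (integral accumulates past error, derivative extrapolates its trend) can be sketched as a minimal discrete PID update. All names here are hypothetical illustrations for clarity; this is not the implementation from the paper or the citing work.

```python
class PID:
    """Minimal discrete parallel-form PID controller (illustrative sketch)."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # accumulates past error: the "history" term
        self.prev_error = None    # last error, for the derivative estimate

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        # Backward-difference derivative: extrapolates the error trend
        # (zero on the first step, when no previous error exists).
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

For example, with `kp=1.0, ki=0.5, kd=0.1, dt=1.0`, the first call `update(1.0, 0.0)` sees error 1, integral 1, and derivative 0, so the control output is 1.5.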
“…This method does not require much advanced technical support, but it depends on the engineer's individual expertise, which cannot guarantee the accuracy and adaptability of the tuning. Apart from manual tuning, PID tuning approaches are categorized into heuristic tuning, rule-based tuning, and model-based tuning (Lakhani et al. 2021). Heuristic tuning relies on trial and error guided by prior knowledge of the control process and its corresponding PID parameters; it is easy to implement but time-consuming.…”
Section: Introduction
Mentioning confidence: 99%