2021
DOI: 10.1016/j.neucom.2020.03.070

Active disturbance rejection controller for multi-area interconnected power system based on reinforcement learning

Cited by 32 publications (20 citation statements)
References 21 publications
“…where r is the instant reward value obtained by performing an action in RL, and the design of r in this article is shown in Equation (28).…”
Section: Working Process of Double DQN
confidence: 99%
“…For example, Chen et al 27 used LADRC to control the heading angle of the ship without considering dynamics and used Q learning to optimize the controller parameters. Zheng et al 28 applied the Q-learning-optimized LADRC to the power system and got a good control effect. However, in practice, in the optimization process of Q learning, the system state must be discretized, which may generate a large state space and is inconvenient for the storage and calculation of the Q table.…”
Section: Introduction
confidence: 99%
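The limitation noted in the excerpt above, that tabular Q-learning requires discretizing the system state and storing a Q-table over every state-action pair, can be illustrated with a minimal sketch of Q-learning-based gain tuning. The toy plant model, reward, gain grid, and discretization below are illustrative assumptions for this sketch, not the designs used in the cited papers.

```python
import numpy as np

# Minimal, hypothetical sketch: tabular Q-learning tuning a single controller
# gain over a discretized error state. Plant, reward, and grids are assumed.

np.random.seed(0)

error_bins = np.linspace(-1.0, 1.0, 21)    # 21 discrete error states
gain_actions = np.linspace(5.0, 50.0, 10)  # 10 candidate controller gains

# The Q-table grows with the product of both grids; finer discretization
# quickly inflates storage and computation, as the excerpt points out.
Q = np.zeros((len(error_bins), len(gain_actions)))

alpha, gamma, epsilon = 0.1, 0.9, 0.1

def plant_step(error, gain):
    """Toy first-order error dynamics: a larger gain shrinks the tracking
    error faster, plus small noise. Stands in for the closed-loop response."""
    return 0.95 * error - 0.001 * gain * error + 0.01 * np.random.randn()

def discretize(error):
    """Map a continuous error value to its Q-table row index."""
    return int(np.clip(np.digitize(error, error_bins) - 1, 0, len(error_bins) - 1))

for episode in range(200):
    error = np.random.uniform(-1.0, 1.0)
    state = discretize(error)
    for _ in range(50):
        # Epsilon-greedy selection over the discrete gain set
        if np.random.rand() < epsilon:
            action = np.random.randint(len(gain_actions))
        else:
            action = int(np.argmax(Q[state]))
        error = plant_step(error, gain_actions[action])
        next_state = discretize(error)
        reward = -abs(error)  # instant reward: penalize tracking error
        # Standard Q-learning update
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print("Preferred gain per error bin:", gain_actions[Q.argmax(axis=1)])
```

Even this scalar example needs a 21 x 10 table; with several continuous states per control area, the table would grow multiplicatively, which is the motivation the citing authors give for moving beyond Q-learning.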
“…For example, Rahman et al [17] compared the control effects of ADRC and PID for an LFC system, and the simulation results showed that ADRC is a powerful substitute for PID and has significant performance advantages for LFC. Zheng et al [18] applied ADRC to a three-area interconnected power system both in regulated and deregulated environments. It is worth mentioning that most of the controllers in ADRC currently designed for LFC use linear ADRC (LADRC), which was proposed by Gao [19], where the internal structures of the ADRC method have been greatly simplified.…”
Section: Introduction
confidence: 99%
“…Recently, research has been conducted on reinforcement learning to adjust controller parameters. For example, Chen [27] and Zheng [28] applied Q learning to parameter tuning of LADRC. Nevertheless, Q learning is more suitable for systems with limited states and limited actions.…”
Section: Introduction
confidence: 99%