2022
DOI: 10.1016/j.egyr.2021.11.126

Design and tests of reinforcement-learning-based optimal power flow solution generator

Cited by 8 publications (4 citation statements) | References 11 publications
“…Zhen et al. [15] model the economic dispatch as a one-step Markov decision process (MDP), i.e., the solution is not generated iteratively but in a one-shot fashion. They use the Twin-Delayed DDPG (TD3) algorithm to learn to minimize generation costs under multiple constraints.…”
Section: B. Learning the Optimal Power Flow
confidence: 99%
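To make the one-shot formulation concrete, here is a minimal sketch of what a one-step OPF environment can look like, assuming a gymnasium-style interface; the class name OneStepOPFEnv, the linear cost curve, and the power-balance penalty are illustrative placeholders, not the formulation used in [15].

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class OneStepOPFEnv(gym.Env):
    """One-step MDP: one observation (a load scenario), one action
    (a dispatch), then the episode terminates immediately."""

    def __init__(self, n_gen=3, n_load=5, penalty=100.0):
        self.n_gen, self.n_load, self.penalty = n_gen, n_load, penalty
        self.observation_space = spaces.Box(0.0, 1.0, shape=(n_load,))
        self.action_space = spaces.Box(0.0, 1.0, shape=(n_gen,))
        self.cost_coef = np.linspace(1.0, 2.0, n_gen)  # placeholder cost curve

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.load = self.np_random.uniform(0.2, 0.8, size=self.n_load)
        return self.load.astype(np.float32), {}

    def step(self, action):
        cost = float(self.cost_coef @ action)  # generation cost
        imbalance = abs(float(action.sum()) - float(self.load.sum()))  # toy power balance
        reward = -cost - self.penalty * imbalance  # penalized objective
        # terminated=True on the first step: the solution is one-shot
        return self.load.astype(np.float32), reward, True, False, {}
```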
“…Therefore, the agent has perfect knowledge of Q if it knows the reward function. As Zhen et al. [15] showed, the OPF approximation can be implemented as a one-step environment, because the solution of one OPF is independent of the solution of the previous OPF. Exceptions are multi-step OPF problems where the optimization is done over multiple time steps, e.g., when storage systems are part of the optimization.…”
Section: Model-Extended MADDPG (M-MADDPG)
confidence: 99%
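The claim that the agent has perfect knowledge of Q once it knows the reward function follows directly from the one-step structure: every transition is terminal, so the Bellman target carries no bootstrapped term and collapses to the immediate reward. A small sketch of this point (the function name and numbers are illustrative, not from the cited work):

```python
import torch

def critic_target(reward, next_q, terminated, gamma=0.99):
    """Standard Bellman target for a critic."""
    return reward + gamma * (1.0 - terminated) * next_q

# In a one-step OPF environment, terminated is always 1, so the target
# reduces to the reward itself and the critic learns r(s, a) directly.
r = torch.tensor([-3.2])
target = critic_target(r, next_q=torch.zeros(1), terminated=torch.ones(1))
assert torch.allclose(target, r)
```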
“…In the context of ACOPF, neural networks can either be trained by imitation (supervised learning) or by interaction with a simulator through Reinforcement Learning (RL) [7]. Recent work explores the application of deep neural networks to ACOPF [8], while others [9], [10], [11], [12] frame the ACOPF problem as a closed-loop RL problem.…”
Section: Background and Motivations
confidence: 99%
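To contrast the two training routes concretely, here is a minimal, hypothetical imitation-learning sketch in PyTorch: a network is fit to (load, dispatch) pairs that would, in practice, come from an offline ACOPF solver. The random tensors below merely stand in for such a dataset; nothing here reproduces the cited works.

```python
import torch
import torch.nn as nn

n_load, n_gen = 5, 3  # placeholder problem size
policy = nn.Sequential(nn.Linear(n_load, 64), nn.ReLU(), nn.Linear(64, n_gen))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

loads = torch.rand(1024, n_load)    # stand-in load scenarios
dispatch = torch.rand(1024, n_gen)  # stand-in solver solutions (labels)

for epoch in range(10):
    pred = policy(loads)
    loss = nn.functional.mse_loss(pred, dispatch)  # imitate the solver
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the RL route, by contrast, the labels disappear: the network's dispatch is evaluated by a simulator, and a reward signal replaces the solver-provided targets.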
“…After training, the AI-based agent can adjust power flow states rapidly and is suitable for online applications. An RL-based optimal power flow solution method has been proposed in [58] using PSOPS and the twin-delayed deep deterministic policy gradient (TD3) algorithm [59]. In this paper, a TD3-based SOPF solution program is realized using Py_PSOPS.…”
Section: Framework Design
confidence: 99%
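For context, the TD3 algorithm cited as [59] rests on three mechanisms: twin critics, target-policy smoothing, and delayed actor updates. The sketch below illustrates the first two with placeholder network sizes; it is not the SOPF program realized with Py_PSOPS.

```python
import torch
import torch.nn as nn

state_dim, act_dim = 8, 3  # placeholder dimensions
actor_t = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                        nn.Linear(64, act_dim), nn.Tanh())
q1_t = nn.Sequential(nn.Linear(state_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
q2_t = nn.Sequential(nn.Linear(state_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def critic_target(s2, r, done, gamma=0.99, noise_std=0.2, clip=0.5):
    with torch.no_grad():
        # Target-policy smoothing: clipped noise on the target action.
        a2 = actor_t(s2)
        a2 = (a2 + (torch.randn_like(a2) * noise_std).clamp(-clip, clip)).clamp(-1, 1)
        sa2 = torch.cat([s2, a2], dim=-1)
        # Twin critics: the minimum of the two counteracts Q overestimation.
        return r + gamma * (1.0 - done) * torch.min(q1_t(sa2), q2_t(sa2))

y = critic_target(torch.rand(4, state_dim), torch.rand(4, 1), torch.zeros(4, 1))
# The third mechanism, delayed updates, means the actor and the target
# networks above are updated only once every few critic updates.
```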