2016
DOI: 10.1016/j.epsr.2016.06.041

Reinforcement learning approach for congestion management and cascading failure prevention with experimental application

Cited by 28 publications (5 citation statements, published 2017–2023).
References 20 publications.

Citation statements:
“…Q-learning was suggested in [57] to determine optimal control of active power generation for preventing cascading failure and blackout in smart grids. This approach belongs to subsystem-level controls and considers single line outages (termed N-1 contingency in the power system literature) and two consecutive line outages (termed N-1-1).…”
Section: Preventive Control (mentioning)
confidence: 99%
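
To make the cited approach concrete, below is a minimal tabular Q-learning sketch for preventive generation redispatch. The toy environment, state discretization, and reward shaping are illustrative assumptions, not the experimental setup of the cited paper.

```python
# Minimal tabular Q-learning sketch for preventive generation redispatch.
# Hypothetical toy setting: the state is a discretized loading level of one
# monitored line; actions shift active power generation down, hold, or up.
import random
from collections import defaultdict

N_STATES = 10          # discretized line-loading bins (0 = lightly loaded)
ACTIONS = [-1, 0, 1]   # redispatch: decrease / hold / increase loading
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> value estimate

def step(state, action):
    """Toy transition: redispatch nudges line loading; overload is penalized."""
    next_state = min(max(state + action + random.choice([-1, 0, 1]), 0), N_STATES - 1)
    # Loading in the last bin risks tripping the line -> large penalty,
    # standing in for the cascading-failure cost in the cited work.
    reward = -100.0 if next_state == N_STATES - 1 else -float(next_state)
    return next_state, reward

for episode in range(2000):
    state = random.randrange(N_STATES)
    for _ in range(50):
        if random.random() < EPSILON:                     # explore
            action = random.choice(ACTIONS)
        else:                                             # exploit
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning temporal-difference update.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Greedy policy: preferred redispatch action for each loading level.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(policy)
```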
“…It is shown in the literature that WAM temporal information can further be used in WAC designs to perform real-time transient stability enhancement, which can improve the power transfer capability of a transmission system and prevent generation or load disconnection, or catastrophic failure, following a sequence of disturbances in the system. Article [96] used RL for preventing cascading failure (CF) and blackout in smart grids by acting on the output power of the generators in real time. The article makes use of the state-action policy update feature of the RL algorithm, which learns from interactions with the system.…”
Section: F. Transient Stability Enhancement Controller (mentioning)
confidence: 99%
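
The state-action update this statement refers to is, in standard Q-learning form:

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
```

where \alpha is the learning rate and \gamma the discount factor; each interaction with the system yields a transition (s_t, a_t, r_{t+1}, s_{t+1}) that refines the value table online.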
“…Few DRL-based implementations adapted for preventive control are found in the literature; in [13], the authors argue that this may be because these control problems have traditionally been formulated as static optimization problems. One example of an RL-based implementation for preventive control is presented in [21], which aimed to determine the optimal control of active power generation for preventing cascading failures and blackouts. However, the method was based on a tabular form of Q-learning, which in general is not suited to handling large state and action spaces.…”
Section: Introduction (mentioning)
confidence: 99%
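
As a rough illustration of the scalability limitation noted above, the sketch below counts the Q-table entries needed as more line loadings are monitored; the bin and action counts are arbitrary assumptions.

```python
# Back-of-the-envelope illustration of why tabular Q-learning scales poorly:
# the Q-table grows exponentially with the number of monitored quantities.
def q_table_entries(n_lines: int, bins_per_line: int, n_actions: int) -> int:
    """Entries needed when each of n_lines loadings is discretized into bins."""
    return (bins_per_line ** n_lines) * n_actions

for n_lines in (3, 10, 30):
    print(n_lines, "lines ->", q_table_entries(n_lines, bins_per_line=10, n_actions=5), "entries")
# 3 lines -> 5,000 entries; 10 lines -> 5e10; 30 lines -> 5e30 (intractable),
# which is why DRL with function approximation is favored for large grids.
```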