2020
DOI: 10.1109/tsg.2019.2933191

Adaptive Power System Emergency Control Using Deep Reinforcement Learning

Abstract: Power system emergency control is generally regarded as the last safety net for grid security and resiliency. Existing emergency control schemes are usually designed off-line based on either the conceived "worst" case scenario or a few typical operation scenarios. These schemes are facing significant adaptiveness and robustness issues as increasing uncertainties and variations occur in modern electrical grids. To address these challenges, this paper developed novel adaptive emergency control schemes using deep…

Cited by 253 publications (154 citation statements)
References 31 publications

Citation statements (ordered by relevance):
“…Unlike optimization-based methods, which obtain the intermediate results and final solutions as a whole under either a day/hour-ahead or a real-time framework (including two-stage frameworks), machine learning-based methods can separate the training of intermediate results from the problem-solving process across different time horizons. For example, in the work [127], a deep reinforcement learning (DRL) and simulation-based framework with a pre-calculated neural network structure is used to deal with an adaptive power system emergency control problem. The reinforcement learning module, acting as a problem-solving engine, automatically determines its Q-network weight parameters from repeatedly simulated samples before the actual decision-making of a proactive strategy.…”
Section: B. Machine Learning-Based Proactive Strategies of Power Systems
mentioning
confidence: 99%
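The offline-training/online-decision split this statement describes follows the standard deep Q-learning pattern: fit the Q-network weights on simulated transitions first, then act greedily at decision time. The sketch below (Python/PyTorch) is illustrative only; `simulate_episode`, the dimensions, and the hyperparameters are hypothetical placeholders, not the cited paper's actual implementation.

```python
import torch
import torch.nn as nn

obs_dim, n_actions = 10, 4  # placeholder dimensions, not from the paper
gamma = 0.99

# Q-network whose weights are "pre-calculated" offline from simulations.
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def train_offline(simulate_episode, n_episodes=1000):
    """Fit Q-network weights from repeatedly simulated samples before deployment."""
    for _ in range(n_episodes):
        # simulate_episode is an assumed generator of (s, a, r, s_next, done).
        for s, a, r, s_next, done in simulate_episode():
            q_sa = q_net(torch.as_tensor(s, dtype=torch.float32))[a]
            with torch.no_grad():
                q_next = q_net(torch.as_tensor(s_next, dtype=torch.float32)).max()
            target = r + (0.0 if done else gamma * float(q_next))
            loss = (q_sa - target) ** 2  # one-sample TD error
            opt.zero_grad()
            loss.backward()
            opt.step()

def act(s):
    """Online decision-making with the pre-trained weights: greedy action."""
    with torch.no_grad():
        return int(q_net(torch.as_tensor(s, dtype=torch.float32)).argmax())
```

The point of the pattern is that the expensive part (simulation and weight fitting) happens ahead of time, so the online step reduces to one cheap forward pass.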
“…The aim of these controls is to keep the system in synchronism (with angular velocities equal or very close to the nominal value defined by the nominal system frequency). References [17, 58–64, 73] dealt with transient instability control by controlling individual electric power system components, such as a thyristor-controlled series capacitor [17, 58, 59] and a dynamic brake (a resistor usually located near a generation plant) that absorbs excess generation [60, 61]. Q-learning was used in [17, 60–62], while [58, 59] suggested fitted Q-iteration.…”
Section: Emergency Control
mentioning
confidence: 99%
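As a rough illustration of the tabular Q-learning approach used in [17, 60–62], the sketch below updates a Q-value for a binary brake switching action. The environment interface, state discretization, and hyperparameters are assumed placeholders, not values from the cited works.

```python
import numpy as np

N_STATES, N_ACTIONS = 100, 2       # discretized grid states; brake off / brake on
ALPHA, GAMMA, EPS = 0.1, 0.99, 0.1  # illustrative hyperparameters
Q = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng()

def q_learning_step(env, s):
    """One tabular update: act epsilon-greedily, observe, bootstrap."""
    a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
    s_next, r, done = env.step(a)   # assumed environment interface
    target = r + (0.0 if done else GAMMA * np.max(Q[s_next]))
    Q[s, a] += ALPHA * (target - Q[s, a])
    return s_next, done
```

Fitted Q-iteration, suggested in [58, 59], differs mainly in that it refits the Q-function in batch over a fixed set of collected transitions rather than updating one entry per step.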
“…Inclusion of state history to recover the Markov property in partially observable problems was considered in [62]. A dynamic brake was also considered in [73] for emergency control using DRL (DQN was the approach of choice in [73]), largely following the implementation details presented in [17, 62]. The problem of transient angle instability was also considered within wide-area control systems [63, 64].…”
Section: Emergency Control
mentioning
confidence: 99%
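The history-stacking idea from [62] can be sketched as a small wrapper that concatenates the last k observations into the agent's state, so that dynamics hidden from a single snapshot (e.g., angular velocity trends) become recoverable. The window length k and the dimensions below are illustrative choices, not values from the cited work.

```python
from collections import deque
import numpy as np

class HistoryStacker:
    """Keeps the last k raw observations; their concatenation is the agent state."""
    def __init__(self, k, obs_dim):
        self.k = k
        self.buffer = deque([np.zeros(obs_dim)] * k, maxlen=k)

    def reset(self, obs):
        """Fill the window with the initial observation at episode start."""
        self.buffer = deque([obs] * self.k, maxlen=self.k)
        return np.concatenate(list(self.buffer))

    def step(self, obs):
        """Push the newest observation; oldest one falls out of the window."""
        self.buffer.append(obs)
        return np.concatenate(list(self.buffer))
```

Usage is one line per environment step, e.g. `state = stacker.step(new_obs)`; the stacked vector is then fed to the Q-network in place of the raw observation.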
“…Unlike traditional reinforcement learning, DRL algorithms use powerful deep neural networks to approximate the value function (such as a Q-table), enabling automatic high-dimensional feature extraction and end-to-end learning. Recently, the advantages of DRL have been recognized by the community, and attempts have been made to leverage DRL in various applications for the electrical grid, including operational control [21]–[24], electricity markets [25], [26], demand response [27], and energy management [28]. Although these applications presented advantageous results in their respective fields, several challenges were encountered.…”
mentioning
confidence: 99%
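To make the contrast in this statement concrete, the sketch below places a Q-table next to a small neural Q-function: the table needs a hand-crafted discrete state index, while the network consumes raw continuous measurements directly, which is what enables the end-to-end learning described. All dimensions are placeholders.

```python
import numpy as np
import torch.nn as nn

# Tabular: value lookup requires an engineered discretization of the state.
q_table = np.zeros((10_000, 4))        # (discretized states, actions)

# Neural: the same role, mapping raw measurements straight to action values,
# so feature extraction is learned rather than designed by hand.
q_network = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),    # 128 raw grid measurements (placeholder)
    nn.Linear(256, 4),                 # one value per control action
)
```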