2021 6th Asia Conference on Power and Electrical Engineering (ACPEE)
DOI: 10.1109/acpee51499.2021.9437089
Electric Power System Transient Stability Assessment Based on Bi-LSTM Attention Mechanism

Cited by 7 publications (4 citation statements)
References 13 publications
“…The power system TSA problem has been tackled by means of different deep learning architectures, to name here only a few prominent ones: convolutional neural networks (CNNs) [56][57][58], recurrent neural networks (RNNs) [59], networks employing long short-term memory (LSTM) layers [17,49,[60][61][62][63][64] or gated recurrent unit (GRU) layers [65][66][67], generative adversarial networks (GANs) [9,68,69], transfer learning [70], and autoencoders. These basic architectures can internally vary widely in the number of layers and their stacking order, which allows experimenting with different deep network topologies.…”

Section: Deep Learning (mentioning, confidence: 99%)
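A minimal sketch may make the "stacking" point concrete. The PyTorch model below is purely illustrative (not taken from any of the cited papers); the hypothetical parameter n_layers exposes the stacking depth the statement refers to:

    import torch
    import torch.nn as nn

    class StackedLSTMClassifier(nn.Module):
        """Illustrative LSTM-based TSA classifier; n_layers varies the stacking depth."""
        def __init__(self, n_features: int, hidden: int = 64, n_layers: int = 2):
            super().__init__()
            # num_layers is the depth knob: deeper stacks give different topologies.
            self.lstm = nn.LSTM(n_features, hidden, num_layers=n_layers, batch_first=True)
            self.head = nn.Linear(hidden, 2)  # binary output: stable / unstable

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, n_features), e.g. post-fault measurement trajectories
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])  # classify from the final time step

Varying n_layers, or interleaving other layer types (CNN, GRU, attention), is exactly the topology experimentation described above.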
“…Huang et al. proposed a multi-graph attention network in [20]. Mahato et al. in [61] introduced a particular Bi-LSTM attention mechanism for the TSA problem, which featured LSTM layers. Wu et al. proposed a deep belief network in [75].…”

Section: Deep Learning (mentioning, confidence: 99%)
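Since the Bi-LSTM attention mechanism is the subject of the paper under review, a generic sketch may help; the model below is an assumed, minimal form of the idea (layer sizes and names hypothetical), not the exact architecture of Mahato et al. [61]:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BiLSTMAttention(nn.Module):
        """Generic Bi-LSTM + attention sketch for sequence classification."""
        def __init__(self, n_features: int, hidden: int = 64):
            super().__init__()
            self.bilstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
            self.score = nn.Linear(2 * hidden, 1)  # one attention score per time step
            self.head = nn.Linear(2 * hidden, 2)   # stable / unstable

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h, _ = self.bilstm(x)                     # h: (batch, time, 2*hidden)
            alpha = F.softmax(self.score(h), dim=1)   # attention weights over time
            context = (alpha * h).sum(dim=1)          # attention-weighted summary
            return self.head(context)

The bidirectional pass reads the measurement sequence both forward and backward, and the attention weights let the classifier focus on the time steps most indicative of instability.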
“…Since the selection probability is difficult to quantify, which increases the difficulty of model training, we choose to use a soft attention mechanism in this paper. Combined with the sequence data information, the input information is computed as a weighted average and then fed into the network for training. This effectively improves the model's attention to the input information and achieves a reasonable allocation of resources, which suits the temperature sequence data predicted in this paper [32]. The flow structure of the soft attention mechanism is shown in Figure 9.…”

Section: Attentional Mechanisms (mentioning, confidence: 99%)
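In symbols, the weighted average described here is the standard soft-attention computation (this is the textbook form, not necessarily the citing paper's exact formulation): with h_t the hidden state at time step t and e_t its learned relevance score,

    \alpha_t = \frac{\exp(e_t)}{\sum_{k=1}^{T} \exp(e_k)}, \qquad
    c = \sum_{t=1}^{T} \alpha_t \, h_t

where c is the context vector passed on to the network. Because every \alpha_t is a smooth function of the scores, the whole mechanism is differentiable and trains with ordinary backpropagation, which is exactly what the hard, sampling-based variant with its hard-to-quantify selection probability lacks.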
“…Additionally, a gated recurrent unit model is developed to perform time-adaptive TSA (Chen and Wang, 2021). Moreover, an attention mechanism is introduced and combined with a bidirectional (Bi-)LSTM to extract more robust features (Mahato et al., 2021). Nevertheless, a power grid inherently exhibits a structured graph representation, where buses are modeled as nodes and transmission lines as edges (Ishizaki et al., 2018).…”

Section: Introduction (mentioning, confidence: 99%)
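To make the graph view concrete, a bus/branch system maps directly onto an adjacency structure; the tiny Python example below uses an invented 4-bus topology purely for illustration:

    # Hypothetical 4-bus system: buses are nodes, transmission lines are edges.
    buses = [1, 2, 3, 4]
    lines = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]  # invented topology

    # Adjacency list: the structure graph neural networks operate on.
    adjacency = {b: [] for b in buses}
    for u, v in lines:
        adjacency[u].append(v)
        adjacency[v].append(u)

    print(adjacency)  # {1: [2, 4, 3], 2: [1, 3], 3: [2, 4, 1], 4: [3, 1]}

This is the representation that motivates graph-based TSA models: sequence models like Bi-LSTM capture the temporal dimension, while the grid's node/edge structure carries spatial information they do not exploit.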