2019
DOI: 10.3390/en12061140
Short-Term Electricity Load Forecasting Model Based on EMD-GRU with Feature Selection

Abstract: Many factors affect short-term electric load, and the superposition of these factors leads to it being non-linear and non-stationary. Separating different load components from the original load series can help to improve the accuracy of prediction, but the direct modeling and predicting of the decomposed time series components will give rise to multiple random errors and increase the workload of prediction. This paper proposes a short-term electricity load forecasting model based on an empirical mode decomposi…

Cited by 101 publications (52 citation statements)
References 36 publications (40 reference statements)
“…GRU is a variant of LSTM with a gated recurrent neural network structure. Compared with LSTM, GRU has two gates (an update gate and a reset gate), whereas LSTM has three (a forget gate, an input gate, and an output gate); GRU also has fewer training parameters, so it converges more quickly than LSTM during training [34].…”
Section: The GRU-CNN Hybrid Neural
confidence: 99%
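The parameter gap between the two gate structures can be sketched with a rough per-layer count. This is a simplification (it assumes one input weight matrix, one recurrent weight matrix, and one bias vector per gate block; exact counts vary slightly between frameworks), but it shows why GRU trains with fewer parameters than LSTM at the same hidden size:

```python
# Rough per-layer parameter counts for LSTM vs. GRU,
# with input size d and hidden size h.

def lstm_params(d: int, h: int) -> int:
    # LSTM has 4 gate/candidate blocks: forget, input, output, cell candidate.
    # Each block: h x (d + h) weights plus h biases.
    return 4 * (h * (d + h) + h)

def gru_params(d: int, h: int) -> int:
    # GRU has 3 blocks: update gate, reset gate, candidate state.
    return 3 * (h * (d + h) + h)

d, h = 10, 64
print(lstm_params(d, h))  # 19200
print(gru_params(d, h))   # 14400
```

At any fixed `d` and `h`, the GRU layer carries exactly 3/4 of the LSTM layer's parameters under this counting convention, which is the source of the faster-convergence claim quoted above.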
“…Pearson's correlation coefficient can handle large datasets and has low complexity and strong generality, making it feasible for feature selection on the input dataset [63]. This method uses correlation scores to determine the appropriate inputs, keeping only the inputs whose scores exceed a threshold [64]. An autocorrelation function (ACF) and partial autocorrelation function (PACF) with twelve lags applied to the observed runoff series are shown in Fig.…”
Section: Input Selection and Model Development
confidence: 99%
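The threshold-filtering step described above can be sketched in a few lines of NumPy. This is a generic illustration, not the cited papers' implementation; the function name, threshold value, and synthetic data are all assumptions for the example:

```python
import numpy as np

def select_features(X: np.ndarray, y: np.ndarray, threshold: float = 0.5) -> list:
    """Return indices of columns of X whose absolute Pearson correlation
    with the target y exceeds the threshold."""
    selected = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]  # Pearson correlation coefficient
        if abs(r) > threshold:
            selected.append(j)
    return selected

# Synthetic demo: one feature tracks the target, one is unrelated noise.
rng = np.random.default_rng(0)
y = rng.normal(size=200)
X = np.column_stack([
    y + 0.1 * rng.normal(size=200),  # strongly correlated with y
    rng.normal(size=200),            # independent noise
])
print(select_features(X, y, threshold=0.5))  # only the correlated column survives
```

In practice the threshold is a tuning choice; the quoted approach simply filters inputs whose scores fall below it before the series is passed to the forecasting model.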
“…GRU: GRU is another type of RNN that eliminates the separate storage unit [46]. There is no conclusive result when comparing the performance of LSTM and GRU; their relative performance depends on the task and dataset [47]. In this paper, a GRU network with 3 GRU layers and 1 dense layer is adopted for comparison [35].…”
Section: Comparison With State-of-the-Art DNNs
confidence: 99%
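The "no separate storage unit" point can be made concrete with a minimal single-step GRU cell in NumPy. This is a didactic sketch, not any cited paper's implementation; it follows one common gate convention, h = (1 − z) · h_prev + z · h̃ (some formulations swap the roles of z and 1 − z), and all weight shapes and values here are illustrative:

```python
import numpy as np

def gru_step(x, h_prev, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU time step. Unlike LSTM, there is no separate cell state:
    the hidden state h is the only memory carried between steps."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)   # candidate state
    return (1.0 - z) * h_prev + z * h_tilde              # new hidden state

# Demo with small random weights: input size d, hidden size h.
d, h = 3, 4
rng = np.random.default_rng(1)
shapes = [(h, d), (h, h), (h,)] * 3  # (W, U, b) for each of the 3 blocks
params = [rng.normal(scale=0.1, size=s) for s in shapes]
h_t = gru_step(rng.normal(size=d), np.zeros(h), *params)
print(h_t.shape)  # (4,)
```

A stacked 3-layer GRU, as used in the quoted comparison, simply feeds each layer's hidden-state sequence as the input sequence of the next layer, with a dense layer mapping the final hidden state to the forecast.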