2020
DOI: 10.1109/tste.2019.2926147
Short-Term Wind Speed Interval Prediction Based on Ensemble GRU Model

Cited by 182 publications (56 citation statements)
References 36 publications
“…In addition, the long short-term memory (LSTM) network is a variant of the RNN that overcomes the RNN's vanishing- and exploding-gradient problems; accordingly, the LSTM adopted by Liu et al performed notably better on long sequences [18]. Other DNN baseline models, such as the gated recurrent unit (GRU) [19] and the bidirectional recurrent neural network (Bi-RNN) [20], were trained with abundant samples, but their results implied that this is not a promising approach because of the ease of overfitting. As STLF developed, single machine-learning models had difficulty meeting load-accuracy requirements, and a few hybrid preprocessing methods were mixed into them.…”
Section: Introduction
confidence: 99%
“…Comparatively, the GD II model is 32.78% higher than the GD I model, and it can be adjusted by the error correction mechanism [5]. Also, the GD III model is 5.34% lower than the GD II model, indicating that Adagrad has a certain negative effect on the deviation compared with the RMSprop training algorithm.…”
Section: Methods
confidence: 94%
“…The prediction results can take the form of a point, an interval [5], or a probability interval [6]. It is currently believed that the outcomes of a time-series prediction are better as sets of data rather than a single point value.…”
Section: Introduction
confidence: 99%
“…For the experiments we divide the wind series data into four subseries and evaluate the model performance on each of them. This is similar to performing a cross-validation to ensure better generalization capability of the model [32]. Each subseries is extracted using a sliding window of 15 months, of which the first twelve months' data is used for training the model while the remaining three months' data is kept for testing purposes: the first subseries consists of January 2011 to December 2011 as train data and January 2012 to March 2012 as test data, the second subseries of April 2011 to March 2012 as train data and April 2012 to June 2012 as test data, and so on.…”
Section: Experiments on Real Datasets: Dataset and Methodologies
confidence: 99%
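The 15-month sliding-window scheme quoted above (12 months of training data followed by 3 months of test data, with the window advancing by the test length) can be sketched as follows. This is a minimal illustration under my own assumptions; the function and variable names are hypothetical and not from the cited paper:

```python
from datetime import date

def sliding_subseries(start, n_subseries=4, train_months=12, test_months=3):
    """Generate (train_start, test_start, test_end) month boundaries.

    Each window spans train_months + test_months (here 15 months);
    consecutive windows shift forward by test_months (here 3 months),
    matching the subseries layout described in the quoted methodology.
    """
    def add_months(d, m):
        # Advance a first-of-month date by m months.
        y, mo = divmod(d.month - 1 + m, 12)
        return date(d.year + y, mo + 1, 1)

    windows = []
    cur = start
    for _ in range(n_subseries):
        train_start = cur
        test_start = add_months(cur, train_months)       # end of train = start of test
        test_end = add_months(test_start, test_months)   # exclusive end of test
        windows.append((train_start, test_start, test_end))
        cur = add_months(cur, test_months)               # slide window by test length
    return windows

# First subseries: train Jan-Dec 2011, test Jan-Mar 2012; the next
# window starts three months later (Apr 2011), and so on.
for w in sliding_subseries(date(2011, 1, 1)):
    print(w)
```

Each tuple gives the training start, the train/test boundary, and the exclusive end of the test period, so the four subseries can be sliced directly out of a monthly-indexed series.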