2022
DOI: 10.1016/j.eswa.2022.117689
A novel short receptive field based dilated causal convolutional network integrated with Bidirectional LSTM for short-term load forecasting

Cited by 31 publications (13 citation statements) · References 43 publications
“…Dilated convolution is a phenomenal progress made in CNN. Inevitably the dilated causal convolution will deepen the neural network. In a 1-D sequence with input $x \in \mathbb{R}^n$ and filter $f : \{0, \ldots, k-1\} \to \mathbb{R}$, the dilated causal convolution ($D_c$) on an element $s$ of the sequence is expressed as given below.

$$D_c(s) = (x *_d f)(s) = \sum_{i=0}^{k-1} f(i)\, x_{s - d \cdot i}$$

$d$ is the dilation factor of the residual blocks in the stack.…”
Section: Proposed TCN Methodology
confidence: 99%
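The formula quoted above can be sketched directly in code. The following is a minimal NumPy illustration of a 1-D dilated causal convolution, assuming zero padding for indices before the start of the sequence (the quoted statement does not specify the boundary handling used by the paper):

```python
import numpy as np

def dilated_causal_conv(x, f, d):
    """Compute D_c(s) = sum_{i=0}^{k-1} f(i) * x[s - d*i] for every s.

    x : 1-D input sequence of length n
    f : filter of length k
    d : dilation factor
    Indices s - d*i that fall before the sequence start are treated as zero
    (causal zero padding), so each output depends only on past inputs.
    """
    n, k = len(x), len(f)
    y = np.zeros(n)
    for s in range(n):
        for i in range(k):
            j = s - d * i
            if j >= 0:
                y[s] += f[i] * x[j]
    return y

# With k = 2 and d = 2, each output mixes x[s] with x[s-2]:
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
f = np.array([1.0, 1.0])
print(dilated_causal_conv(x, f, d=2))  # [1. 2. 4. 6. 8.]
```

Stacking such layers with exponentially growing $d$ (1, 2, 4, …) is what lets a TCN cover a long receptive field with few layers, which is the deepening effect the quoted passage refers to.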
“…Inevitably the dilated causal convolution will deepen the neural network [32]. In a 1-D sequence with input $x \in \mathbb{R}^n$, and filter…”
Section: Proposed TCN Methodology
confidence: 99%
“…Subsequently, the contribution of these variables to prediction accuracy was examined by trial and error, sometimes leading to shorter input sets for some of the horizons. Alternatively, other approaches, such as gradient boosting decision trees with the Pearson correlation coefficient [102], attention mechanisms [103], or Exploratory Data Analysis [104], are considered effective for input-feature reduction and selection. However, it is important to note that for each horizon, the inputs remain the same for all machine-learning methods used in the present study.…”
Section: Case Study
confidence: 99%
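Pearson-correlation-based feature screening, one of the selection approaches mentioned above, can be sketched as follows. This is a minimal illustration, not the cited works' procedure; the `threshold` value is a hypothetical cut-off that such studies tune per dataset:

```python
import numpy as np

def pearson_filter(X, y, threshold=0.3):
    """Return the indices of columns of X whose absolute Pearson
    correlation with the target y exceeds the given threshold.

    X : 2-D array of shape (n_samples, n_features)
    y : 1-D target array of length n_samples
    threshold : hypothetical cut-off on |r| (tuned per dataset)
    """
    selected = []
    for j in range(X.shape[1]):
        r = np.corrcoef(X[:, j], y)[0, 1]
        if abs(r) > threshold:
            selected.append(j)
    return selected
```

Features passing the filter would then form the (possibly shortened) input set for a given forecasting horizon, matching the trial-and-error reduction described in the quote.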
“…Another similar model, CNN-BiGRU, was adopted by Xuan et al. [46]; it achieves higher prediction accuracy because a GRU has fewer parameters than an LSTM. To address the limitations of LSTM in multivariate load forecasting, Javed et al. [73] proposed a hybrid SRDCC-BiLSTM with improved generalization capability for multi-step and multivariate STLF. Compared to CNN-LSTM, the proposed approach exhibits 35% greater accuracy and can capture local trends in electrical load patterns.…”
Section: B: Recurrent Neural Network (RNN) Models
confidence: 99%