2019
DOI: 10.19101/ijacr.pid77
Determining the impact of window length on time series forecasting using deep learning

Abstract: Time series forecasting is a method of predicting the future based on previous observations. It depends on the values of the same variable, but at different time periods. To date, various models have been used in stock market time series forecasting, in particular deep learning models. However, existing implementations of these models did not determine the suitable number of previous observations, that is, the window length. Hence, this study investigates the impact of the window length of long short-term memor…


Cited by 9 publications (6 citation statements)
References 27 publications (27 reference statements)
“…Some of these algorithms were based on gated recurrent neural networks (RNNs), autoencoders, convolutional neural networks, bidirectional mechanisms, attention mechanisms, ensemble techniques, and deep and vanilla architectures. Specific architectural design features of these 12 selected algorithms were: the gated LSTM architecture suggested in [13], [14] and [15]; the bidirectional mechanism combined with both LSTMs and GRUs, which influenced [16]; the attention mechanism combined with gated neural networks [17]; a deep convolutional neural network (CNN) ensembled with an LSTM and an attention mechanism [18]; a GRU [19]; autoencoders combined with an LSTM [20]; and finally a deep gated recurrent neural network architecture made up of both GRU and LSTM [21]. iii Evaluation: the following factors were considered as potential performance evaluation criteria, with specific metrics: complexity, measured through the total number of parameters built into each architecture; and accuracy, measured as mean absolute error (MAE), which is robust in environments associated with discrete irregular patterns because it measures the average magnitude of the errors in a set of predictions without considering their direction.…”
Section: Results (mentioning, confidence: 99%)
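The MAE metric described in the statement above can be sketched as follows (a minimal illustration of the metric only, not code from the cited study):

```python
# Minimal sketch of mean absolute error (MAE): the average magnitude of
# prediction errors, ignoring their direction (sign).
def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

error = mean_absolute_error([3.0, -0.5, 2.0], [2.5, 0.0, 2.0])
# (|3.0 - 2.5| + |-0.5 - 0.0| + |2.0 - 2.0|) / 3 = 1.0 / 3
```

Because every error enters as an absolute value, an over-prediction and an equal under-prediction contribute the same amount, which is the "without considering their direction" property the statement refers to.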
“…On the other hand, models and algorithms designed around gated sequential architectures in the form of LSTMs and GRUs have been widely used in such analysis environments [13], [14], [15]. Thus, the guidance from the SeLFISA framework will influence the development of deep learning artefacts that may demonstrate better performance than these suggested gated models.…”
Section: B. Algorithms (mentioning, confidence: 99%)
“…If it is too small, the model will study the data in too much detail so that overfitting can occur. If it is too large, it will be difficult for the model to study the data because there is too much noise that interferes with the accuracy of the model [2].…”
Section: Lookback Window Size (mentioning, confidence: 99%)
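One concrete side of the trade-off above is sample count: each supervised example consumes one window of history, so a longer lookback leaves fewer examples to train on. A minimal sketch (illustrative only; the function name and the 252-observation example are assumptions, not from the cited papers):

```python
# Illustrative sketch: with a lookback window of length w, a series of
# N observations yields N - w (window, next-value) training samples,
# so a larger window gives each sample more context but produces
# fewer samples overall.
def n_training_samples(series_length, window_length):
    return max(series_length - window_length, 0)

for w in (5, 20, 60):
    print(w, n_training_samples(252, w))  # 252 ≈ one year of daily prices
```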
“…It involves dividing the input time series data into smaller windows, which are then fed into the LSTM network. This approach enables the model to capture temporal dependencies and patterns within the data, allowing it to make more accurate predictions for time series forecasting tasks [33].…”
Section: Window Length (mentioning, confidence: 99%)
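The windowing step described above can be sketched as follows (an illustrative reconstruction, not code from the cited study; `make_windows` is a hypothetical helper):

```python
# Split a 1-D series into (lookback window, next value) pairs of the
# kind that are fed to an LSTM for one-step-ahead forecasting.
def make_windows(series, window_length):
    X, y = [], []
    for i in range(len(series) - window_length):
        X.append(series[i:i + window_length])  # past `window_length` values
        y.append(series[i + window_length])    # next value to predict
    return X, y

X, y = make_windows([10, 11, 12, 13, 14, 15], window_length=3)
# X = [[10, 11, 12], [11, 12, 13], [12, 13, 14]], y = [13, 14, 15]
```

In a real pipeline each window in `X` would typically be reshaped to `(samples, window_length, features)` before being passed to the recurrent layers.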