2023
DOI: 10.1007/s10618-023-00984-y
Multiple-input neural networks for time series forecasting incorporating historical and prospective context

João Palet,
Vasco Manquinho,
Rui Henriques

Abstract: Individual and societal systems are open systems continuously affected by their situational context. In recent years, context sources have been increasingly considered in different domains to aid short- and long-term forecasts of systems’ behavior. Nevertheless, available research generally disregards the role of prospective context, such as calendrical planning or weather forecasts. This work proposes a multiple-input neural architecture consisting of a sequential composition of long short-term memory units or…

Cited by 3 publications (2 citation statements)
References 45 publications
“…The second limitation is that we did not incorporate information related to exogenous variables beyond temperatures, such as wind or seismic movements that occurred in the past. Following the proposal of Palet et al (2023), an arbitrarily high number of historical context variables could be integrated in the model to guide the learning task. Currently, only partial information regarding these additional variables is available.…”
Section: Discussion
Confidence: 99%
“…Furthermore, some research introduces covariates into feature learning to enhance adaptability to sample characteristics. In study [16], the architecture is based on long short-term memory (LSTM) networks, which incorporate structured covariates into the framework. The paper explores different methods of integrating covariates, including using them as inputs to fully connected layers, incorporating them into the gates, or combining them with LSTM outputs in the final fully connected layer.…”
Section: Introduction
Confidence: 99%
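As a rough illustration of the last strategy mentioned in the statement above (combining covariates with LSTM outputs in the final fully connected layer), the sketch below runs a plain LSTM over the historical series and concatenates the covariate vector with the last hidden state before the dense head. This is a minimal NumPy sketch under assumed shapes and parameter names, not the architecture of the cited paper; all dimensions, weights, and function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; the four gates are stacked as [i, f, g, o]."""
    z = x @ W + h @ U + b
    H = h.shape[-1]
    i = 1 / (1 + np.exp(-z[:, :H]))        # input gate
    f = 1 / (1 + np.exp(-z[:, H:2*H]))     # forget gate
    g = np.tanh(z[:, 2*H:3*H])             # candidate cell state
    o = 1 / (1 + np.exp(-z[:, 3*H:]))      # output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def forecast(series, covariates, params):
    """Run the LSTM over the series, then concatenate the covariates
    with the final hidden state before the fully connected layer."""
    B, T, D = series.shape
    H = params["U"].shape[0]
    h = np.zeros((B, H))
    c = np.zeros((B, H))
    for t in range(T):
        h, c = lstm_step(series[:, t, :], h, c,
                         params["W"], params["U"], params["b"])
    joint = np.concatenate([h, covariates], axis=1)  # covariates enter the head
    return joint @ params["W_out"] + params["b_out"]

# Illustrative dimensions: batch, time steps, input features, hidden, covariates.
B, T, D, H, C = 4, 12, 1, 8, 3
params = {
    "W": rng.normal(0, 0.1, (D, 4 * H)),
    "U": rng.normal(0, 0.1, (H, 4 * H)),
    "b": np.zeros(4 * H),
    "W_out": rng.normal(0, 0.1, (H + C, 1)),
    "b_out": np.zeros(1),
}
y = forecast(rng.normal(size=(B, T, D)), rng.normal(size=(B, C)), params)
print(y.shape)  # one forecast per batch element
```

The same skeleton can be adapted to the other two strategies the statement lists, e.g. feeding the covariates into the gate pre-activations inside `lstm_step` instead of into the output head.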