2010
DOI: 10.1016/j.neucom.2009.09.020
Meta-learning for time series forecasting and forecast combination

Abstract: In research of time series forecasting, a lot of uncertainty is still related to the task of selecting an appropriate forecasting method for a problem. It is not only the individual algorithms that are available in great quantities; combination approaches have been equally popular in the last decades. Alone the question of whether to choose the most promising individual method or a combination is not straightforward to answer. Usually, expert knowledge is needed to make an informed decision, however, in many c…

Cited by 213 publications (148 citation statements)
References 40 publications
“…They have shown that a meta-learning system outperforms approaches representing competition entries in any category. On NN5 competition dataset their Pooling meta-learning had SMAPE of 25.7 which is lower than 26.5 obtained by Structural model, the best performing of 15 single algorithms (Lemke & Gabrys, 2010). If an approach with performance close to or better than the meta-learning system is found, many meta-learning approaches can include those candidates thus becoming better.…”
Section: Meta-learning
confidence: 82%
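The citation statement above compares methods by SMAPE (25.7 vs. 26.5 on the NN5 data). As a minimal sketch, a commonly used symmetric MAPE can be computed as below; note that the exact denominator convention varies across papers, and the numbers here are toy values, not the NN5 results:

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent.

    Uses the common definition 100/n * sum(|F - A| / ((|A| + |F|) / 2)).
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2.0
    return 100.0 * np.mean(np.abs(forecast - actual) / denom)

# Toy series: a perfect forecast scores 0, larger errors score higher.
print(smape([10, 20, 30], [12, 18, 33]))
```

A lower SMAPE means a more accurate forecast, which is why the Pooling meta-learner's 25.7 is read as beating the Structural model's 26.5.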
“…An Elman ANN or EANN is a recurrent neural network that fundamentally differs from the traditional FANN through including an additional context layer and feedback connections [29,30]. During network operations, the outputs of the hidden layer are again fed back to the context layer at each step, so that the context layer can keep track of the previous processing information.…”
Section: The EANN Model
confidence: 99%
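The feedback structure described above — a context layer that stores the previous hidden activations and feeds them back alongside the new input — can be sketched as a forward pass. The layer sizes, weights, and input values below are illustrative assumptions, not details from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanRNN:
    """Minimal Elman network forward pass (illustrative sketch)."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.context = np.zeros(n_hidden)   # context layer state

    def step(self, x):
        # Hidden layer sees the current input plus the previous hidden
        # state via the context layer -- the feedback that distinguishes
        # an Elman net from a plain feed-forward ANN.
        h = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
        self.context = h                    # copy hidden state back
        return self.W_out @ h

net = ElmanRNN(n_in=1, n_hidden=4, n_out=1)
outputs = [net.step(np.array([v])) for v in [0.1, 0.2, 0.3]]
```

Because `context` persists across calls to `step`, the output at each step depends on the whole input history, not just the current value.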
“…In this manner, the model with more error receives less weight and vice versa. Ordinary Least Square (OLS) is another popular method that determines the combining weights by minimizing the total Sum of Squared Error (SSE) [5,20,29]. Bunn's outperformance method [10] adopts a Bayesian framework of subjective probabilities and assigns the weight to a model on the basis of the number of times it performed best in the past in-sample forecasting trials.…”
Section: Introduction
confidence: 99%
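The OLS combination described above — choosing weights that minimize the in-sample sum of squared errors — can be illustrated on toy data. The actuals and individual forecasts below are made up for demonstration:

```python
import numpy as np

# Toy in-sample data: actuals and two individual model forecasts.
actual = np.array([10.0, 12.0, 11.0, 13.0, 14.0])
f1 = np.array([ 9.5, 12.5, 10.5, 13.5, 13.5])
f2 = np.array([11.0, 11.0, 12.0, 12.0, 15.0])

# Stack the forecasts as regressors and solve for the combining
# weights that minimize the in-sample Sum of Squared Error (OLS).
F = np.column_stack([f1, f2])
weights, *_ = np.linalg.lstsq(F, actual, rcond=None)

combined = F @ weights
sse = np.sum((combined - actual) ** 2)
```

Since each individual forecast is itself a feasible weight vector (e.g. [1, 0]), the OLS combination's in-sample SSE can never exceed that of the best single model, which is the appeal of this family of combining methods.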
“…Hence automatic model selection has been attempted in different ways, e.g. active testing [7], meta-learning [8] and information theory [9]. A common theme in the literature is comparison of different models using data always pre-processed in the same way.…”
Section: Introduction
confidence: 99%