Fourth International Conference on Hybrid Intelligent Systems (HIS'04)
DOI: 10.1109/ichis.2004.86

Selection of Time Series Forecasting Models based on Performance Information

Abstract: In this work, we propose to use the Zoomed Ranking approach to rank and select time series models. Zoomed Ranking, originally proposed to generate a ranking of candidate algorithms for a given classification problem based on performance information from previous problems, is employed here for time series model selection. Model selection with Zoomed Ranking is carried out in two distinct phases. In the first phase, we select a subset of problems from the instance base that are similar to the new problem at hand. This selection i…
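The abstract describes a two-phase procedure: a "zooming" phase that retrieves previously seen series similar to the new one, followed by a ranking phase over the candidate models' past performance (the ranking side, via the Adjusted Ratio of Ratios, is discussed in the citation statements further down this page). Below is a minimal sketch of the zooming phase, assuming Euclidean distance over meta-feature vectors; the function and feature names are illustrative, not taken from the paper.

import numpy as np

def zoom_similar_series(new_meta, instance_base, k=5):
    """Phase 1 of a Zoomed-Ranking-style selection (illustrative sketch):
    retrieve the k series in the instance base whose meta-feature vectors
    lie closest to the new series' meta-features."""
    names = list(instance_base)
    feats = np.array([instance_base[n] for n in names], dtype=float)
    # Normalise each meta-feature so no single scale dominates the distance.
    scale = feats.std(axis=0) + 1e-12
    dists = np.linalg.norm((feats - np.asarray(new_meta, dtype=float)) / scale, axis=1)
    return [names[i] for i in np.argsort(dists)[:k]]

# Hypothetical usage: meta-feature vectors of previously seen series.
base = {"series_a": [0.90, 0.10], "series_b": [0.20, 0.70], "series_c": [0.85, 0.20]}
print(zoom_similar_series([0.80, 0.15], base, k=2))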

Cited by 8 publications (4 citation statements) | References 13 publications
“…Beyond these general-purpose meta-features, many more specific ones were formulated. For streaming data one can use streaming landmarks, for time series data one can compute autocorrelation coefficients or the slope of regression models (Arinze, 1994; dos Santos et al., 2004), and for unsupervised problems one can cluster the data in different ways and extract properties of these clusters. In many applications, domain-specific information can be leveraged as well (Smith-Miles, 2009; Olier et al., 2018).…”
Section: Meta-features (mentioning)
confidence: 99%
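The time-series meta-features mentioned in the excerpt above (autocorrelation coefficients and the slope of a fitted regression line) are straightforward to compute. A small sketch, assuming plain NumPy and an illustrative choice of lags; the function name and return format are assumptions for the example, not part of the cited work.

import numpy as np

def ts_meta_features(y, lags=(1, 2, 3)):
    """Compute simple time-series meta-features of the kind cited above:
    autocorrelation at a few lags and the slope of a linear trend fit."""
    y = np.asarray(y, dtype=float)
    y_c = y - y.mean()
    denom = np.dot(y_c, y_c)
    acf = {f"acf_lag{k}": float(np.dot(y_c[:-k], y_c[k:]) / denom) for k in lags}
    # Slope of an ordinary least-squares line fitted against the time index.
    slope = float(np.polyfit(np.arange(len(y)), y, deg=1)[0])
    return {**acf, "trend_slope": slope}

# Illustrative usage on a short noisy upward trend.
rng = np.random.default_rng(0)
print(ts_meta_features(0.5 * np.arange(50) + rng.normal(scale=2.0, size=50)))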
“…Meta-models can also generate a ranking of the top-K most promising configurations. One approach is to build a k-nearest neighbor (kNN) meta-model to predict which tasks are similar, and then rank the best configurations on these similar tasks (Brazdil et al., 2003b; dos Santos et al., 2004). This is similar to the work discussed in Section 3.3, but without ties to a follow-up optimization approach.…”
Section: Ranking (mentioning)
confidence: 99%
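A minimal illustration of the kNN idea described in this excerpt: locate the most similar previous tasks in meta-feature space, then aggregate each configuration's rank on those tasks. All names and the mean-rank aggregation are assumptions for the sketch, not taken from the cited papers.

import numpy as np

def knn_rank_configs(new_meta, task_meta, task_scores, k=3, top_k=5):
    """Rank candidate configurations for a new task by averaging their ranks
    on the k most similar previously seen tasks (kNN meta-model sketch).
    task_meta:   {task: meta-feature vector}
    task_scores: {task: {config: error, lower is better}}"""
    tasks = list(task_meta)
    feats = np.array([task_meta[t] for t in tasks], dtype=float)
    dists = np.linalg.norm(feats - np.asarray(new_meta, dtype=float), axis=1)
    neighbours = [tasks[i] for i in np.argsort(dists)[:k]]
    # Average each configuration's rank across the neighbouring tasks.
    rank_sums, counts = {}, {}
    for t in neighbours:
        for r, cfg in enumerate(sorted(task_scores[t], key=task_scores[t].get), start=1):
            rank_sums[cfg] = rank_sums.get(cfg, 0) + r
            counts[cfg] = counts.get(cfg, 0) + 1
    mean_rank = {c: rank_sums[c] / counts[c] for c in rank_sums}
    return sorted(mean_rank, key=mean_rank.get)[:top_k]

Mean rank is only one possible aggregation; the approaches cited here combine performance ratios (e.g. the Adjusted Ratio of Ratios) rather than plain ranks.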
“…A newer approach that facilitates combinations on a higher level, the meta-learning level, is presented in [7] and applied to time series forecasting in [31]. It allows taking relations between individual performances into account by providing a ranking of methods for a particular problem.…”
Section: Experiments Two - Comparing Meta-learning Approaches (mentioning)
confidence: 99%
“…The ranking is then generated by a variation of the Adjusted Ratio of Ratios (ARR), which is applied in a classification context in the original paper [7] and extended by a penalty for time intensity in [31]; in this experiment, however, the time dimension was discarded and the SMAPE measure was used instead of classifier success rates, to adapt the ranking to regression problems. The pairwise ARR for models m_p and m_q on series s_i is…”
Section: Experiments Two - Comparing Meta-learning Approaches (mentioning)
confidence: 99%
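The excerpt breaks off before the formula itself. For reference, the Adjusted Ratio of Ratios introduced by Brazdil et al. [7] has roughly the form below (SR is the success rate, T the run time, AccD a user-set accuracy/time trade-off); under the adaptation described in the excerpt (SMAPE in place of success rates, time dimension discarded) the pairwise term reduces to a plain ratio of errors. This is a reconstruction under those assumptions, not the exact expression used in [31].

ARR^{d_i}_{a_p, a_q} = \frac{ SR^{d_i}_{a_p} / SR^{d_i}_{a_q} }{ 1 + AccD \cdot \log\!\left( T^{d_i}_{a_p} / T^{d_i}_{a_q} \right) }

% With the time penalty dropped and SMAPE (lower is better) replacing SR:
ARR^{s_i}_{m_p, m_q} = \frac{ \mathrm{SMAPE}^{s_i}_{m_q} }{ \mathrm{SMAPE}^{s_i}_{m_p} }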