2020
DOI: 10.1016/j.ijforecast.2019.04.014
The M4 Competition: 100,000 time series and 61 forecasting methods

Cited by 585 publications (455 citation statements)
References 68 publications
“…We produce forecasts using models from the exponential smoothing family [5,6]. This family has shown good forecast accuracy over several forecasting competitions [7][8][9] and is especially suitable for short series. Exponential smoothing models can capture a variety of trend and seasonal forecasting patterns (such as additive or multiplicative) and combinations of those.…”
Section: Analysis and Forecasting
confidence: 99%
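The simplest member of the exponential smoothing family referenced in this excerpt can be sketched as follows. This is a minimal illustration of simple (level-only) exponential smoothing, not any cited paper's implementation; the smoothing parameter and toy series are invented for the example:

```python
def ses_forecast(y, alpha=0.3, h=6):
    """Simple exponential smoothing: the level-only member of the
    exponential smoothing family.

    y: observed values, alpha: smoothing parameter in (0, 1),
    h: forecast horizon. Returns a flat h-step-ahead forecast.
    """
    level = y[0]
    for obs in y[1:]:
        # New level is a weighted average of the latest observation
        # and the previous level.
        level = alpha * obs + (1 - alpha) * level
    # SES forecasts are flat beyond the last estimated level.
    return [level] * h

series = [112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0]
print(ses_forecast(series, alpha=0.3, h=3))
```

Trend and seasonal patterns (additive or multiplicative) are handled by richer members of the same family, which add trend and seasonal components to this level-update recursion.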
“…This table features the authors of each method, with their affiliation and finally the percentage improvement over the comb baseline on the two evaluation metrics (sMAPE and OWA). The comb baseline method is the simple arithmetic average of Simple Exponential Smoothing (SES), Holt, and Damped exponential smoothing and was used as the single benchmark for evaluating all other methods [12]. Table 2 summarizes information related to a number of forecasting attributes that each method implements.…”
Section: The M4 Competition Results
confidence: 99%
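The combination step behind the comb benchmark, and the sMAPE metric it is evaluated on, can be sketched as follows. This is a minimal illustration under the definitions in the excerpt, not the competition's official code, and the toy forecast vectors are invented:

```python
def smape(actual, forecast):
    # Symmetric MAPE in percent, as used in the M4 evaluation.
    n = len(actual)
    return (200.0 / n) * sum(abs(a - f) / (abs(a) + abs(f))
                             for a, f in zip(actual, forecast))

def comb_forecast(ses_fc, holt_fc, damped_fc):
    # Comb benchmark: pointwise arithmetic mean of the three
    # exponential smoothing component forecasts.
    return [(s + h + d) / 3.0 for s, h, d in zip(ses_fc, holt_fc, damped_fc)]

comb = comb_forecast([100.0, 100.0], [102.0, 104.0], [101.0, 102.0])
print(comb)                          # pointwise mean of the three inputs
print(smape([101.0, 103.0], comb))   # sMAPE of comb against the actuals
```

A submitted method's percentage improvement over comb is then simply the relative reduction in sMAPE (or OWA) against this benchmark.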
“…where Y_t is the actual value of the time series at the t-th time interval, n the number of time periods or observations, Ŷ_t the estimated forecast, h the forecasting horizon (test set length), and m the length of the seasonal periodicity (i.e., twelve for monthly, four for quarterly, 24 for hourly, and one for yearly, weekly, and daily data) [10][11][12].…”
Section: Introduction
confidence: 99%
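The symbols in this passage match the MASE (Mean Absolute Scaled Error) used in the M4 evaluation, which scales the out-of-sample forecast error by the in-sample MAE of the seasonal naive method with period m. A minimal sketch under that reading (the toy series is invented):

```python
def mase(actual, forecast, insample, m):
    # Scale: in-sample MAE of the seasonal naive forecast with period m.
    n = len(insample)
    scale = sum(abs(insample[t] - insample[t - m]) for t in range(m, n)) / (n - m)
    # Out-of-sample MAE over the horizon h, divided by the scale.
    h = len(forecast)
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / (h * scale)

# Non-seasonal toy example (m = 1, as for yearly, weekly, and daily data).
print(mase([18.0, 20.0], [17.0, 21.0], [10.0, 12.0, 14.0, 16.0], m=1))
```

A MASE below 1 means the method beats the in-sample seasonal naive benchmark on average; above 1 means it does worse.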
“…Regardless of the plethora of possible optimal solutions, one common theme seems to be that the statistical approach considered in this paper performs on par (if not better) when the noise intensity is very high [23]. When this is coupled with the fact that LR is multiple times faster than the ML approaches that we examined, the choice of a forecasting model under perfectly imperfect weather forecasts becomes a no-brainer [51,54].…”
Section: Discussion
confidence: 96%
“…where U ij and L ij are the upper and lower bounds computed for the jth observation of group i, respectively, and a = 0.05 (95% confidence). Note that MIS evaluates prediction intervals taking into consideration both their coverage, i.e., the percentage of times when the true values lie inside the prediction intervals, and their spread, i.e., the distance between the upper and lower bounds [51]. Thus, in order for a prediction interval to be effective, it must provide the nominal coverage with the minimum possible width [52].…”
confidence: 99%
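The Mean Interval Score (MIS) described in this excerpt can be sketched as follows. This is a minimal illustration of the standard interval score averaged over observations, with a = 0.05 for 95% intervals; the function and variable names are ours, not the cited papers', and the toy bounds are invented:

```python
def mean_interval_score(actual, lower, upper, a=0.05):
    # MIS rewards narrow intervals (the up - lo term) and penalizes,
    # with weight 2/a, observations that fall outside the bounds.
    total = 0.0
    for y, lo, up in zip(actual, lower, upper):
        score = up - lo
        if y < lo:
            score += (2.0 / a) * (lo - y)   # miss below the lower bound
        elif y > up:
            score += (2.0 / a) * (y - up)   # miss above the upper bound
        total += score
    return total / len(actual)

# One covered observation and one miss above the upper bound.
print(mean_interval_score([10.0, 30.0], [8.0, 12.0], [12.0, 20.0], a=0.05))
```

The two terms capture exactly the trade-off the excerpt describes: coverage (the penalty terms) and spread (the interval width), so the best intervals are the narrowest ones that still contain the true values at the nominal rate.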