2020
DOI: 10.48550/arxiv.2012.12802
Preprint

Machine Learning Advances for Time Series Forecasting

Abstract: In this paper we survey the most recent advances in supervised machine learning and high-dimensional models for time series forecasting. We consider both linear and nonlinear alternatives. Among the linear methods we pay special attention to penalized regressions and ensembles of models. The nonlinear methods considered in the paper include shallow and deep neural networks, in their feed-forward and recurrent versions, and tree-based methods, such as random forests and boosted trees. We also consider ensemble an…
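
The penalized linear methods the abstract mentions can be illustrated with a minimal sketch: a LASSO fit on lagged values of a series, used for direct one-step-ahead forecasting. This is not the authors' implementation; the AR(2)-style data-generating process, the lag order p, and the penalty alpha below are illustrative assumptions.

```python
# Minimal sketch of penalized (LASSO) direct forecasting with lagged
# predictors, in the spirit of the linear methods surveyed in the paper.
# The simulated series, lag order p, and penalty alpha are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulate a univariate series y_t = 0.5 y_{t-1} - 0.2 y_{t-2} + eps_t.
T = 500
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] - 0.2 * y[t - 2] + rng.normal(scale=0.5)

# Design matrix of p lags: row for time t holds (y_{t-1}, ..., y_{t-p}).
p = 10  # deliberately over-specified so the penalty has lags to prune
X = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
target = y[p:]

# Fit on the first part of the sample, forecast the held-out remainder
# one step ahead (direct forecasting: one model per horizon).
split = int(0.8 * len(target))
model = Lasso(alpha=0.05).fit(X[:split], target[:split])
pred = model.predict(X[split:])

rmse = np.sqrt(np.mean((pred - target[split:]) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f}")
print("selected lags:", np.flatnonzero(model.coef_) + 1)
```

The over-specified lag order is the point of the exercise: the L1 penalty should zero out most of the irrelevant lags, which is the high-dimensional selection behavior the survey discusses.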

Cited by 7 publications (9 citation statements)
References 118 publications

Citation statements (ordered by relevance):

“…Here, we note that this convergence in distribution is a pointwise result, and is not guaranteed to hold uniformly with respect to the parameter vector (Leeb and Pötscher, 2005). Uniform inference for model selection in multivariate time series is however still a nascent area of research (Masini et al, 2020) and is beyond the scope of this paper.…”
Section: Asymptotic Results
confidence: 91%
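
One standard way to write the distinction this statement invokes is the following; the notation (parameter space Θ, limit law L_θ) is ours, not the quoted paper's.

```latex
% Pointwise convergence fixes the parameter before taking the limit;
% uniformity requires the approximation error to vanish over all of \Theta.
\[
\text{pointwise:}\quad
\forall\,\theta \in \Theta:\;
\sup_{x}\,\bigl|P_{\theta}\bigl(\hat\theta_n \le x\bigr) - L_{\theta}(x)\bigr|
\;\longrightarrow\; 0 \quad (n \to \infty),
\]
\[
\text{uniform:}\quad
\sup_{\theta \in \Theta}\;\sup_{x}\,
\bigl|P_{\theta}\bigl(\hat\theta_n \le x\bigr) - L_{\theta}(x)\bigr|
\;\longrightarrow\; 0 \quad (n \to \infty).
\]
```

The Leeb and Pötscher (2005) point is that post-model-selection estimators can satisfy the first display while failing the second, so confidence statements that hold for each fixed parameter can still be badly sized over the parameter space.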
“…Babb and Detmeister (2017) provide a useful review of the literature on nonlinear Phillips curves. A fast-growing literature evaluates the use of machine learning techniques for macroeconomic forecasting, with random forests (see Breiman (2001) and, e.g., Masini, Medeiros, and Mendes (2021), for a survey) performing particularly well, also during crisis times, in a variety of studies and for key variables such as GDP growth and inflation; see, e.g., Goulet Coulombe, Marcellino, and Stevanovic (2021).…”
Section: Introduction
confidence: 99%
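
The random-forest forecasting setup this statement alludes to can be sketched as follows. The stand-in series, lag count, and forest size are illustrative assumptions, not choices from the cited studies.

```python
# Hedged sketch of random-forest macro forecasting from own lags,
# using sklearn's RandomForestRegressor. Data and tuning are assumed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Stand-in series with a mild nonlinearity, so the tree ensemble has
# something beyond a linear AR model to exploit.
T = 400
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.7 * x[t - 1] + 0.3 * np.tanh(x[t - 1]) + rng.normal(scale=0.3)

# Lag matrix: row for time t holds (x_{t-1}, ..., x_{t-p}).
p = 6
X = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
y = x[p:]

split = int(0.8 * len(y))
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((rf.predict(X[split:]) - y[split:]) ** 2))
print(f"RF out-of-sample RMSE: {rmse:.3f}")
```

Because each tree fits piecewise-constant splits on the lags, the forest can pick up the tanh-type nonlinearity that a penalized linear model would miss, which is the comparative advantage the quoted literature reports.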
“…Hence, due to the limited number of scenarios one can consider to maintain the tractability of the realistic models, we should expect a certain degree of in-sample overfit and poor out-of-sample performance of LDRs. This is a well-known fact from high-dimensional statistics and machine learning problems (we refer to [30,31,32,33] and [34] as a relevant and updated literature on the subject), where, in some cases, the number of parameters exceeds the number of scenarios.…”
Section: Introduction
confidence: 99%
“…Interestingly, when applied to a MSLP, the LDR identification problem resembles the estimation of a linear regression model, Y = Xθ + ε, based on a loss function, l(X, Y; θ), related to the specific objective function of the problem (see [33]). In this context, the overfitting issue and the related poor out-of-sample performance of non-parsimonious models (with the number of coefficients approaching or exceeding the number of in-sample observations) have been addressed by the use of regularization procedures [34]. In particular, the adaptive least absolute shrinkage and selection operator (AdaLASSO) has been playing a key role in regression and statistical modeling literature [30,32,35].…”
Section: Introduction
confidence: 99%
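
The AdaLASSO named in this statement is a two-step estimator: a pilot fit supplies coefficient-specific weights, and the weighted L1 problem reduces to an ordinary LASSO on rescaled columns. Below is a minimal sketch under assumed data sizes, gamma, and penalty levels; it is the standard technique, not the cited papers' code.

```python
# Minimal sketch of the adaptive LASSO (AdaLASSO) for Y = X @ theta + eps:
# a pilot ridge fit supplies weights w_j = 1/|pilot_j|^gamma, and the
# weighted penalty sum_j w_j |theta_j| is solved as a plain LASSO on
# columns scaled by |pilot_j|^gamma. All tuning values are assumptions.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(2)

n, d = 200, 50
theta_true = np.zeros(d)
theta_true[:3] = [2.0, -1.5, 1.0]          # sparse truth
X = rng.normal(size=(n, d))
Y = X @ theta_true + rng.normal(scale=0.5, size=n)

# Step 1: pilot estimator (ridge stays stable when d is large).
pilot = Ridge(alpha=1.0).fit(X, Y).coef_

# Step 2: LASSO on rescaled columns, then map coefficients back.
gamma = 1.0
scale = np.abs(pilot) ** gamma + 1e-8      # avoid division by zero
lasso = Lasso(alpha=0.05).fit(X * scale, Y)
theta_hat = lasso.coef_ * scale            # back to the original scale

print("selected coefficients:", np.flatnonzero(theta_hat))
```

The rescaling trick works because substituting beta_j = theta_j / |pilot_j|^gamma turns the weighted penalty into a plain L1 penalty on beta; large pilot coefficients are penalized less, which is what gives AdaLASSO its selection-consistency advantage over plain LASSO.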