This paper studies the model selection problem in a large class of causal time series models, which includes ARMA and AR(∞) processes as well as GARCH, ARCH(∞), APARCH, ARMA-GARCH and many other processes. To tackle this issue, we consider a penalized contrast based on the quasi-likelihood of the model. We provide sufficient conditions on the penalty term to ensure the consistency of the proposed procedure, as well as the consistency and asymptotic normality of the quasi-maximum likelihood estimator of the chosen model. We also propose a tool for diagnosing the goodness-of-fit of the chosen model based on a Portmanteau test. Monte Carlo experiments and numerical applications on illustrative examples are performed to highlight the obtained asymptotic results. Moreover, using a data-driven choice of the penalty, they show the practical efficiency of this new model selection procedure and Portmanteau test.
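The Portmanteau diagnostic mentioned above checks whether the residuals of the fitted model behave like white noise. As a rough illustration (not the paper's exact statistic, which is adapted to the quasi-likelihood setting), a Ljung-Box-type statistic can be sketched as follows; the function name `ljung_box` and its parameters are illustrative:

```python
import numpy as np

def ljung_box(resid, m):
    """Ljung-Box portmanteau statistic on the first m residual autocorrelations.

    Under a correctly specified model the statistic is approximately
    chi-squared distributed (with degrees of freedom reduced by the
    number of estimated parameters).
    """
    n = len(resid)
    r = resid - resid.mean()
    denom = np.sum(r ** 2)
    lags = np.arange(1, m + 1)
    # sample autocorrelations at lags 1..m
    acf = np.array([np.sum(r[k:] * r[:-k]) / denom for k in lags])
    return n * (n + 2) * np.sum(acf ** 2 / (n - lags))

# White-noise residuals should give a statistic of moderate size
# (close to its chi-squared mean m), leading to non-rejection.
rng = np.random.default_rng(1)
resid = rng.standard_normal(1000)
q = ljung_box(resid, m=10)
```

A large value of the statistic relative to the chi-squared quantile signals residual autocorrelation, i.e. a lack of fit of the selected model.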
This paper studies the model selection problem in a large class of causal time series models, which includes ARMA and AR(∞) processes as well as GARCH, ARCH(∞), APARCH, ARMA-GARCH and many other processes. We first study the asymptotic behavior of the ideal penalty that minimizes the risk induced by a quasi-likelihood estimation among a finite family of models containing the true model. Then, we provide general conditions on the penalty term for obtaining the consistency and efficiency properties. We notably prove that consistent model selection criteria outperform the classical AIC criterion in terms of efficiency. Finally, we derive from a Bayesian approach the usual BIC criterion and, by keeping all the second-order terms of the Laplace approximation, a data-driven criterion denoted KC'. Monte Carlo experiments exhibit the obtained asymptotic results and show that the KC' criterion does better than the AIC and BIC ones in terms of consistency and efficiency.

1. Introduction. Model selection is one of the fundamental tasks in Statistics and Data Science. It aims at providing the model (or algorithm) that best represents observed data according to a given criterion. Two leading model selection procedures have received a lot of attention in the literature. On the one hand, resampling methods such as hold-out or, more generally, V-fold cross-validation are widely used in the machine learning community. On the other hand, methods based on the minimization of a penalized risk are also now very popular.
This paper is about the one-step-ahead prediction of observations drawn from an infinite-order autoregressive AR(∞) process. It aims to design fully data-driven penalties ensuring that the selected model satisfies the efficiency property, but in a non-asymptotic framework. We show that the excess risk of the selected estimator enjoys the best bias-variance trade-off over the considered collection. To achieve these results, we needed to overcome the difficulties induced by dependence by following a classical approach, which consists in restricting the analysis to a set where the empirical covariance matrix is equivalent to the theoretical one. We show that this event happens with probability larger than 1 − c₀/n² with c₀ > 0. The proposed data-driven criteria are based on the minimization of a penalized criterion akin to Mallows's Cₚ.
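As a minimal sketch of a Mallows-type Cₚ criterion for autoregressive order selection (an illustration under Gaussian least-squares assumptions, not the paper's non-asymptotic procedure; the helper names `ar_rss` and `mallows_cp` are hypothetical):

```python
import numpy as np

def ar_rss(x, p, burn):
    """Residual sum of squares of a least-squares AR(p) fit.

    All orders are fit on the same effective sample x[burn:] so that
    the RSS values are comparable across p <= burn.
    """
    n = len(x)
    y = x[burn:]
    if p == 0:
        return np.sum((y - y.mean()) ** 2)
    # lagged design matrix: column k holds x[t-k] for t = burn..n-1
    X = np.column_stack([x[burn - k : n - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ coef) ** 2)

def mallows_cp(x, p, p_max):
    """Mallows-type C_p score: RSS_p / s2 + 2p - n_eff, where s2 is the
    residual variance estimate from the largest candidate model."""
    n_eff = len(x) - p_max
    s2 = ar_rss(x, p_max, p_max) / (n_eff - p_max)
    return ar_rss(x, p, p_max) / s2 + 2 * p - n_eff

# Simulated AR(2): X_t = 0.5 X_{t-1} - 0.3 X_{t-2} + eps_t
rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

# Select the order minimizing the C_p score over p = 0..6
p_hat = min(range(7), key=lambda p: mallows_cp(x, p, p_max=6))
```

The `2p` term plays the role of the penalty: it trades the bias reduction of larger models against the variance of estimating more coefficients, which is exactly the bias-variance trade-off the oracle inequality controls.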
This paper studies the model selection problem in a large class of causal time series models, which includes ARMA and AR(∞) processes as well as GARCH, ARCH(∞), APARCH, ARMA-GARCH and many other processes. To tackle this issue, we consider a penalized contrast based on the quasi-likelihood of the model. We provide sufficient conditions on the penalty term to ensure the consistency of the proposed procedure, as well as the consistency and asymptotic normality of the quasi-maximum likelihood estimator of the chosen model. It appears from these conditions that the Bayesian Information Criterion (BIC) does not always guarantee consistency. We also propose a tool for diagnosing the goodness-of-fit of the chosen model based on a Portmanteau test. Numerical simulations and an illustrative example on the FTSE index are performed to highlight the obtained asymptotic results, including numerical evidence of the inconsistency of the usual BIC penalty for order selection of AR(p) models with ARCH(∞) errors.
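To make the penalized-contrast idea concrete, here is a minimal sketch of order selection for an AR(p) model by minimizing a Gaussian quasi-likelihood contrast plus a BIC-type penalty. This is an illustration under simplifying assumptions (least-squares fitting, Gaussian contrast), not the paper's general procedure, and the function names are illustrative:

```python
import numpy as np

def gaussian_contrast(x, p):
    """-2 x (Gaussian quasi-log-likelihood) of a least-squares AR(p) fit,
    up to an additive constant: n_eff * log(residual variance)."""
    n = len(x)
    y = x[p:]
    if p == 0:
        resid = y - y.mean()
    else:
        # lagged design matrix: column k holds x[t-k] for t = p..n-1
        X = np.column_stack([x[p - k : n - k] for k in range(1, p + 1)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
    return len(y) * np.log(np.mean(resid ** 2))

def select_order(x, p_max, penalty):
    """Minimize contrast(p) + penalty(n) * p over p = 0..p_max."""
    n = len(x)
    return min(range(p_max + 1),
               key=lambda p: gaussian_contrast(x, p) + penalty(n) * p)

# Simulated AR(2): X_t = 0.5 X_{t-1} - 0.3 X_{t-2} + eps_t
rng = np.random.default_rng(0)
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

bic_penalty = lambda m: np.log(m)   # BIC-type penalty kappa_n = log n
p_hat = select_order(x, p_max=5, penalty=bic_penalty)
```

With independent Gaussian innovations this BIC-type penalty selects the true order consistently; the point of the paper is precisely that under heteroscedastic errors such as ARCH(∞), the sufficient conditions on the penalty may fail for log n and a heavier penalty can be required.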