We propose a framework for out-of-sample predictive ability testing and forecast selection designed for use in the realistic situation in which the forecasting model is possibly misspecified, due to unmodeled dynamics, unmodeled heterogeneity, incorrect functional form, or any combination of these. Relative to the existing literature (Diebold and Mariano (1995) and West (1996)), we introduce two main innovations: (i) We derive our tests in an environment where the finite-sample properties of the estimators on which the forecasts may depend are preserved asymptotically. (ii) We accommodate conditional evaluation objectives (can we predict which forecast will be more accurate at a future date?), which nest the unconditional objectives (which forecast was more accurate on average?) that have been the sole focus of the previous literature. As a result of (i), our tests have several advantages: they capture the effect of estimation uncertainty on relative forecast performance, they can handle forecasts based on both nested and nonnested models, they allow the forecasts to be produced by general estimation methods, and they are easy to compute. Although both unconditional and conditional approaches are informative, conditioning can help fine-tune the forecast selection to current economic conditions. To this end, we propose a two-step decision rule that uses current information to select the best forecast for the future date of interest. We illustrate the usefulness of our approach by comparing forecasts from leading parameter-reduction methods for macroeconomic forecasting using a large number of predictors.
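To make the conditional-evaluation idea concrete, here is a minimal sketch of a Wald-type conditional predictive ability test in the spirit described above, not the paper's implementation. It assumes one-step-ahead forecasts and the illustrative test-function choice h_t = (1, ΔL_t), i.e., a constant plus the lagged loss differential; the statistic is n·Z̄'Ω̂⁻¹Z̄ with Z_t = h_t·ΔL_{t+1}, asymptotically chi-squared with q degrees of freedom under the null of equal conditional predictive ability.

```python
# Sketch of a conditional predictive-ability test. Illustrative assumptions:
# one-step-ahead forecasts (so no HAC correction is needed for the variance
# under the null) and test functions h_t = (1, dL_t). Function names are ours.
import numpy as np
from scipy import stats

def conditional_pa_test(loss1, loss2):
    """Test H0: E[dL_{t+1} | h_t] = 0 using h_t = (1, dL_t)."""
    d = np.asarray(loss1, dtype=float) - np.asarray(loss2, dtype=float)
    h = np.column_stack([np.ones(len(d) - 1), d[:-1]])  # instruments at time t
    z = h * d[1:, None]                  # Z_t = h_t * dL_{t+1}
    n, q = z.shape
    zbar = z.mean(axis=0)
    omega = z.T @ z / n                  # sample second-moment matrix of Z_t
    stat = n * zbar @ np.linalg.solve(omega, zbar)   # n * Zbar' Omega^{-1} Zbar
    pval = stats.chi2.sf(stat, df=q)     # chi-squared(q) under H0
    return stat, pval

# Usage: stat, p = conditional_pa_test(squared_errors_model1, squared_errors_model2)
```

Including the lagged loss differential in h_t is what gives the test its conditional flavor: a rejection indicates that today's relative performance helps predict which forecast will be more accurate tomorrow.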
We propose new methods for comparing the out-of-sample forecasting performance of two competing models in the presence of possible instabilities. The main idea is to develop a measure of the relative local forecasting performance for the two models, and to investigate its stability over time by means of statistical tests. We propose two tests (the Fluctuation test and the One-Time Reversal test) that analyze the evolution of the models' relative performance over historical samples. In contrast to previous approaches to forecast comparison, which are based on measures of global performance, we focus on the entire time path of the models' relative performance, which may contain useful information that is lost when looking for the model that forecasts best on average. We apply our tests to the analysis of the time variation in the out-of-sample forecasting performance of monetary models of exchange rate determination relative to the random walk.
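A rough sketch of the "relative local performance" idea follows: a standardized mean loss differential computed over a rolling window, whose largest excursion would be compared with the critical values tabulated in the paper (not reproduced here). The full-sample scale estimate without a HAC correction and the window size m are our illustrative simplifications.

```python
# Sketch of a Fluctuation-style rolling statistic. Assumptions (ours, for
# illustration): full-sample standard deviation as the scale estimate and a
# user-chosen window length m; critical values depend on m/n and are tabulated
# in the paper, so no decision rule is hard-coded here.
import numpy as np

def fluctuation_path(loss1, loss2, m):
    """F_t = sigma^{-1} m^{-1/2} * sum_{j=t-m+1}^{t} dL_j over rolling windows."""
    d = np.asarray(loss1, dtype=float) - np.asarray(loss2, dtype=float)
    sigma = d.std(ddof=1)                      # full-sample scale (no HAC)
    csum = np.concatenate([[0.0], np.cumsum(d)])
    roll_sum = csum[m:] - csum[:-m]            # rolling sums of length m
    return roll_sum / (sigma * np.sqrt(m))

# Instability is signaled when max(abs(fluctuation_path(...))) exceeds the
# tabulated critical value for the chosen window-to-sample ratio.
```

Plotting this path over time is what reveals episodes in which one model temporarily dominates, information that a single global average would wash out.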
We propose an encompassing test for comparing conditional quantile forecasts in an out-of-sample framework. Our test provides a basis for forecast combination when encompassing is rejected. Its central features are (1) use of the "tick" loss function, (2) a conditional approach to out-of-sample evaluation, and (3) derivation in an environment with asymptotically nonvanishing estimation uncertainty. Our approach is valid under general conditions; the forecasts can be based on nested or nonnested models and can be obtained by general estimation procedures. We illustrate the test properties in a Monte Carlo experiment and apply it to evaluate and compare four popular value-at-risk models.
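The "tick" loss central to this approach is easy to make concrete. Below is a minimal sketch, assuming a quantile level alpha and two competing conditional quantile (e.g., value-at-risk) forecasts q1 and q2; the convex grid-search combination is an illustrative stand-in for the paper's encompassing-based combination, not its procedure.

```python
# Sketch of the "tick" (check) loss rho_alpha(e) = (alpha - 1{e < 0}) * e and
# a simple convex combination chosen to minimize it. The grid search and the
# restriction to convex weights are illustrative simplifications.
import numpy as np

def tick_loss(y, q, alpha):
    """Average tick loss of quantile forecast q for outcomes y at level alpha."""
    e = np.asarray(y, dtype=float) - np.asarray(q, dtype=float)
    return np.mean((alpha - (e < 0)) * e)

def combine_quantile_forecasts(y, q1, q2, alpha):
    """Convex weight on q1 minimizing average tick loss of w*q1 + (1-w)*q2."""
    grid = np.linspace(0.0, 1.0, 101)
    losses = [tick_loss(y, w * np.asarray(q1) + (1 - w) * np.asarray(q2), alpha)
              for w in grid]
    return grid[int(np.argmin(losses))]
```

An interior optimal weight (strictly between 0 and 1) is the combination analogue of rejecting encompassing: neither forecast contains all the information in the other.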