We consider fixed-smoothing asymptotics for the Diebold and Mariano (Journal of Business and Economic Statistics, 1995, 13(3), 253-263) test of predictive accuracy. We show that this approach delivers predictive accuracy tests that are correctly sized even when only a small number of out-of-sample observations is available. We apply fixed-smoothing asymptotics to the Diebold and Mariano test to evaluate the predictive accuracy of the Survey of Professional Forecasters (SPF) and of the European Central Bank Survey of Professional Forecasters (ECB SPF) against a simple random walk. Our results show that the predictive abilities of the SPF and of the ECB SPF were partially spurious.
INTRODUCTION

Good forecasts are key to good decision making, and being able to compare predictive accuracy is key to discriminating between good and bad forecasts. To this end, one of the most widely used tests for comparing the predictive accuracy of two competing forecasts is the Diebold and Mariano (1995; DM) test.

The DM test is based on a loss function associated with the forecast errors of each forecast, and it tests the null hypothesis of a zero expected loss differential between the two competing forecasts (an illustrative sketch of the statistic is given below). This framework allows us to test for equal predictive accuracy using any loss function, and the test statistic is valid for contemporaneously correlated, serially correlated, and nonnormal forecast errors.

The DM approach treats forecast errors as model free, so the test is valid also when the forecasts are produced by unknown models, as is the case, for example, with survey forecasts. When the forecasts are produced by estimated models, nested or nonnested, it is in general necessary to account for the impact of parameter estimation uncertainty on the distribution of the forecast accuracy test (see Clark & McCracken, 2001; West, 1996). In this case, the limiting distribution of the test statistic depends on the specific modeling assumptions made in obtaining the forecast errors (see Clark & McCracken, 2013; West, 2006). West (1996) showed that in some cases the DM approach is asymptotically valid even when forecasts are obtained from estimated models. This happens when the number of in-sample observations is large relative to the number of out-of-sample observations, or when the in-sample and out-of-sample loss functions coincide, for example when a quadratic loss is used as the evaluation criterion for models estimated by ordinary least squares (OLS). However, in practice it is not uncommon to compare forecasts produced by models for which accounting for parameter estimation uncertainty is not tractable. In addition, if the objective is to compare forecasting methods as opposed to forecasting models, then Giacomini and White (2006) showed that the DM test can still be applied in an environment with asymptotically nonvanishing estimation uncertainty. For these reasons, the DM test is still widely applied even when forecasts are obtained from estimated models (see Diebold, 2015).

One additional reason for the success of the DM test is that the t...
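To fix ideas, the following is a minimal sketch of a textbook DM statistic under quadratic loss, with the long-run variance of the loss differential estimated by a Bartlett-kernel (Newey-West) estimator. It is an illustration rather than the paper's implementation: the function name, the rule-of-thumb bandwidth, and the choice of quadratic loss are assumptions made here for concreteness.

```python
import numpy as np

def dm_statistic(e1, e2, bandwidth=None):
    """Illustrative DM statistic for H0: E[d_t] = 0, with d_t = e1_t**2 - e2_t**2."""
    e1, e2 = np.asarray(e1, dtype=float), np.asarray(e2, dtype=float)
    d = e1**2 - e2**2                              # loss differential under quadratic loss
    P = d.size                                     # number of out-of-sample observations
    d_bar = d.mean()
    if bandwidth is None:
        bandwidth = int(np.floor(P ** (1.0 / 3.0)))  # illustrative rule-of-thumb bandwidth
    d_c = d - d_bar
    # Bartlett-kernel (Newey-West) estimate of the long-run variance of d_t
    lrv = d_c @ d_c / P
    for j in range(1, bandwidth + 1):
        gamma_j = d_c[j:] @ d_c[:-j] / P           # j-th sample autocovariance
        lrv += 2.0 * (1.0 - j / (bandwidth + 1.0)) * gamma_j
    return d_bar / np.sqrt(lrv / P)

# Example with simulated forecast errors (hypothetical data)
rng = np.random.default_rng(0)
e1, e2 = rng.normal(size=60), rng.normal(size=60)
print(dm_statistic(e1, e2))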