This book considers the econometric analysis of both stationary and non‐stationary processes, which may be linked by equilibrium relationships. It provides a wide‐ranging account of the main tools, techniques, models, concepts, and distributions involved in the modelling of integrated processes (i.e. those that accumulate the effects of past shocks). Since the focus is on equilibrium concepts, including co‐integration and error‐correction, the analysis begins with a discussion of the application of these concepts to stationary empirical models. Later chapters show how integrated processes can be reduced to this case by suitable transformations that take advantage of co‐integrating (equilibrium) relationships. The concepts of co‐integration and error‐correction models are shown to be fundamental in this modelling strategy. Practical modelling advice and empirical illustrations are provided. Knowledge of econometrics, statistics, and matrix algebra at the level of a final‐year undergraduate or first‐year graduate course in econometrics is sufficient for most of the book. Other mathematical tools are described as they arise.
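As a concrete illustration of the reduction described above (a standard textbook example, not an excerpt from the book), suppose $y_t$ and $x_t$ are each I(1) but co-integrated; the error-correction reparameterisation then involves only stationary terms:

```latex
% Standard illustration (assumed notation, not drawn from the book itself):
% the co-integrating relationship and the implied error-correction model.
\[
  y_t - \beta x_t = u_t \sim \mathrm{I}(0)
  \qquad \text{(co-integrating / equilibrium relationship)}
\]
\[
  \Delta y_t = \gamma\,\Delta x_t + \alpha\,(y_{t-1} - \beta x_{t-1}) + \varepsilon_t .
\]
% Every term -- the differences and the lagged equilibrium error -- is I(0),
% so the model can be analysed with the stationary-case tools discussed
% in the early chapters.
```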
This systematic and integrated framework for econometric modelling is organized in terms of three levels of knowledge: probability, estimation, and modelling. All necessary concepts of econometrics (including exogeneity and encompassing), models, processes, estimators, and inference procedures (centred on maximum likelihood) are discussed with solved examples and exercises. Practical problems in empirical modelling, such as model discovery, evaluation, and data mining, are addressed and illustrated using the software system PcGive. Background analyses cover matrix algebra, probability theory, multiple regression, stationary and non‐stationary stochastic processes, asymptotic distribution theory, Monte Carlo methods, numerical optimization, and macro‐econometric models. The reader will master the theory and practice of modelling non‐stationary (cointegrated) economic time series, based on a rigorous theory of reduction.
This book provides a formal analysis of the models, procedures, and measures of economic forecasting with a view to improving forecasting practice. David Hendry and Michael Clements base the analyses on assumptions pertinent to the economies to be forecast, viz. a non-constant, evolving economic system, and econometric models whose form and structure are unknown a priori. The authors find that conclusions which can be established formally for constant-parameter stationary processes and correctly-specified models often do not hold when unrealistic assumptions are relaxed. Despite the difficulty of proceeding formally when models are mis-specified in unknown ways for non-stationary processes that are subject to structural breaks, Hendry and Clements show that significant insights can be gleaned. For example, a formal taxonomy of forecasting errors can be developed, the role of causal information clarified, intercept corrections re-established as a method for achieving robustness against forms of structural change, and measures of forecast accuracy re-interpreted.
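The role of intercept corrections can be sketched in one step (our illustration of the standard argument, not a passage from the book): adding the latest observed error to the model forecast offsets a location shift that has already occurred.

```latex
% Illustrative sketch of a one-step intercept correction (assumed notation).
% Model forecast from origin T and the latest in-sample error:
\[
  \hat y_{T+1\mid T}, \qquad \hat u_T = y_T - \hat y_{T\mid T-1}.
\]
% The intercept-corrected forecast carries the recent error forward:
\[
  \tilde y_{T+1\mid T} = \hat y_{T+1\mid T} + \hat u_T .
\]
% If the intercept shifted by delta shortly before T, then E[\hat u_T] is
% approximately delta, so the correction removes the systematic bias the
% uncorrected forecast would otherwise suffer, at the cost of a larger
% error variance.
```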
We consider forecasting using a combination of models when no model coincides with a non-constant data generation process (DGP). Practical experience suggests that combining forecasts adds value, and can even dominate the best individual device. We show why this can occur when forecasting models are differentially mis-specified, and is likely to occur when the DGP is subject to location shifts. Moreover, averaging may then dominate over estimated weights in the combination. Finally, it cannot be proved that only non-encompassed devices should be retained in the combination. Empirical and Monte Carlo illustrations confirm the analysis.

…solutions to forecast biases and inefficiencies than pooling forecasts. Moreover, it is less easy to see why a combination need improve over the best of a group, particularly if there are some decidedly poor forecasts in that group.

Second, in non-stationary time series, most forecasts will fail in the same direction when forecasting over a period within which a break unexpectedly occurs. Combination is unlikely to provide a substantial improvement over the best individual forecasts in such a setting. Nevertheless, what will occur when forecasting after a location shift depends on the extent of model mis-specifications, data correlations, the sizes of breaks, and so on, so combination might help. Since a theory of forecasting allowing for model mis-specification interacting with intermittent location shifts has explained many other features of the empirical forecasting literature (see Clements and Hendry 1999), we explore the possibility that it can also account for the benefits from pooling.

Third, averaging reduces variance to the extent that separate sources of information are used. Since we allow all models to be differentially mis-specified, such variance reduction remains possible. Nevertheless, we will ignore sample estimation uncertainty to focus on specification issues, so any gains from averaging also reducing that source of variance will be additional to those we delineate.

Next, an alternative interpretation of combination is that, relative to a 'baseline' forecast, additional forecasts act like intercept corrections (ICs). It is well known that appropriate ICs can improve forecasting performance not only if there are structural breaks, but also if there are deterministic mis-specifications. Indeed, Clements and Hendry (1999) present eight distinct interpretations of the role that ICs can play in forecasting, and, for example, interpret the cross-country pooling in Hoogstrate et al. (2000) as a specific form of IC.

Finally, pooling can also be viewed as an application of Stein-James 'shrinkage' estimation (see e.g. Judge and Bock 1978). If the unknown future value is viewed as a 'meta-parameter' of which all the individual forecasts are estimates, then averaging may provide a 'better' estimate thereof. Below, we consider whether data-based weighting will be useful when the process is subject to unanticipated breaks. Thus, we evaluate the possible benefits of combining forecasts i...
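The mechanisms sketched above lend themselves to a small simulation. The block below is a minimal Monte Carlo sketch, not the paper's design: it assumes a static DGP with a one-off location shift at the forecast origin, two forecasting devices that each omit one relevant regressor, equal-weight pooling, and combination weights estimated by an in-sample least-squares regression of the outcome on the two forecasts. All parameter values are assumptions made for illustration.

```python
# Sketch: equal-weight pooling vs. estimated combination weights when the DGP
# undergoes a location shift at the forecast origin (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)

T, R = 200, 1000          # in-sample size and number of Monte Carlo replications
shift = 2.0               # size of the location shift at the forecast period

mse_avg, mse_w, mse_best = [], [], []

for _ in range(R):
    # DGP: y_t = mu_t + 0.5*x1_t + 0.5*x2_t + e_t, with mu shifting only at t = T
    x1 = rng.normal(size=T + 1)
    x2 = rng.normal(size=T + 1)
    e = rng.normal(scale=0.5, size=T + 1)
    mu = np.r_[np.zeros(T), shift]
    y = mu + 0.5 * x1 + 0.5 * x2 + e

    # Two differentially mis-specified devices: each omits one regressor
    b1 = np.polyfit(x1[:T], y[:T], 1)       # device 1 uses x1 only
    b2 = np.polyfit(x2[:T], y[:T], 1)       # device 2 uses x2 only
    f1 = np.polyval(b1, x1)
    f2 = np.polyval(b2, x2)

    # Combination weights estimated in-sample by OLS of y on (1, f1, f2)
    X = np.column_stack([np.ones(T), f1[:T], f2[:T]])
    w = np.linalg.lstsq(X, y[:T], rcond=None)[0]

    f_avg = 0.5 * (f1[T] + f2[T])                  # equal-weight pooling
    f_w = w[0] + w[1] * f1[T] + w[2] * f2[T]       # estimated-weight pooling

    mse_avg.append((y[T] - f_avg) ** 2)
    mse_w.append((y[T] - f_w) ** 2)
    # Ex-post better of the two individual devices (a generous benchmark)
    mse_best.append(min((y[T] - f1[T]) ** 2, (y[T] - f2[T]) ** 2))

print("MSFE, equal weights:    ", np.mean(mse_avg))
print("MSFE, estimated weights:", np.mean(mse_w))
print("MSFE, best individual:  ", np.mean(mse_best))
```

In this setup both devices tend to miss in the same direction after the shift, so estimated weights add estimation noise without any systematic advantage over the simple average, in line with the point made above that averaging may dominate estimated weights.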
Linear models are invariant under non-singular, scale-preserving linear transformations, whereas mean square forecast errors (MSFEs) are not. Different rankings may result across models or methods from choosing alternative yet isomorphic representations of a process. One approach can dominate others for comparisons in levels, yet lose to one method for differences, to a second for cointegrating vectors, and to a third for combinations of variables. The potential for switches in ranking is related to criticisms of the inadequacy of MSFE against encompassing criteria, which are invariant under linear transforms and entail MSFE dominance. An invariant evaluation criterion which avoids such misleading outcomes is examined in a Monte Carlo study of forecasting methods.
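The non-invariance argument can be stated compactly. The sketch below uses our own notation, and assumes the invariant criterion takes the determinant form used in this literature (a generalised forecast-error second moment); the paper's exact definition may differ in detail.

```latex
% Why trace-based MSFE rankings can switch under linear transforms while a
% determinant-based criterion cannot (assumed notation).
Let $e_{t+h}$ be the $h$-step-ahead forecast-error vector with second-moment
matrix $\Phi_h = \mathsf{E}[e_{t+h} e_{t+h}']$.  MSFE comparisons rank methods
by $\operatorname{tr}(\Phi_h)$ (or by its diagonal elements).  Under a
non-singular, scale-preserving transformation $M$ of the data the errors
become $M e_{t+h}$, and in general
\[
  \operatorname{tr}\!\bigl(M \Phi_h M'\bigr) \neq \operatorname{tr}(\Phi_h),
\]
so rankings can switch between isomorphic representations (levels,
differences, cointegrating combinations).  By contrast, stacking the errors
over horizons, $\tilde e = (e_{t+1}',\ldots,e_{t+h}')'$, a determinant-based
measure
\[
  \bigl|\mathsf{E}[\tilde e\,\tilde e\,']\bigr|
\]
satisfies
$\bigl|\tilde M\,\mathsf{E}[\tilde e\,\tilde e\,']\,\tilde M'\bigr|
 = |\det \tilde M|^{2}\,\bigl|\mathsf{E}[\tilde e\,\tilde e\,']\bigr|
 = \bigl|\mathsf{E}[\tilde e\,\tilde e\,']\bigr|$
whenever the induced transformation $\tilde M$ has $|\det \tilde M| = 1$, and
so is unchanged across such representations.
```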