A new modelling strategy that provides a practical approach to incorporating long-run structural relationships, suggested by economic theory, in an otherwise unrestricted VAR model is applied to construct a small quarterly macroeconometric model of the UK, estimated over 1965q1-1999q4 in nine variables: domestic and foreign outputs, prices and interest rates, oil prices, the nominal effective exchange rate, and real money balances. The aim is to develop a model with a transparent and theoretically coherent foundation. Tests of restrictions on the long-run relations of the model are presented. The dynamic properties of the model are discussed and monetary policy shocks identified.

Over the past two decades, there has been growing interest in developing macroeconomic models with transparent theoretical foundations and flexible dynamics that fit the historical time series data reasonably well. The modelling framework described in the present paper, along with the work of King et al., represents the first steps towards this aim. However, our work is distinguished from these earlier contributions in three respects. First, we develop a long-run framework suitable for modelling a small open macroeconomy like the UK. The models of King et al., Gali and Crowder et al. are closed-economy models suitable for modelling an economy such as the US. The four-variable model of Mellander et al. is made open only through a terms-of-trade variable added to the three-variable consumption-investment-income model analysed by King et al., and does not have a rich enough structure for the analysis of many open-economy macroeconomic problems of interest. Second, we describe a new strategy which provides a practical approach to incorporating the long-run structural relationships suggested by economic theory in an otherwise unrestricted vector autoregressive (VAR) model.
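The core idea of the long-run structural approach is that theory pins down cointegrating (long-run) relations, while the short-run dynamics remain data-determined. As a minimal illustration of that idea (not the paper's actual estimator, which treats a nine-variable system), the following sketch estimates a single cointegrating relation and the associated error-correction adjustment by two-step OLS on simulated data; all series and coefficient values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
# Two I(1) series tied together by the long-run relation y2 = 0.5*y1 + noise.
trend = np.cumsum(rng.normal(size=T))
y1 = trend + rng.normal(scale=0.3, size=T)
y2 = 0.5 * trend + rng.normal(scale=0.3, size=T)

# Step 1: estimate the long-run (cointegrating) relation by OLS.
X = np.column_stack([np.ones(T), y1])
b = np.linalg.lstsq(X, y2, rcond=None)[0]
ect = y2 - X @ b  # error-correction term: deviation from the long run

# Step 2: short-run dynamics -- regress the change in y2 on the lagged
# deviation; a negative coefficient means y2 adjusts back towards the
# long-run relation after a disturbance.
dy2 = np.diff(y2)
Z = np.column_stack([np.ones(T - 1), ect[:-1]])
g = np.linalg.lstsq(Z, dy2, rcond=None)[0]
print("long-run slope:", b[1], "adjustment coefficient:", g[1])
```

The two-step structure mirrors the separation the paper exploits: theory restricts the long-run relation (step 1), while the adjustment dynamics (step 2) are left unrestricted.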
Third, we employ new econometric techniques in the construction of the model and in the testing of the long-run properties predicted by the theory. The description of the modelling work not only provides one of the first examples of the use of these techniques in an applied context, but also includes a discussion of some bootstrap experiments designed to investigate the small-sample properties of the tests employed. Hence, the paper reinforces the arguments of the papers cited above by emphasising the (statistical and economic) importance of the long run in macroeconometric modelling. But it also provides a practical demonstration of how these techniques can be applied.
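The rationale for bootstrap experiments of the kind mentioned above is that asymptotic critical values can be unreliable in the sample sizes typical of quarterly macro data. The toy experiment below (my construction, not the paper's design) shows the generic mechanics: simulate small samples under the null, compare the rejection rate of a nominal-5% test using the asymptotic critical value against a bootstrap-calibrated one.

```python
import numpy as np

rng = np.random.default_rng(4)

def t_stat(x):
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))

# Monte Carlo experiment: with T = 20 observations of skewed (mean-zero)
# data, compare rejection rates of a nominal-5% two-sided t-test under
# the asymptotic critical value and a bootstrap-calibrated critical value.
T, reps, B = 20, 300, 199
rej_asym = rej_boot = 0
for _ in range(reps):
    x = rng.exponential(size=T) - 1.0  # true mean is zero, but skewed
    t0 = abs(t_stat(x))
    rej_asym += t0 > 1.96
    # Resample the demeaned data to mimic the null distribution.
    xc = x - x.mean()
    xb = xc[rng.integers(0, T, size=(B, T))]
    tb = np.abs(xb.mean(axis=1) / (xb.std(axis=1, ddof=1) / np.sqrt(T)))
    rej_boot += t0 > np.quantile(tb, 0.95)
print("asymptotic size:", rej_asym / reps, "bootstrap size:", rej_boot / reps)
```

The same logic scales up to the cointegration-rank and over-identifying-restriction tests used in the model, where the gap between nominal and actual size can be substantial.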
A popular explanation for the demise of the UK's monetary targeting regime in the 1980s blames the fluctuating predictive relationships between broad money and both inflation and real output growth. Yet ex post policy analysis based on heavily-revised data suggests no fluctuations in the predictive content of money. In this paper, we investigate the predictive relationships for inflation and output growth using both real-time and heavily-revised data. We consider a large set of recursively estimated Vector Autoregressive (VAR) and Vector Error Correction (VECM) models, which differ in lag length and in the number of cointegrating relationships. We use Bayesian model averaging (BMA) to demonstrate that real-time monetary policymakers faced considerable model uncertainty. The in-sample predictive content of money fluctuated during the 1980s as a result of data revisions in the presence of model uncertainty. This feature is only apparent with real-time data, as heavily-revised data obscure these fluctuations. Out-of-sample predictive evaluations rarely suggest that money matters for either inflation or real output. We conclude that both data revisions and model uncertainty contributed to the demise of the UK's monetary targeting regime.

JEL Classification: C11, C32, C53, E51, E52.
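Bayesian model averaging weights each candidate model by its (approximate) posterior probability rather than committing to a single specification. A common shortcut, sketched below on simulated data, approximates those posterior weights with BIC; the model set here is just a range of AR lag lengths, a deliberate simplification of the VAR/VECM space the paper considers.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 120
# Simulated AR(2) "growth" series; in the paper the model set spans many
# VAR/VECM specifications, here just the AR lag length for illustration.
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def bic_ar(y, p):
    """BIC of an OLS-estimated AR(p) with intercept."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] +
                        [y[p - i:T - i] for i in range(1, p + 1)])
    b = np.linalg.lstsq(X, Y, rcond=None)[0]
    s2 = ((Y - X @ b) ** 2).mean()
    n = len(Y)
    return n * np.log(s2) + X.shape[1] * np.log(n)

# Approximate posterior model probabilities with BIC weights
# (equal prior model probabilities assumed).
lags = [1, 2, 3, 4]
bics = np.array([bic_ar(y, p) for p in lags])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()
print(dict(zip(lags, np.round(w, 3))))
```

When no single model dominates, the weights are spread across specifications; that dispersion is one concrete measure of the "model uncertainty" facing a real-time policymaker.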
Methods are described for the appropriate use of data obtained and analysed in real time to represent the output gap. The methods employ cointegrating VAR techniques to model real-time measures and realisations of output series jointly. The model is used to mitigate the impact of data revisions; to generate forecasts that deliver economically meaningful output trends and that take into account the end-of-sample problems encountered in measuring these trends; and to calculate probability forecasts that convey clearly the uncertainties associated with the gap measures. The methods are applied to US data for 1965q4-2004q4, and the improvements over standard methods are illustrated.
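A probability forecast of the kind described reports the chance of an economically meaningful event (say, a positive output gap) rather than a point estimate. The mechanics can be sketched by stochastic simulation; the AR(1) gap process and all parameter values below are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
phi, sigma = 0.8, 0.4   # assumed gap persistence and shock volatility
gap_now = -0.6          # current output-gap estimate, percent (illustrative)

# Probability forecast by simulation: draw many 4-quarter-ahead paths from
# the assumed AR(1) gap process and report the share with a positive gap.
h, n_sims = 4, 20_000
g = np.full(n_sims, gap_now)
for _ in range(h):
    g = phi * g + rng.normal(scale=sigma, size=n_sims)
prob_positive = (g > 0).mean()
print("P(gap > 0 in 4 quarters):", round(prob_positive, 3))
```

In the paper's setting the simulated paths would come from the estimated cointegrating VAR, so the reported probability also reflects parameter and data-revision uncertainty.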
We characterise the relationships between preliminary and subsequent measurements for 16 commonly-used UK macroeconomic indicators drawn from two existing real-time data sets and a new nominal-variable database. Most preliminary measurements are biased predictors of subsequent measurements, and some revision series are affected by multiple structural breaks. To illustrate how these findings facilitate real-time forecasting, we use a vector autoregression to generate real-time one-step-ahead probability event forecasts for 1990Q1 to 1999Q2. Ignoring the predictability in initial measurements considerably understates the probability of above-trend output growth.
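A standard way to test whether preliminary measurements are biased predictors of subsequent ones is a Mincer-Zarnowitz style regression of the final value on the initial release: under unbiasedness the intercept is zero and the slope is one. The sketch below runs this check on simulated data with a built-in downward bias in the first release; the data-generating numbers are illustrative, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100
# Simulated "final" data and "preliminary" announcements with a systematic
# downward bias, mimicking a predictable revision process.
final = rng.normal(loc=2.0, scale=1.0, size=T)        # e.g. final growth, %
prelim = final - 0.4 + rng.normal(scale=0.3, size=T)  # biased first release

# Mincer-Zarnowitz style check: regress final on preliminary; under
# unbiasedness the intercept is 0 and the slope is 1.
X = np.column_stack([np.ones(T), prelim])
b, *_ = np.linalg.lstsq(X, final, rcond=None)
mean_revision = (final - prelim).mean()
print("intercept, slope:", b, "mean revision:", round(mean_revision, 2))
```

A rejection of (0, 1) means revisions are partly predictable from the initial release, which is exactly the information a real-time forecaster can exploit.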
SUMMARY

Any non-stationary series can be decomposed into permanent (or 'trend') and transitory (or 'cycle') components. Typically some atheoretic pre-filtering procedure is applied to extract the permanent component. This paper argues that analysis of the fundamental underlying stationary economic processes should instead be central to this process. We present a new derivation of multivariate Beveridge-Nelson permanent and transitory components, whereby the latter can be derived explicitly as a weighting of observable stationary processes. This allows far clearer economic interpretations. Different assumptions on the fundamental stationary processes yield distinctly different results, but this reflects deep economic uncertainty. We illustrate with an example using Garratt et al.'s UK macroeconometric model.
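In the simplest univariate case the Beveridge-Nelson trend is the level the long-horizon forecast converges to, net of the deterministic drift, and when the growth rate follows an AR(1) it has a closed form: trend_t = y_t + phi/(1-phi) * (dy_t - mu). The sketch below illustrates this special case on simulated data; the multivariate derivation in the paper generalises the same forecast-based definition, and all parameter values here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 300
phi, mu = 0.4, 0.5  # assumed growth persistence and mean growth
# Simulate a log-level series whose growth rate follows an AR(1): the
# textbook setting in which the BN decomposition has a closed form.
dy = np.zeros(T)
dy[0] = mu
for t in range(1, T):
    dy[t] = mu + phi * (dy[t - 1] - mu) + rng.normal(scale=0.5)
y = np.cumsum(dy)

# BN trend: the limit of the long-horizon forecast of the level, after
# removing the deterministic drift. For AR(1) growth:
#   trend_t = y_t + phi/(1-phi) * (dy_t - mu),   cycle_t = y_t - trend_t.
trend = y + (phi / (1 - phi)) * (dy - mu)
cycle = y - trend
print("cycle mean:", cycle.mean(), "cycle std:", cycle.std())
```

Note that the cycle is an explicit weighting of the observable stationary process (dy - mu), which is the feature the paper exploits to give the transitory component a direct economic interpretation.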
This chapter describes the empirical work underlying the construction of the UK model, discusses the results obtained from testing its long-run properties, and compares the model with benchmark univariate models of the variables. The description of the modelling work not only provides one of the first examples of the use of the long-run structural cointegrating VAR techniques in an applied context, but it also includes a discussion of bootstrap experiments designed to investigate the small-sample properties of the tests employed.