2009
DOI: 10.1017/s0266466608090464

Bias Reduction and Likelihood-Based Almost Exactly Sized Hypothesis Testing in Predictive Regressions Using the Restricted Likelihood

Abstract: Difficulties with inference in predictive regressions are generally attributed to strong persistence in the predictor series. We show that the major source of the problem is actually the nuisance intercept parameter, and we propose basing inference on the restricted likelihood, which is free of such nuisance location parameters and also possesses small curvature, making it suitable for inference. The bias of the restricted maximum likelihood (REML) estimates is shown to be approximately 50% less than that of t…
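As a rough illustration of the restricted-likelihood idea, the sketch below computes a REML estimate of the autoregressive coefficient in a Gaussian AR(1) with unknown mean by profiling out both the mean and the error variance. It is a minimal numerical sketch under those assumptions, not the estimator or algorithm of Chen and Deo; the function names are illustrative and the brute-force covariance inversion is only practical for short series.

# Hypothetical sketch: REML estimation of rho in y_t = mu + rho*(y_{t-1} - mu) + e_t.
# The mean is handled by GLS and the error variance is profiled out, so the
# criterion depends on rho alone. Names are illustrative, not from the paper.
import numpy as np
from scipy.optimize import minimize_scalar

def reml_profile_loglik(rho, y):
    """Profile restricted log-likelihood of rho for a Gaussian AR(1) with unknown mean."""
    n = len(y)
    idx = np.arange(n)
    V = rho ** np.abs(idx[:, None] - idx[None, :]) / (1.0 - rho ** 2)  # AR(1) covariance up to sigma^2
    Vinv = np.linalg.inv(V)
    ones = np.ones(n)
    mu_gls = (ones @ Vinv @ y) / (ones @ Vinv @ ones)   # GLS estimate of the nuisance mean
    resid = y - mu_gls
    sigma2 = (resid @ Vinv @ resid) / (n - 1)           # profiled-out error variance
    _, logdetV = np.linalg.slogdet(V)
    # The log(1'V^{-1}1) term is what distinguishes REML from the full likelihood:
    # it accounts for the degree of freedom spent on estimating the mean.
    return -0.5 * ((n - 1) * np.log(sigma2) + logdetV + np.log(ones @ Vinv @ ones))

def reml_ar1(y):
    """Maximize the profile restricted log-likelihood over the stationary region."""
    res = minimize_scalar(lambda r: -reml_profile_loglik(r, y),
                          bounds=(-0.999, 0.999), method="bounded")
    return res.x

# Toy usage on a simulated persistent series
rng = np.random.default_rng(0)
n, rho_true = 100, 0.95
y = np.empty(n)
y[0] = rng.standard_normal() / np.sqrt(1 - rho_true ** 2)
for t in range(1, n):
    y[t] = rho_true * y[t - 1] + rng.standard_normal()
print("REML estimate of rho:", reml_ar1(y))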

Cited by 32 publications (16 citation statements). References 47 publications (66 reference statements).
“…ii) Calculate the restricted likelihood ratio test (RLRT) statistic of Chen and Deo (2009a, 2009b), described in Appendix A, and obtain the lower and upper bounds for c by inverting the test statistic, using the confidence level α1 in Table 1 corresponding to the estimated value of δ. Note that a different α1 is used depending on whether one is testing against a positive or a negative alternative.…”
Section: Table
Mentioning, confidence: 99%
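One way to read the inversion step is as a grid search: retain every null value of the autoregressive root that the RLRT fails to reject and report the endpoints of the retained set. The sketch below reuses reml_profile_loglik and reml_ar1 from the sketch under the abstract and substitutes a plain chi-square cutoff for the calibrated level α1 of the quoted procedure, which is also one-sided toward the relevant alternative; it shows only the mechanics, not the actual test.

# Hypothetical sketch of test inversion; reuses reml_profile_loglik and reml_ar1 above.
import numpy as np
from scipy.stats import chi2

def rlrt_confidence_interval(y, level=0.95, grid=None):
    """Confidence set for rho from inverting a (placeholder) restricted LR test."""
    if grid is None:
        grid = np.linspace(0.50, 0.999, 400)
    l_hat = reml_profile_loglik(reml_ar1(y), y)   # restricted log-likelihood at the REML estimate
    cv = chi2.ppf(level, df=1)                    # placeholder cutoff; the quoted procedure calibrates alpha_1 instead
    accepted = [r for r in grid if 2.0 * (l_hat - reml_profile_loglik(r, y)) <= cv]
    return (min(accepted), max(accepted)) if accepted else (float("nan"), float("nan"))

# Usage: lo, hi = rlrt_confidence_interval(y)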
“…The column labeled ρ_OLS gives the OLS estimates of the autoregressive root ρ. The last 2 columns give the 95% confidence intervals for the autoregressive root ρ and the corresponding local-to-unity parameter c, obtained by inverting the Chen and Deo (2009a, 2009b) … Section III and should be approximately normally distributed for all predictor variables, whereas for the scaled OLS t-test, the normal approximation is unlikely to hold given the persistence and endogeneity of the predictors. Empirical Results for Tests of Long-Run Predictability: The test results in Table 5 show that there is no robust evidence of return predictability using any of the 3 valuation ratios.…”
Section: Long-run Stock Return Predictability
Mentioning, confidence: 99%
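For reference, the standard local-to-unity parameterization that links the two reported intervals is

\rho_T \;=\; 1 + \frac{c}{T} \qquad\Longleftrightarrow\qquad c \;=\; T(\rho_T - 1),

so the endpoints of an interval for ρ map one-to-one into endpoints for c; the quoted paper's exact construction may differ in detail.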
“…asymptotic distributions are unaffected by the inclusion of deterministic components such as intercept and trend, the latter paper shows that the Bartlett correction factors highly depend upon these components. Moreover, the results obtained in this paper are theoretically important to justify the good finite sample performance of inference on the basis of restricted maximum likelihood in AR(2) models; see for instance DEO (2009) and DEO (2012). Note that in contrast to the AR(1) model, there are no exact inference techniques available in the higher-order AR models; see for instance ANDREWS and CHEN (1994).…”
Section: Introduction
Mentioning, confidence: 79%
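As background on the Bartlett remark, a Bartlett-type correction rescales the likelihood ratio statistic so that its null mean matches that of its chi-square reference; schematically,

\mathrm{LR}^{*} \;=\; \frac{q\,\mathrm{LR}}{\mathrm{E}_0[\mathrm{LR}]} \;\approx\; \frac{\mathrm{LR}}{1 + b/T},

where q is the number of restrictions and b is a model-dependent constant. The quoted point is that b changes with the intercept and trend terms included; the specific factors derived in the citing paper are not reproduced here.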
“…Due to the dependence between U_t and V_t, researchers have found that the least squares estimator for β based on the first equation in (1) is biased in finite samples when the regressor {X_t} is nearly integrated (see Stambaugh, 1999), and some bias-corrected inferences have been proposed in the literature such as the linear projection method in Amihud and Hurvich (2004) and Chen and Deo (2009). A comprehensive summary of research for model (1) can be found in Phillips and Lee (2013).…”
Section: Introduction
Mentioning, confidence: 99%
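To make the bias mechanism concrete, the sketch below implements a simple Stambaugh-type correction in the spirit of the linear projection idea: correct the OLS estimate of the AR(1) root for its well-known first-order bias, build a proxy for the predictor innovation, and add that proxy to the predictive regression so that the coefficient on the lagged predictor absorbs less of the correlated-innovation bias. The exact estimator, higher-order terms, and standard errors of Amihud and Hurvich (2004) are not reproduced, and the series names in the usage comment are hypothetical.

# Hypothetical sketch of a Stambaugh-type bias correction via linear projection.
# Model: y_t = alpha + beta*x_{t-1} + u_t,  x_t = theta + rho*x_{t-1} + v_t, with corr(u_t, v_t) != 0.
import numpy as np

def _ols(X, y):
    """OLS coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def bias_corrected_beta(y, x):
    """Return a bias-reduced estimate of beta in the predictive regression above."""
    y1, x0, x1 = y[1:], x[:-1], x[1:]
    n = len(y1)
    # Step 1: OLS for the AR(1) predictor equation, then a first-order (Kendall-type) bias correction of rho
    rho_hat = _ols(x0, x1)[1]
    rho_c = rho_hat + (1.0 + 3.0 * rho_hat) / n   # assumption: intercept-only AR(1) bias approximation
    # Step 2: proxy for the predictor innovation built from the corrected root
    v_c = x1 - rho_c * x0
    v_c = v_c - v_c.mean()
    # Step 3: augment the predictive regression with the innovation proxy;
    # the coefficient on the lagged predictor is the bias-reduced beta
    coefs = np.linalg.lstsq(np.column_stack([np.ones(n), x0, v_c]), y1, rcond=None)[0]
    return coefs[1]

# Usage (hypothetical series): beta_c = bias_corrected_beta(excess_returns, dividend_yield)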