2000
DOI: 10.1111/1467-9868.00244

Bootstrap Confidence Regions Computed from Autoregressions of Arbitrary Order

Abstract: Given a linear time series, e.g. an autoregression of infinite order, we may construct a finite order approximation and use that as the basis for bootstrap confidence regions. The sieve or autoregressive bootstrap, as this method is often called, is generally seen as a competitor with the better-understood block bootstrap approach. However, in the present paper we argue that, for linear time series, the sieve bootstrap has significantly better performance than blocking methods and offers a wider range of opportuni…
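
To make the abstract's description concrete, the following is a minimal sketch of an AR-sieve bootstrap confidence interval for the mean of a linear series: fit a finite-order autoregressive approximation, resample its centred residuals, regenerate pseudo-series from the fitted recursion, and read off percentile limits. The AIC-based order choice, the statsmodels-based fitting, the burn-in length and the percentile interval are illustrative assumptions, not the paper's exact construction.

```python
# A minimal sketch of the AR-sieve bootstrap for a two-sided confidence
# interval on the mean of a linear time series. The AIC order choice, the
# burn-in length and the percentile interval are illustrative assumptions.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg, ar_select_order

def ar_sieve_bootstrap_mean_ci(x, max_order=10, n_boot=999, alpha=0.05, rng=None):
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    n = len(x)

    # Step 1: approximate the (possibly infinite-order) autoregression by a
    # finite AR(p), with p chosen by an information criterion.
    sel = ar_select_order(x, maxlag=max_order, ic="aic")
    p = max(sel.ar_lags) if sel.ar_lags else 1
    fit = AutoReg(x, lags=p).fit()

    # Step 2: centre the fitted residuals; these are resampled i.i.d.
    resid = np.asarray(fit.resid)
    resid = resid - resid.mean()
    const, phi = fit.params[0], fit.params[1:]

    # Step 3: regenerate pseudo-series from the fitted recursion and collect
    # the bootstrap distribution of the sample mean.
    burn = 100
    boot_means = np.empty(n_boot)
    for b in range(n_boot):
        eps = rng.choice(resid, size=n + burn, replace=True)
        y = np.zeros(n + burn)
        y[:p] = x[:p]                      # simple start-up values
        for t in range(p, n + burn):
            y[t] = const + phi @ y[t - p:t][::-1] + eps[t]
        boot_means[b] = y[-n:].mean()

    # Step 4: percentile confidence limits for the mean.
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

For example, calling ar_sieve_bootstrap_mean_ci(x) on a simulated AR(2) series of length 200 returns a two-sided 95% interval for the mean; the comparisons discussed in the citing papers below concern the coverage accuracy of such intervals relative to block-bootstrap alternatives.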

Cited by 73 publications (40 citation statements). References 23 publications.
“…It is valid to approximate the distribution of n^{θ/2}(X̄_n − µ) for linear, LRD processes under certain conditions (Kapetanios and Psaradakis, 2006). For short-memory, Bühlmann (1997) and Choi and Hall (2000) illustrated that the AR-sieve bootstrap method can give more accurate distribution estimation in comparison to the block bootstrap methods. However, the condition of the process for the AR-sieve bootstrap is more stringent than the block bootstrap methods, for example the process satisfies the conditions of causality, linearity and often invertibility.…”
Section: Construct the AR-sieve Bootstrap Replicates From
mentioning
confidence: 99%
“…The AR-sieve bootstrap for weakly dependent time processes was introduced by Kreiss (1992) and has been developed by Bühlmann (1997) and Bickel and Bühlmann (1999). In addition, Choi and Hall (2000) showed that the AR-sieve bootstrap was more powerful than block bootstrap methods under causal linear SRD processes based on the error in the coverage probability of a one-sided confidence interval. Kapetanios and Psaradakis (2006) and Poskitt (2008) applied the AR-sieve bootstrap to causal linear LRD time processes.…”
Section: Introduction
mentioning
confidence: 99%
“…In order to isolate the impact of an increase in heterogeneity from an increase in average persistence, different degrees of het… (2008). Bootstrap methods have also been employed for inference in univariate AR(∞) models; see Kreiss (1997), Choi and Hall (2000) and Gonçalves and Kilian (2004), among others. Extensions to the multivariate case have been proposed by Paparoditis (1996) and Inoue and Kilian (2002).…”
Section: Finite Sample Properties of IRF Estimators Under Individual
mentioning
confidence: 99%
“…This is done in order to capture the dependency structure of neighbouring observations (Liu and Braun 2010). In the literature, there is considerable evidence that the sieve bootstrap, initially proposed by Kreiss (1992) and Bühlmann (1997), usually outperforms the block bootstrap (Choi and Hall 2000). D'Amato et al (2012) apply a sieve bootstrap on the residuals of the Lee Carter model; they take up the Lee Carter parametric model firstly and then re-sample a particular class of the residuals, the so-called centred residuals, according to the design of the typical autoregressive sieve bootstrap.…”
Section: Introduction
mentioning
confidence: 99%
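
As a rough illustration of the two-step design described in the passage above (fit a parametric model first, then sieve-bootstrap its centred residuals), here is a hedged sketch; the function name, the use of statsmodels' AutoReg, and the handling of the fitted values are hypothetical and stand in for whatever model, such as Lee-Carter, produced the residuals.

```python
# A rough sketch of the two-step design: take residuals from a previously
# fitted parametric model (the fit itself is not shown), centre them, fit an
# AR sieve to them, and regenerate bootstrap pseudo-series by adding
# sieve-bootstrap residual paths back onto the fitted values.
# All names and defaults here are hypothetical.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg, ar_select_order

def residual_sieve_paths(observed, fitted, n_paths=500, max_order=8, rng=None):
    rng = np.random.default_rng(rng)
    resid = np.asarray(observed, float) - np.asarray(fitted, float)
    resid = resid - resid.mean()                     # centred residuals
    n = len(resid)

    # Fit a finite-order AR approximation to the centred residuals.
    sel = ar_select_order(resid, maxlag=max_order, ic="aic")
    p = max(sel.ar_lags) if sel.ar_lags else 1
    fit = AutoReg(resid, lags=p).fit()
    eps = np.asarray(fit.resid)
    eps = eps - eps.mean()                           # centred innovations
    const, phi = fit.params[0], fit.params[1:]

    # Regenerate residual paths from the sieve and add them back to the fit.
    burn = 50
    paths = np.empty((n_paths, n))
    for b in range(n_paths):
        e = rng.choice(eps, size=n + burn, replace=True)
        r = np.zeros(n + burn)
        for t in range(p, n + burn):
            r[t] = const + phi @ r[t - p:t][::-1] + e[t]
        paths[b] = np.asarray(fitted, float) + r[-n:]
    return paths
```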