2012
DOI: 10.1016/j.jeconom.2012.01.026

Model selection when there are multiple breaks

Abstract: We consider model selection facing uncertainty over the choice of variables and the occurrence and timing of multiple location shifts. General-to-simple selection is extended by adding an impulse indicator for every observation to the set of candidate regressors: see Johansen and Nielsen (2009). We apply that approach to a fat-tailed distribution, and to processes with breaks: Monte Carlo experiments show its capability of detecting up to 20 shifts in 100 observations, while jointly selecting va…

Cited by 124 publications (81 citation statements)
References 14 publications
“…The power loss from tighter significance levels is usually not substantial relative to, say, a t-distribution with few degrees of freedom. However, Castle, Doornik and Hendry (2010) show that impulse-indicator saturation (see Hendry, Johansen and Santos, 2008, and Johansen and Nielsen, 2009) is a successful antidote for fat-tailed error processes.…”
Section: Selection Effects and Bias Corrections
confidence: 99%
“…When applied to models with fat-tailed error distributions, much of the non-normality can be picked up by the impulse indicators, so that using critical values based on normality for inference is reasonable (Castle, Doornik, and Hendry 2012; Doornik and Hendry 2014, Ch. 15.6).…”
confidence: 99%
“…Multi-path search algorithms with tight critical values can handle more candidate variables, N, than observations, T. As implemented in automatic model selection algorithms like Autometrics (see Doornik, 2009), IIS enables jointly locating breaks with selection over variables, functional forms and lags: see Castle, Doornik and Hendry (2012). Some well-known procedures are variants of IIS, such as the Chow (1960) test (sub-sample IIS over T − k + 1 to T without selection), and recursive estimation, which is equivalent to IIS over future samples, reducing indicators one at a time.…”
Section: Unpredictability and Model Selection
confidence: 99%
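The idea running through these citation statements (add an impulse indicator for every observation, retain those significant at a tight critical value, then re-select jointly) can be sketched in a few lines. The following is a minimal illustration of a split-half IIS variant on a simulated location shift; the function names, the simple two-block search, and the data-generating process are our assumptions for exposition, not the Autometrics implementation.

```python
# Minimal sketch of split-half impulse-indicator saturation (IIS).
# Assumptions for illustration: intercept-only regression, a single
# simulated location shift, and a tight critical value of 3.29 (~0.1%).
import numpy as np

def ols_tstats(X, y):
    """OLS coefficients and conventional t-statistics."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, beta / se

def impulse_dummies(T, obs):
    """One 0/1 impulse indicator column per observation index in `obs`."""
    D = np.zeros((T, len(obs)))
    for j, t in enumerate(obs):
        D[t, j] = 1.0
    return D

def iis_split_half(y, X, crit=3.29):
    """Saturate each half of the sample with impulse indicators, retain
    those with |t| > crit, then re-select jointly over the retained set."""
    T, k = X.shape
    retained = []
    for block in (list(range(T // 2)), list(range(T // 2, T))):
        Z = np.hstack([X, impulse_dummies(T, block)])
        _, tstats = ols_tstats(Z, y)
        retained += [t for j, t in enumerate(block) if abs(tstats[k + j]) > crit]
    if retained:  # joint re-selection over the union of retained indicators
        Z = np.hstack([X, impulse_dummies(T, retained)])
        _, tstats = ols_tstats(Z, y)
        retained = [t for j, t in enumerate(retained) if abs(tstats[k + j]) > crit]
    return sorted(retained)

rng = np.random.default_rng(0)
T = 100
X = np.ones((T, 1))            # intercept only
y = 0.1 * rng.normal(size=T)   # small-noise DGP so the shift dominates
y[60:] += 5.0                  # location shift from t = 60 onward
breaks = iis_split_half(y, X)
print(breaks[0], breaks[-1])   # indicators flag the shifted observations
```

Because a level shift is a run of consecutive outliers relative to the pre-break mean, IIS flags each shifted observation individually; grouping the retained indicators into a step dummy is a subsequent modelling step.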