2006
DOI: 10.1111/j.1467-9892.2006.00481.x
Partial autocorrelation parameterization for subset autoregression

Cited by 28 publications (25 citation statements) · References 36 publications
“…This recursion can be used to define a transformation $B:(\pi_1,\ldots,\pi_p)\mapsto(\phi_1,\ldots,\phi_p)$ that is one-to-one, continuous and differentiable inside the admissible region. This parameterisation has the advantage that in the $\pi$-space the admissible region is simply the $p$-dimensional cube with boundary surfaces corresponding to $\pm 1$, while in the $\phi$-space it is very complicated (see for instance McLeod & Zhang). As an illustration, for $p=2$ the transformation is simply $\phi_1=\pi_1(1-\pi_2)$ and $\phi_2=\pi_2$.…”
Section: Modeling and Estimation
confidence: 99%
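
The map $B$ in this excerpt is the Levinson-Durbin recursion run forward. Below is a minimal illustrative sketch; the function name `pacf_to_ar` and the NumPy implementation are ours, not from the cited papers.

```python
import numpy as np

def pacf_to_ar(pacf):
    """Map partial autocorrelations (pi_1, ..., pi_p) to AR
    coefficients (phi_1, ..., phi_p) via the Levinson-Durbin
    update phi_{k,j} = phi_{k-1,j} - pi_k * phi_{k-1,k-j},
    with phi_{k,k} = pi_k."""
    phi = np.array([], dtype=float)
    for pi_k in pacf:
        # update the existing coefficients, then append pi_k itself
        phi = np.concatenate([phi - pi_k * phi[::-1], [pi_k]])
    return phi

# p = 2 check against the excerpt: phi_1 = pi_1*(1 - pi_2), phi_2 = pi_2
print(pacf_to_ar([0.5, 0.3]))  # [0.35 0.3 ]
```

Because $|\pi_k|<1$ for each $k$ is the only constraint, every point of the open cube maps to a stationary model, which is what makes searching over subsets in $\pi$-space convenient.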
“…Following Box, Jenkins & Reinsel, the exact log-likelihood function is given by
$$\ell(\theta \mid y) = -\frac{1}{2}\left\{ n\log\sigma^2 + \frac{1}{\sigma^2}(y - X\beta)^\top M_n^{-1}(\phi)\,(y - X\beta) + \log|M_n(\phi)| \right\} + C,$$
where $C$ is a constant independent of the parameter vector $\theta$. Considering the reparameterisation given above and dropping constant terms (McLeod & Zhang), we can write the log-likelihood function as
$$\ell(\theta \mid y) = -\frac{1}{2}\left\{ n\log\sigma^2 + \frac{1}{\sigma^2}S(\pi,\beta) + \log g_p \right\},$$
where $\pi=(\pi_1,\ldots,\pi_p)^\top$, $g_p = |M_n(\phi)| = |M_p(\phi)| = \prod_{j=1}^{p}(1-\pi_j^2)^{-j}$, and
$$S(\pi,\beta) = \lambda^\top D(y,\beta)\,\lambda,$$
with $D(y,\beta)$ being the $(p+1)\times(p+1)$ matrix whose $(i,j)$-entry is the sum of $n-(i-1)-(j-1)$ squares and lagged products, defined by $D_{i,\ldots}$…”
Section: Modeling and Estimation
confidence: 99%
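
In the $\pi$-parameterisation the determinant term of this likelihood is a one-liner. Here is a hedged sketch of the $-2\log$-likelihood kernel, assuming $S(\pi,\beta)$ has already been computed from the $D$ matrix; the names `log_gp` and `neg2_loglik` are ours.

```python
import numpy as np

def log_gp(pacf):
    """log g_p = log|M_p(phi)| = -sum_{j=1}^p j * log(1 - pi_j^2)."""
    pacf = np.asarray(pacf, dtype=float)
    j = np.arange(1, pacf.size + 1)
    return -np.sum(j * np.log1p(-pacf**2))  # log1p(-x) = log(1 - x)

def neg2_loglik(pacf, S, n, sigma2):
    """-2 * log-likelihood up to the constant C:
    n*log(sigma^2) + S(pi, beta)/sigma^2 + log g_p."""
    return n * np.log(sigma2) + S / sigma2 + log_gp(pacf)
```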
“…Some attempts have been made to circumvent this problem by examining it in partial autocorrelation space (McLeod and Zhang, 2006). However, as with most forward-selection-type model selection approaches, such procedures suffer from instability (Breiman, 1996), and in the case of all-subsets selection they quickly become computationally infeasible as k grows.…”
Section: Introduction
confidence: 99%
“…The least absolute shrinkage and selection operator (LASSO) procedure (Tibshirani, 1996) was developed to overcome similar problems in the regular linear regression setting and has recently been adapted to the AR model setting. The existing LASSO procedures for AR models operate in coefficient space and are based on the conditional likelihood, due to the complexity of the complete-data likelihood in coefficient space (Wang et al., 2007; Hsu et al., 2008; Nardi and Rinaldo, 2008). For a chosen order $k$, the regular LASSO procedure estimates an AR model by finding the coefficients $\phi=(\phi_1,\ldots$…”
Section: Introduction
confidence: 99%
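
For contrast with the $\pi$-space approach, the conditional-likelihood LASSO described in this excerpt reduces to an $\ell_1$-penalised regression of $y_t$ on its first $k$ lags. A minimal sketch follows, using scikit-learn as the solver; the function name, the solver choice, and the penalty weight `alpha` are our assumptions, not from the cited papers.

```python
import numpy as np
from sklearn.linear_model import Lasso

def ar_lasso(y, k, alpha=0.1):
    """Coefficient-space LASSO for an AR(k) model: the conditional
    likelihood is Gaussian least squares of y_t on its lags, so an
    off-the-shelf LASSO solver applies directly."""
    y = np.asarray(y, dtype=float)
    n = y.size
    # row t of the design holds (y_{t-1}, ..., y_{t-k})
    X = np.column_stack([y[k - j:n - j] for j in range(1, k + 1)])
    fit = Lasso(alpha=alpha, fit_intercept=True).fit(X, y[k:])
    return fit.coef_  # sparse estimate of (phi_1, ..., phi_k)
```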