1995
DOI: 10.1016/0005-1098(95)00089-6
Consistency and relative efficiency of subspace methods

Cited by 105 publications (22 citation statements) | References 8 publications
“…From that point of view, all we can know (from second-order statistics) is the spectrum, and so if, naturally, we want a unique model, the only model we can obtain is the causal stable minimum phase model: the ISS model. The standard approach to SS model fitting is the so-called state-space subspace method (Deistler, Peternell, & Scherrer, 1995; Bauer, 2005), and indeed it delivers an ISS model. The alternative approach of fitting a VARMA model (Hannan & Deistler, 1988; Lutkepohl, 1993) is equivalent to getting an ISS model.…”
Section: State Space
confidence: 99%
“…Suppose we fit an SS model to data z_t, t = 1, …, T using, for example, so-called state-space subspace methods (Deistler et al, 1995; Bauer, 2005) or VARMA methods in, for example, Lutkepohl (1993). Let F̂_YX, F̂_XY, F̂_Y.X, F̂_XoY be the corresponding GEM estimators.…”
Section: Granger Causality
confidence: 99%
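The subspace idea quoted in the excerpts above can be illustrated with a minimal sketch (this is a toy Ho-Kalman-style demonstration, not the estimators of the cited papers): for a simulated first-order state-space process, the rank of a Hankel matrix built from sample output covariances reveals the state order, and a shift-invariance ratio recovers the transition parameter. The model, parameter values, and thresholds below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true, C = 0.8, 1.0          # assumed toy scalar state-space system
N = 200_000

# simulate x_{t+1} = A x_t + w_t,  y_t = C x_t + v_t
x = 0.0
y = np.empty(N)
w = rng.standard_normal(N)
v = 0.1 * rng.standard_normal(N)
for t in range(N):
    y[t] = C * x + v[t]
    x = A_true * x + w[t]

# sample output covariances Lambda_k = E[y_{t+k} y_t]
def cov(k):
    return np.mean(y[k:] * y[:N - k])

# Hankel matrix of covariances; its numerical rank reveals the state order
p = 5
H = np.array([[cov(i + j + 1) for j in range(p)] for i in range(p)])
s = np.linalg.svd(H, compute_uv=False)
order = int(np.sum(s > 0.05 * s[0]))   # one dominant singular value -> order 1

# shift invariance: for a scalar state, Lambda_{k+1} / Lambda_k = A
A_hat = cov(2) / cov(1)
```

With this sample size the singular-value gap is unambiguous and `A_hat` lands close to the true 0.8; real subspace algorithms work with data Hankel matrices and weighted projections rather than raw covariances, but the rank-plus-shift-invariance structure is the same.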
“…ρ(A) < 1) and is limited to asymptotic results. In [7,8] it is shown that the identification error can decrease as fast as O(1/√N) up to logarithmic factors, as the number of output data N grows to infinity. While asymptotic results have been established, a finite sample analysis of subspace algorithms remains an open problem [2].…”
confidence: 99%
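The O(1/√N) rate mentioned in the excerpt can be checked empirically in a toy setting. The sketch below (a hypothetical demonstration, not the analysis of the cited works) measures the RMS error of a least-squares AR(1) coefficient estimate at two sample sizes and confirms that quadrupling N roughly halves the error.

```python
import numpy as np

rng = np.random.default_rng(2)
A = 0.7   # assumed true AR(1) coefficient

def est_error(N, reps=200):
    """RMS error of the least-squares AR(1) estimate at sample size N."""
    errs = []
    for _ in range(reps):
        e = rng.standard_normal(N)
        y = np.empty(N)
        y[0] = e[0]
        for t in range(1, N):
            y[t] = A * y[t - 1] + e[t]
        A_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
        errs.append((A_hat - A) ** 2)
    return np.sqrt(np.mean(errs))

# if the error decays like 1/sqrt(N), quadrupling N should halve it
r = est_error(400) / est_error(1600)
```

The ratio `r` comes out near 2, consistent with the root-N rate; the logarithmic factors in the cited finite-sample bounds are invisible at this scale.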
“…i_0 the estimates N̂_i are consistent, where i_0 = int(d q_bic), which is the integer closest to the product of d and the optimal lag length for an autoregressive (AR) approximation of z_t, chosen by using the Schwarz (1978) criterion over 0 ≤ q ≤ (log T)^a for some constant 0 < a < 1. Specifically, d > 1 is a sufficient condition in the stationary case (see Deistler et al, 1995), whereas d > 2 is required in the integrated case (see Bauer, 2005b). However, in finite samples the estimates N̂_i differ for different i, resulting in distinct forecasts.…”
Section: Forecasting By Exploiting Different Values Of i
confidence: 99%