2012
DOI: 10.3182/20120711-3-be-2027.00271
Sliced Inverse Regression for the Identification of Dynamical Systems

Abstract: The estimation of nonlinear functions can be challenging when the number of independent variables is high. This difficulty may, in certain cases, be reduced by first projecting the independent variables on a lower dimensional subspace before estimating the nonlinearity. In this paper, a statistical nonparametric dimension reduction method called sliced inverse regression is presented and a consistency analysis for dynamically dependent variables is given. The straightforward system identification application i…

Cited by 3 publications (3 citation statements)
References 17 publications (25 reference statements)
“…Some of these approaches have already been applied in a system identification setting, such as the sliced inverse regression (SIR) [11], the directional regression (DR) [10], and the discretized directional regression (DDR) [28]. More information about using inverse regression for system identification can be found in [14,15]. Another promising, but computationally more expensive, dimension reduction method is the minimum average variance estimation (MAVE) method [13,27], which uses an iterative forward regression approach.…”
Section: Dimension Reduction (mentioning)
confidence: 99%
“…Using a standardized distribution for the applied regressors ϕ, the inverse regression curve E(ϕ|y) lies in the space spanned by the original matrix B. A principal component analysis on the covariance matrix of E(ϕ|y) will yield the estimate of B [10,11,14,15].…”
Section: Inverse Regression (mentioning)
confidence: 99%
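The inverse-regression recipe quoted above (standardize the regressors, estimate the inverse regression curve E(ϕ|y) by slicing the output range, then take the principal components of the covariance of the slice means) can be sketched in a few lines. The code below is a minimal illustrative implementation, not code from the paper or from the MATLAB package cited further down; the function name sir and the parameters n_slices and n_directions are placeholders introduced here.

```python
# Minimal sketch of sliced inverse regression (SIR), assuming the regressors are
# stacked row-wise in Phi (N x d) and the outputs in y (length N).
import numpy as np

def sir(Phi, y, n_slices=10, n_directions=1):
    """Estimate dimension-reduction directions (columns of B) by SIR."""
    N, d = Phi.shape

    # 1. Standardize the regressors: zero mean, identity covariance.
    mu = Phi.mean(axis=0)
    Sigma = np.cov(Phi, rowvar=False)
    L = np.linalg.cholesky(Sigma)
    Z = np.linalg.solve(L, (Phi - mu).T).T          # whitened regressors

    # 2. Slice the sorted output and average Z within each slice: a crude
    #    estimate of the inverse regression curve E(Z | y).
    order = np.argsort(y)
    M = np.zeros((d, d))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / N) * np.outer(m, m)        # weighted covariance of slice means

    # 3. Principal components of M span the reduced subspace (whitened coordinates).
    eigval, eigvec = np.linalg.eigh(M)
    directions = eigvec[:, ::-1][:, :n_directions]  # largest eigenvalues first

    # 4. Undo the whitening to express the directions for the original regressors.
    B_hat = np.linalg.solve(L.T, directions)
    return B_hat / np.linalg.norm(B_hat, axis=0)
```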
“…The SIR estimator is illustrated for a simple example in Figure 2. The consistency of the SIR estimator is shown in Li [1991] for the case of independent identically distributed regressors and generalized to allow finite dependence in Lyzell and Enqvist [2011]. A MATLAB implementation is freely available via Cook et al [2011].…”
Section: Sliced Inverse Regression (mentioning)
confidence: 99%
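As a rough analogue of the simple illustration mentioned in the citation above (not the paper's Figure 2 example), the sketch below applies the hypothetical sir() function from the earlier snippet to data from a single-index nonlinearity with independent regressors. Note that the paper's contribution is to extend the consistency analysis to dynamically dependent regressors, which this static toy does not exercise.

```python
# Hypothetical usage of the sir() sketch: recover a single projection direction.
import numpy as np

rng = np.random.default_rng(0)
N, d = 2000, 6
Phi = rng.standard_normal((N, d))                         # regressor matrix (toy, i.i.d.)
beta = np.array([1.0, -0.5, 0.0, 0.0, 0.3, 0.0])
beta /= np.linalg.norm(beta)
y = np.tanh(Phi @ beta) + 0.05 * rng.standard_normal(N)   # nonlinearity in one direction plus noise

B_hat = sir(Phi, y, n_slices=20, n_directions=1)
print("alignment |beta^T B_hat| =", abs(beta @ B_hat).round(3))  # close to 1 if the direction is recovered
```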