2020
DOI: 10.48550/arxiv.2003.07937
Preprint

Finite-time Identification of Stable Linear Systems: Optimality of the Least-Squares Estimator

Abstract: We present a new finite-time analysis of the estimation error of the Ordinary Least Squares (OLS) estimator for stable linear time-invariant systems. We characterize the number of observed samples (the length of the observed trajectory) sufficient for the OLS estimator to be (ε, δ)-PAC, i.e., to yield an estimation error less than ε with probability at least 1 − δ. We show that this number matches existing sample complexity lower bounds [1, 2] up to universal multiplicative factors (independent of (ε, δ) and o…
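As a concrete illustration of the setup described in the abstract, the following is a minimal Python sketch (not the authors' code): it simulates a single trajectory of a stable linear system x_{t+1} = A x_t + w_t, forms the ordinary least-squares estimate of A, and empirically checks an (ε, δ)-PAC-style criterion. The dimension, the example matrix A_true, the Gaussian noise model, the horizon T, and the threshold eps are all illustrative choices, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Example stable system matrix (illustrative). By Gershgorin's theorem its
# spectral radius is at most 0.8 < 1, so the system is stable.
A_true = np.array([[0.6, 0.1, 0.0],
                   [0.0, 0.5, 0.2],
                   [0.1, 0.0, 0.7]])

def simulate(A, T, rng):
    """Roll out x_{t+1} = A x_t + w_t with i.i.d. standard Gaussian noise w_t."""
    d = A.shape[0]
    X = np.zeros((T + 1, d))
    for t in range(T):
        X[t + 1] = A @ X[t] + rng.standard_normal(d)
    return X

def ols_estimate(X):
    """OLS estimate of A from one trajectory: minimize sum_t ||x_{t+1} - A x_t||^2."""
    past, future = X[:-1], X[1:]
    # lstsq solves past @ B ~= future in the least-squares sense, so B = A_hat^T.
    B, *_ = np.linalg.lstsq(past, future, rcond=None)
    return B.T

# Empirical check of an (eps, delta)-PAC-style criterion over independent
# trajectories (eps, T, and the number of trials are illustrative values).
eps, T, n_trials = 0.2, 500, 200
errors = [np.linalg.norm(ols_estimate(simulate(A_true, T, rng)) - A_true, 2)
          for _ in range(n_trials)]
print("fraction of trials with operator-norm error < eps:",
      np.mean(np.array(errors) < eps))

Using np.linalg.lstsq rather than explicitly inverting the empirical covariance keeps the sketch numerically well behaved when the covariance matrix is ill conditioned.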

Cited by 2 publications (12 citation statements); References 16 publications.

Citation statements (ordered by relevance):
“…With the definitions at hand, we make the following assumptions for solving Problem I. Assumption 1: Consider the social dynamics (1) with noise g_j ϑ(k) in (18b), vectors in (25) and (26), and matrices in (28)-(30). 1) p(k)…”
Section: B. Assumption (mentioning)
confidence: 99%
“…Due to process and observation noise, one intuitive question pertaining to the accuracy of social-system inference arises: how many observations are sufficient for the inference solution to achieve PAC? To answer this question in the context of system-matrix estimation, significant effort has been devoted over the past three years to the sample complexity of the ordinary least-squares estimator [25]-[29]. We note that the sample-complexity analyses therein rely on the Hanson-Wright inequality [30], which requires zero-mean, unit-variance, sub-gaussian independent coordinates for the noise vectors.…”
Section: Introduction (mentioning)
confidence: 99%
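Since the citation statements above and below appeal to the Hanson-Wright inequality as the key concentration tool, its standard form is recalled here for reference (this statement is not quoted from the cited works; c denotes the usual universal constant):

% Hanson-Wright inequality, standard form.
% X = (X_1, ..., X_n) has independent, mean-zero, sub-gaussian coordinates with
% \max_i \|X_i\|_{\psi_2} \le K; M is an n-by-n real matrix; c > 0 is a universal constant.
\[
\mathbb{P}\Bigl( \bigl| X^\top M X - \mathbb{E}\,[X^\top M X] \bigr| > t \Bigr)
\;\le\;
2 \exp\!\left( - c \,\min\!\left( \frac{t^2}{K^4 \,\lVert M \rVert_F^2},\;
\frac{t}{K^2 \,\lVert M \rVert} \right) \right),
\qquad t \ge 0 .
\]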
“…For distributed and networked dynamical systems, Wiener filtering [14], [15], structural equation models [16], and autoregressive models [17] are employed to estimate network topology, leveraging additional tools from estimation theory [17], adaptive feedback control [18], optimization theory with sparsity constraints [19], and others. In recent years, significant effort has been devoted to sample-complexity bounds for the ordinary least-squares estimator of the system matrix [20]-[25], i.e., upper bounds on the number of observations sufficient to identify the system matrix with prescribed levels of accuracy and confidence (PAC). For example, assuming the process noise vectors are i.i.d. isotropic sub-gaussian with i.i.d. coordinates and the true system matrix is stable, Jedra and Proutiere in [20] showed that the upper bound matches existing sample-complexity lower bounds up to universal multiplicative factors, a result conjectured in [21]; Sarkar and Rakhlin in [22] removed the assumption of a stable system matrix, derived finite-time model error bounds, and demonstrated that the ordinary least-squares solution is statistically inconsistent when the true system matrix is not regular.…”
Section: Introduction (mentioning)
confidence: 99%
“…In recent years, significant effort has been devoted to sample-complexity bounds for the ordinary least-squares estimator of the system matrix [20]-[25], i.e., upper bounds on the number of observations sufficient to identify the system matrix with prescribed levels of accuracy and confidence (PAC). For example, assuming the process noise vectors are i.i.d. isotropic sub-gaussian with i.i.d. coordinates and the true system matrix is stable, Jedra and Proutiere in [20] showed that the upper bound matches existing sample-complexity lower bounds up to universal multiplicative factors, a result conjectured in [21]; Sarkar and Rakhlin in [22] removed the assumption of a stable system matrix, derived finite-time model error bounds, and demonstrated that the ordinary least-squares solution is statistically inconsistent when the true system matrix is not regular. We note that the sample-complexity bounds obtained in [20]-[24] rely on the Hanson-Wright inequality [26], which requires zero-mean, unit-variance, sub-gaussian independent coordinates for the noise vectors.…”
Section: Introduction (mentioning)
confidence: 99%
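For reference, the ordinary least-squares estimator of the system matrix discussed in these citation statements takes the following standard form for a trajectory x_0, ..., x_T of x_{t+1} = A x_t + w_t (a textbook formulation, not quoted from the citing papers):

\[
\hat{A}_T \;=\; \arg\min_{A} \sum_{t=0}^{T-1} \bigl\lVert x_{t+1} - A x_t \bigr\rVert_2^2
\;=\; \Bigl( \sum_{t=0}^{T-1} x_{t+1} x_t^\top \Bigr)
      \Bigl( \sum_{t=0}^{T-1} x_t x_t^\top \Bigr)^{-1},
\]

assuming the empirical covariance \(\sum_t x_t x_t^\top\) is invertible. The (ε, δ)-PAC question studied in the paper is then how large T must be so that \(\lVert \hat{A}_T - A \rVert \le \varepsilon\) holds with probability at least 1 − δ.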