2020
DOI: 10.48550/arxiv.2011.10006
Preprint

Improved rates for prediction and identification of partially observed linear dynamical systems

Abstract: Identification of a linear time-invariant dynamical system from partial observations is a fundamental problem in control theory. A natural question is how to do so with non-asymptotic statistical rates depending on the inherent dimensionality (order) d of the system, rather than on the sufficient rollout length or on 1/(1 − ρ(A)), where ρ(A) is the spectral radius of the dynamics matrix. We develop the first algorithm that, given a single trajectory of length T with Gaussian observation noise, achieves a near-opti…
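As a concrete illustration of the setting described in the abstract, a single trajectory of a partially observed LTI system with Gaussian observation noise can be simulated as follows. All dimensions and matrices below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative setup: state order d, observation dimension m, rollout length T.
rng = np.random.default_rng(0)
d, m, T = 4, 2, 500

# Random dynamics matrix A, rescaled so the spectral radius rho(A) < 1.
A = rng.standard_normal((d, d))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))
C = rng.standard_normal((m, d))  # observation matrix

# Roll out one trajectory: x_{t+1} = A x_t + w_t, y_t = C x_t + v_t.
x = np.zeros(d)
ys = []
for _ in range(T):
    x = A @ x + rng.standard_normal(d)        # process noise
    ys.append(C @ x + rng.standard_normal(m))  # Gaussian observation noise
Y = np.array(ys)  # observed trajectory, shape (T, m)
```

The identification problem the paper studies is to recover the system (up to similarity) from `Y` alone, with rates depending on d rather than on 1/(1 − ρ(A)).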

Cited by 3 publications (6 citation statements)
References 12 publications
“…With the advances in high-dimensional statistics [6], there has been a recent shift from asymptotic analysis with infinite data to statistical analysis of system identification with finite samples. Over the past two years there have been significant advances in understanding finite sample system identification for both fully-observed systems [7][8][9][10][11][12][13][14] as well as partially-observed systems [15][16][17][18][19][20][21][22][23][24]. A tutorial can be found in [25].…”
Section: Introductionmentioning
confidence: 99%
“…Theorem 6 shows that the stochastic Ho-Kalman Algorithm has the same error bounds as its deterministic counterpart, which says the estimation errors for the system matrices decrease as fast as O(1/N^{1/4}). Our analysis framework can be easily extended to achieve the optimal error bound O(1/√N) mentioned in [33,8,18,16].…”
Section: Resultsmentioning
confidence: 99%
“…This result can be improved to O(1/√N) from [8,18,16]. Note that the computational complexity of the Ho-Kalman Algorithm in Alg.
Section: System Realization Via Noisy Markov Parameter Gmentioning
confidence: 98%