2018
DOI: 10.48550/arxiv.1802.03981
Preprint
Spectral Filtering for General Linear Dynamical Systems

Abstract: We give a polynomial-time algorithm for learning latent-state linear dynamical systems without system identification, and without assumptions on the spectral radius of the system's transition matrix. The algorithm extends the recently introduced technique of spectral filtering, previously applied only to systems with a symmetric transition matrix, using a novel convex relaxation to allow for the efficient identification of phases.
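The core object behind spectral filtering is a fixed Hankel matrix whose top eigenvectors serve as convolutional filters over the input history. The sketch below illustrates that precomputation step, assuming the Hankel matrix Z_T with entries Z_ij = 2 / ((i+j)³ − (i+j)) from the original (symmetric-case) spectral filtering work; function names here are illustrative, not the paper's code.

```python
import numpy as np

def hankel_filters(T, k):
    """Return the top-k eigenvalues/eigenvectors of the fixed
    Hankel matrix Z_T with entries Z_ij = 2 / ((i+j)^3 - (i+j)),
    1-indexed.  The filters depend only on the horizon T, not on
    the data, so they can be precomputed once."""
    i = np.arange(1, T + 1)
    s = i[:, None] + i[None, :]            # i + j, shape (T, T)
    Z = 2.0 / (s ** 3 - s)
    vals, vecs = np.linalg.eigh(Z)          # eigenvalues ascending
    return vals[-k:][::-1], vecs[:, -k:][:, ::-1]

def filtered_features(inputs, filters):
    """Project the input history onto each filter: the spectral
    features are inner products of past inputs with each filter.

    inputs : (T, d) array of past inputs, most recent last
    filters: (T, k) array of precomputed eigenvector filters
    """
    return filters.T @ inputs               # shape (k, d)
```

Because Z_T is positive semidefinite with rapidly decaying spectrum, a small k captures almost all of its mass, which is what makes the parametrisation compact.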

Cited by 12 publications (16 citation statements)
References 11 publications
“…[25] proved a non-asymptotic guarantee for the identification of partially observed linear dynamics of unknown order. [11] and [12] proposed spectral filtering methods that predict with sublinear regret the next output of unknown partially observed linear systems. When actuation is available Wagenmaker and Jamieson [34] showed that active learning can be used for a faster identification of linear dynamics.…”
Section: Related Work
mentioning confidence: 99%
“…where ξ_t is the noise term shown in (12). Note that the covariance of vec(x_t x_{t+1}^⊤) is the identity matrix, whose minimum eigenvalue is one.…”
Section: A Proof of Bound 6a of Theorem
mentioning confidence: 99%
“…As has been suggested in the previous section, we consider the problem of predicting ŷ_{k+1} in the Huber-like extension of LDS (2), under several assumptions, starting with the identifiability of Hazan et al. [18]: Assumption 1. The outputs are generated by the stochastic difference equation (2), assuming:…”
Section: The Problem
mentioning confidence: 99%
“…Throughout both Algorithms 1 and 2, we predict the next output ŷ_t of the system from inputs X_t up to time t and outputs Y_{t−1} up to time t−1, in an online fashion. There, leading methods [19,18,28,17,30] consider an overparametrisation, where the vector X̃_t is composed of the inputs to the system at all time levels up to the current one, convolved with the eigenvectors of a certain Hankel matrix, as well as the outputs at the previous time level and the inputs at the current and previous time levels. Notice that the Hankel matrix is constant and its eigenvectors can be precomputed.…”
Section: The Algorithms
mentioning confidence: 99%
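The overparametrisation described in the quote above — spectral features from filtered past inputs, concatenated with the previous output and the current and previous inputs, fed to a linear predictor — can be sketched as follows. This is a minimal illustration under assumed shapes; the helper names are hypothetical, not the cited papers' API.

```python
import numpy as np

def build_regressor(X_hist, y_prev, filters):
    """Assemble the overparametrised regressor X~_t: past inputs
    convolved with the precomputed Hankel-eigenvector filters,
    plus the previous output and the current and previous inputs.

    X_hist : (T, d) past inputs, most recent last
    y_prev : (m,)   output at the previous time level
    filters: (T, k) precomputed Hankel eigenvectors
    """
    spectral = (filters.T @ X_hist).ravel()        # (k*d,) features
    return np.concatenate([spectral, y_prev, X_hist[-1], X_hist[-2]])

def predict_next(theta, features):
    """One step of online linear prediction: y_hat = theta @ X~_t.
    theta would be updated by the learner's online rule (e.g. online
    gradient descent) after each true output is revealed."""
    return theta @ features
```

The point of the construction is that prediction is linear in the features even though the underlying latent-state dynamics are not directly identified.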