2017
DOI: 10.48550/arxiv.1705.09353
Preprint

Predictive State Recurrent Neural Networks

Abstract: We present a new model, Predictive State Recurrent Neural Networks (PSRNNs), for filtering and prediction in dynamical systems. PSRNNs draw on insights from both Recurrent Neural Networks (RNNs) and Predictive State Representations (PSRs), and inherit advantages from both types of models. Like many successful RNN architectures, PSRNNs use (potentially deeply composed) bilinear transfer functions to combine information from multiple sources. We show that such bilinear functions arise naturally from state update…
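The bilinear transfer function mentioned in the abstract can be illustrated with a minimal sketch: each filtering step contracts a three-mode parameter tensor with the current state and the current observation features, then renormalizes the result. The tensor name W, the shapes, and the unit-norm renormalization below are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def bilinear_update(W, q, obs_feat):
    """One bilinear filtering step: the next state is a bilinear function
    of the previous state q and the observation features obs_feat.

    W has shape (state_dim, state_dim, obs_dim). The renormalization here
    is a simple unit-norm rescaling, used only to keep the sketch stable;
    it is an assumption, not necessarily the paper's normalization.
    """
    # Contract the 3-mode tensor with the state and the observation features.
    q_next = np.einsum('ijk,j,k->i', W, q, obs_feat)
    # Rescale so the state stays well conditioned over long sequences.
    return q_next / (np.linalg.norm(q_next) + 1e-12)

# Toy usage with random parameters (purely illustrative).
rng = np.random.default_rng(0)
state_dim, obs_dim = 5, 3
W = rng.normal(size=(state_dim, state_dim, obs_dim))
q = rng.normal(size=state_dim)
for obs in rng.normal(size=(10, obs_dim)):
    q = bilinear_update(W, q, obs)
print(q)
```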

Cited by 6 publications (1 citation statement)
References 14 publications
“…A nonlinear extension of PSR has been proposed for deterministic controlled dynamical systems in [Rudary and Singh, 2004]. More recently, building upon the reproducing kernel Hilbert space embedding of PSR [Boots et al., 2013], non-linearity is introduced into PSR using recurrent neural networks [Downey et al., 2017, Venkatraman et al., 2017]. One of the main differences with these approaches is that our learning algorithm does not rely on back-propagation through time and we instead investigate how the spectral learning method for WFA can be beneficially extended to the nonlinear setting.…”
Section: Introduction (mentioning)
confidence: 99%