2020
DOI: 10.1609/aaai.v34i04.5952
Particle Filter Recurrent Neural Networks

Abstract: Recurrent neural networks (RNNs) have been extraordinarily successful for prediction with sequential data. To tackle highly variable and multi-modal real-world data, we introduce Particle Filter Recurrent Neural Networks (PF-RNNs), a new RNN family that explicitly models uncertainty in its internal structure: while an RNN relies on a long, deterministic latent state vector, a PF-RNN maintains a latent state distribution, approximated as a set of particles. For effective learning, we provide a fully differentia…
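The abstract describes the core idea: instead of a single deterministic hidden state, a PF-RNN carries a set of weighted particles that are propagated, reweighted by an observation model, and resampled. Below is a minimal, hedged sketch of one such belief-update step. The function name `pf_rnn_step`, its `transition`/`obs_likelihood` interfaces, and the soft-resampling mixture weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pf_rnn_step(particles, weights, transition, obs_likelihood, obs,
                alpha=0.5, rng=None):
    """One hypothetical particle-belief update in the spirit of PF-RNNs:
    propagate each particle, reweight by an observation likelihood, then
    soft-resample (sample from a mixture of the weights and a uniform
    distribution, with an importance correction) so gradients can flow
    through the resampling step."""
    rng = np.random.default_rng() if rng is None else rng
    K = len(weights)
    # Propagate particles through a (possibly stochastic) transition model.
    particles = np.array([transition(p) for p in particles])
    # Reweight by how well each particle explains the observation.
    weights = weights * np.array([obs_likelihood(p, obs) for p in particles])
    weights = weights / weights.sum()
    # Soft resampling: draw indices from a mixture distribution q.
    q = alpha * weights + (1.0 - alpha) / K
    idx = rng.choice(K, size=K, p=q)
    new_weights = weights[idx] / q[idx]  # importance-weight correction
    return particles[idx], new_weights / new_weights.sum()
```

In a learned model the transition and observation-likelihood functions would be neural networks; here plain callables stand in for them.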

Cited by 56 publications (46 citation statements)
References 14 publications
“…This shows that the ability to detect input values outside of the training distribution would be a valuable addition to current DFs. Finally, it would be interesting to compare learning in DFs to similar variational methods such as the ones introduced by Karl et al (2017); Fraccaro et al (2017); Le et al (2018) or the model-free PF-RNNs introduced by Ma et al (2020).…”
Section: Discussion (mentioning)
confidence: 99%
“…Integrating algorithmic structure into learning methods has been studied for many robotic problems, including state estimation (Haarnoja et al 2016;Jonschkowski and Brock 2016;Jonschkowski et al 2018;Karkus et al 2018;Ma et al 2020), planning (Tamar et al 2016;Karkus et al 2017;Oh et al 2017;Farquhar et al 2018;Guez et al 2018) and control (Donti et al 2017;Okada et al 2017;Amos et al 2018;Pereira et al 2018;Holl et al 2020). Most notably, Karkus et al (2019) combine multiple differentiable algorithms into an end-to-end trainable "Differentiable Algorithm Network" to address the complete task of navigating to a goal in a previously unseen environment using visual observations.…”
Section: Combining Learning and Algorithms (mentioning)
confidence: 99%
“…The art of improving the performance of any deep-learning framework is a process of iterated refinements. Currently, there is no single ideal framework that addresses the discontinuous, impulsive and irregular patterns of behaviour associated with irregular-patterned complex sequential datasets [19,42]. These extreme datasets can be found in many different domains, including: health care, traffic, finance, such as stock prices, meteorology, such as rainfall data and so forth.…”
Section: Introduction (mentioning)
confidence: 99%
“…In the non-convex environment, the RAD-A2C completed greater than 95% of episodes over a range of obstructions and SNRs. There was very little gradient information available in the environments with more obstructions and thus the GS algorithm … The PFGRU is an embedding of the BPF into a GRU architecture proposed by Ma et al [26]. As in the BPF, there are a set of particles and weights used for filtering and…”
(mentioning)
confidence: 99%
“…The first component is the mean squared loss between the mean particle and the predicted quantity. The second component is the evidence lower bound (ELBO) loss that measures the difference in distribution of the particle distribution relative to the observation likelihood, for more details see [26]. The total loss is expressed as…”
(mentioning)
confidence: 99%
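The quoted passage describes a two-part training loss: a mean squared error between the (weighted) mean particle and the prediction target, plus an ELBO-style term tied to the observation likelihood of the particles. A hedged sketch of how such a combined loss might look is below; `pfgru_loss`, its arguments, and the specific ELBO surrogate (negative expected observation log-likelihood under the particle weights) are illustrative assumptions, not the exact formulation in [26].

```python
import numpy as np

def pfgru_loss(particles, weights, target, log_obs_lik, beta=1.0):
    """Hypothetical two-part loss in the spirit of the passage above:
    an MSE term on the weighted mean particle, plus an ELBO-style term
    that penalizes low observation log-likelihood across the particle
    distribution, weighted by a trade-off coefficient beta."""
    # Weighted mean of the particle set, shape (d,).
    mean_particle = np.einsum('k,kd->d', weights, particles)
    mse = np.mean((mean_particle - target) ** 2)
    # ELBO-style surrogate: negative expected observation log-likelihood.
    elbo = -np.sum(weights * log_obs_lik)
    return mse + beta * elbo
```

In practice both terms would be computed on network outputs and backpropagated jointly; `beta` controls how strongly the likelihood term regularizes the particle distribution.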