2019
DOI: 10.48550/arxiv.1905.07357
Preprint

Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces

Abstract: In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) (Kalman et al., 1960) have been integrated with deep learning models; however, such approaches typically rely on approximate inference techniques such as variational inference, which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approxim…
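
To make the factorized idea concrete, the following is a minimal sketch of a Kalman filter whose latent covariance is assumed diagonal, so the gain and update reduce to elementwise scalar operations and no matrix inversion is needed. The helper names, the diagonal transition, and the identity observation model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def predict(mean, var, a, q):
    """Propagate a diagonal-covariance latent state one step.

    mean, var : per-dimension latent mean and variance, shape (d,)
    a         : per-dimension (diagonal) transition coefficients
    q         : per-dimension process-noise variance
    """
    return a * mean, a**2 * var + q

def update(mean, var, obs, r):
    """Kalman update with diagonal covariances and identity observation model.

    Because every covariance is diagonal, the gain is an elementwise
    ratio; the whole update is scalar arithmetic per latent dimension.
    """
    gain = var / (var + r)                 # scalar Kalman gain per dimension
    new_mean = mean + gain * (obs - mean)
    new_var = (1.0 - gain) * var
    return new_mean, new_var

# Toy usage: filter a short sequence of noisy d-dimensional observations.
d = 4
mean, var = np.zeros(d), np.ones(d)
a, q, r = 0.9 * np.ones(d), 0.01 * np.ones(d), 0.1 * np.ones(d)
for obs in np.random.randn(10, d):
    mean, var = predict(mean, var, a, q)
    mean, var = update(mean, var, obs, r)
```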

Cited by 6 publications (10 citation statements)
References 15 publications
“…Conceptually, training an RNN is not ideal, as the recurrent units are, to some extent, learning the dynamics of the system again. A more elegant solution would be to use a CNN to extract the relevant features and then a probabilistic filter similar to [20]. However, this is beyond the scope of the paper.…”
Section: Methods
confidence: 99%
“…This is counterintuitive, as during prediction there are no measurements available. To circumvent this issue, we follow [20] and use Boolean values to distinguish between the filtering and prediction phases, and the measurements are artificially set to zero during the prediction phase. It is important to note that the RNN that we use for filtering in our method does not need this additional Boolean input, as the RNN is not used for prediction.…”
Section: Methods
confidence: 99%
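
As a concrete illustration of this masking scheme, here is a minimal sketch in which a per-step Boolean flag marks whether a real measurement is available: the measurement is zeroed and the update skipped during the prediction phase. The function name, the scalar dynamics, and the hard gating are assumptions for illustration, not the cited paper's code.

```python
import numpy as np

def filter_with_flags(obs_seq, valid, a=0.9, q=0.01, r=0.1):
    """Filter a 1-D sequence, gating updates with per-step Boolean flags.

    obs_seq : (T,) measurements; ignored where valid[t] is False
    valid   : (T,) Booleans, True = filtering phase, False = prediction phase
    """
    mean, var = 0.0, 1.0
    out = []
    for y, flag in zip(obs_seq, valid):
        mean, var = a * mean, a * a * var + q   # predict step always runs
        y = y if flag else 0.0                  # measurements zeroed while predicting
        if flag:                                # update only on real measurements
            gain = var / (var + r)
            mean, var = mean + gain * (y - mean), (1.0 - gain) * var
        out.append(mean)
    return np.array(out)

# First 8 steps filter real observations, the last 4 purely predict.
t = np.arange(12)
obs = np.sin(0.3 * t) + 0.1 * np.random.randn(12)
trajectory = filter_with_flags(obs, t < 8)
```

In the learned setting described in the quote, the flag would presumably be an additional network input rather than a hard gate, letting the model decide how to use it.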
“…The main contribution of our approach is to increase the robustness of the initial hidden state value, and, owing to a specific network structure, we can disentangle the representation into two components: static features and dynamics. Compared with state-of-the-art video prediction methods such as DDPAE [17], MCnet [40] and DeepRNN [33] as baselines, we evaluate the effectiveness of our model on two datasets, namely the Bouncing Balls dataset [12] and the Pendulum dataset [5]. The specific prediction process and structure can be seen in Figs.…”
Section: Experiments Settings
confidence: 99%
“…Becker et al. [14] proposed a Kalman filter network (KFN) in the latent space, which takes auto-encoded high-dimensional input data; the filtering process is simplified by assuming diagonal covariance matrices, which avoids matrix inversion calculations. In contrast to KFN, D-LfD benefits from a CNN, an inherently more powerful feature extractor, without any simplifying assumptions, while the robot kinematic and calibration data are also fed to the network in latent space.…”
Section: Introductionmentioning
confidence: 99%
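
To see what the diagonal-covariance assumption removes, compare a standard Kalman update, which must solve a linear system in the innovation covariance, with the elementwise diagonal form. Shapes and names here are illustrative assumptions, not code from either cited paper.

```python
import numpy as np

def kf_update_full(mean, cov, obs, H, R):
    """Standard Kalman update: requires solving an (m x m) linear system
    in the innovation covariance S -- the step a diagonal assumption avoids."""
    S = H @ cov @ H.T + R                        # innovation covariance
    K = np.linalg.solve(S.T, (cov @ H.T).T).T    # gain K = cov H^T S^{-1}
    mean = mean + K @ (obs - H @ mean)
    cov = cov - K @ H @ cov
    return mean, cov

def kf_update_diag(mean, var, obs, r):
    """Diagonal-covariance update (identity observation model assumed):
    the gain is an elementwise ratio, so no linear solve is needed."""
    gain = var / (var + r)
    return mean + gain * (obs - mean), (1.0 - gain) * var

# Toy usage of the full update with a 4-D state and 2-D observation.
n, m = 4, 2
mean, cov = np.zeros(n), np.eye(n)
H, R = np.random.randn(m, n), 0.1 * np.eye(m)
mean, cov = kf_update_full(mean, cov, np.random.randn(m), H, R)
```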