2020
DOI: 10.1177/0278364920937608
Exactly sparse Gaussian variational inference with application to derivative-free batch nonlinear state estimation

Abstract: We present a Gaussian variational inference (GVI) technique that can be applied to large-scale nonlinear batch state estimation problems. The main contribution is to show how to fit both the mean and (inverse) covariance of a Gaussian to the posterior efficiently, by exploiting factorization of the joint likelihood of the state and data, as is common in practical problems. This is different from maximum a posteriori (MAP) estimation, which seeks the point estimate for the state that maximizes the posterior (i.e., the mode). …
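As a sketch of the setup the abstract describes (a hedged reconstruction; notation assumed: Gaussian estimate q(x) = N(μ, Σ) and joint likelihood p(x, z)), fitting both the mean and inverse covariance amounts to minimizing the KL divergence from the posterior over all Gaussians, which is equivalent, up to terms constant in q, to minimizing

```latex
V(q) = \mathbb{E}_q\!\left[\phi(\mathbf{x})\right]
       + \frac{1}{2}\ln\left|\boldsymbol{\Sigma}^{-1}\right|,
\qquad
\phi(\mathbf{x}) = -\ln p(\mathbf{x}, \mathbf{z}).
```

When φ factors into terms that each involve only a small subset of the state, as the abstract assumes of practical problems, the optimal inverse covariance Σ⁻¹ inherits a matching sparsity pattern, which is what makes the batch problem tractable at scale.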

Cited by 18 publications (26 citation statements). References 55 publications.
“…Our approach is based on the Exactly Sparse Gaussian Variational Inference (ESGVI) parameter learning framework of Barfoot et al [8], a nonlinear batch state estimation framework that provides a family of scalable estimators from a variational objective. Model parameters can be optimized jointly with the state using a data likelihood objective.…”
Section: GPS Camera (mentioning; confidence: 99%)
“…Barfoot et al [8] presented the ESGVI framework and showed that model parameters can be jointly optimized along with the state using an Expectation-Maximization (EM) iterative optimization scheme on a data likelihood objective. In the E-step, model parameters are held fixed, and the state is optimized.…”
Section: Related Work (mentioning; confidence: 99%)
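To make the described EM scheme concrete, below is a minimal, hypothetical Python sketch on a toy scalar problem, not the ESGVI implementation itself: the model parameter is a measurement-noise variance R, the E-step holds R fixed and computes the Gaussian posterior over the state, and the M-step re-estimates R from the expected data likelihood under that posterior. The toy model and all names are assumptions for illustration.

```python
# Toy EM loop in the spirit of the scheme described above (illustration only):
# state x ~ N(0, P), measurements z_i = x + noise with variance R (unknown).
import numpy as np

rng = np.random.default_rng(0)
P, R_true = 4.0, 0.25                                    # prior variance, true noise variance
x_true = rng.normal(0.0, np.sqrt(P))                     # latent state drawn from the prior
z = x_true + rng.normal(0.0, np.sqrt(R_true), size=50)   # noisy measurements of x

R = 1.0  # initial guess for the model parameter (measurement-noise variance)
for _ in range(20):
    # E-step: with R fixed, the Gaussian posterior q(x) = N(mu, sigma2)
    # is available in closed form for this linear toy model.
    sigma2 = 1.0 / (1.0 / P + len(z) / R)
    mu = sigma2 * z.sum() / R
    # M-step: with q(x) fixed, maximize the expected log-likelihood over R;
    # E_q[(z_i - x)^2] = (z_i - mu)^2 + sigma2, averaged over measurements.
    R = np.mean((z - mu) ** 2) + sigma2

print(f"learned R = {R:.3f} (true {R_true}), state mean = {mu:.3f} (true x = {x_true:.3f})")
```

The structural point mirrored here is that each step optimizes one block (the state distribution or the model parameters) while holding the other fixed, with both steps driven by the same data-likelihood objective.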