2010
DOI: 10.1007/978-3-642-17452-0_10
The Bayes Tree: An Algorithmic Foundation for Probabilistic Robot Mapping

Abstract: We present a novel data structure, the Bayes tree, that provides an algorithmic foundation enabling a better understanding of existing graphical model inference algorithms and their connection to sparse matrix factorization methods. Similar to a clique tree, a Bayes tree encodes a factored probability density, but unlike the clique tree it is directed and maps more naturally to the square root information matrix of the simultaneous localization and mapping (SLAM) problem. In this paper, we highlight three insi…

Cited by 80 publications (90 citation statements) | References 22 publications
“…The complete Gauss-Newton method consists of iteratively applying equations (7), (11), (12), (10), and (8), in that order, until some stopping criterion is satisfied.…”
Section: A. The Gauss-Newton Method
confidence: 99%
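The quoted statement describes iterating the Gauss-Newton steps until a stopping criterion is met; the cited equations (7)–(12) are not reproduced on this page. As a minimal sketch of the generic iteration (not the cited paper's exact formulation; all names are illustrative):

```python
# Generic Gauss-Newton iteration: minimize 0.5 * ||r(x)||^2 by repeatedly
# linearizing r about the current estimate and solving the resulting
# linear least-squares problem. Illustrative only; not the cited equations.
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        # Solve J dx ~= -r in the least-squares sense (equivalent to the
        # normal equations J^T J dx = -J^T r, but more numerically robust).
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:  # stopping criterion on the update size
            break
    return x

# Example: estimate a in y = exp(a * t) from noiseless samples.
t = np.array([0.0, 1.0, 2.0])
y = np.exp(0.5 * t)
res = lambda a: np.exp(a[0] * t) - y
jac = lambda a: (t * np.exp(a[0] * t)).reshape(-1, 1)
a_hat = gauss_newton(res, jac, [0.0])
```

Because the example has zero residual at the optimum, the iteration converges rapidly to `a = 0.5` from the initial guess `a = 0`.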
“…In that case, the use of the Bayes Tree [8] and fluid relinearization enables the system to be efficiently relinearized at every timestep by applying direct updates only to those (few) rows of the factor R that are modified when relinearization occurs. One obtains the corresponding RISE2 algorithm by replacing lines 4 to 13 (inclusive) of Algorithm 4 with a different incremental update procedure for ḡ: writing R on the right-hand side of equation (29) as a stack of (sparse) row vectors, and then using knowledge of how the rows of R and the elements of d have been modified in the current timestep, enables the computation of an efficient incremental update to ḡ.…”
Section: RISE: Incrementalizing Powell's Dog-leg
confidence: 99%
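The cited Algorithm 4 and equation (29) are not reproduced on this page, but the underlying linear-algebra idea in the quote can be sketched under the assumption that the vector being maintained has the form g = Rᵀd: since g = Σᵢ d[i]·R[i, :], only the rows of R (and entries of d) touched by relinearization contribute a change. All names below are illustrative, not the cited paper's API:

```python
# Hedged sketch: incrementally update g = R^T d when only a few rows of R
# (and the corresponding entries of d) change, instead of recomputing R^T d
# from scratch. Illustrative of the quoted idea, not the RISE2 algorithm.
import numpy as np

def incremental_g_update(g, R_old, d_old, R_new, d_new, modified):
    """In-place update of g = R^T d given the set of modified row indices.

    g_new = g_old + sum_{i in modified} (d_new[i]*R_new[i,:] - d_old[i]*R_old[i,:])
    """
    for i in modified:
        g += d_new[i] * R_new[i] - d_old[i] * R_old[i]
    return g

# Check against a full recomputation on random upper-triangular data.
rng = np.random.default_rng(0)
n = 6
R_old = np.triu(rng.standard_normal((n, n)))
d_old = rng.standard_normal(n)
g = R_old.T @ d_old

R_new = R_old.copy()
d_new = d_old.copy()
modified = [1, 4]  # rows touched by relinearization in this timestep
for i in modified:
    R_new[i, i:] = rng.standard_normal(n - i)  # keep R upper triangular
    d_new[i] = rng.standard_normal()

g = incremental_g_update(g, R_old, d_old, R_new, d_new, modified)
assert np.allclose(g, R_new.T @ d_new)  # matches the full recomputation
```

The cost of the update scales with the number of modified rows rather than the full problem size, which is the efficiency the quote describes.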
“…Given more powerful measurement models, as in Chapter 3, the user can be more optimistic when introducing measurement information. We stress that changing to non-Gaussian posteriors does not preclude use of the Bayes (Junction) tree [113], but instead actually casts the Bayes tree as a general framework for reducing computational complexity, regardless of the form and shape of likelihood functions used to assemble the factor graph.…”
Section: Multi-modality: Displacing Assumptions
confidence: 99%
“…The Bayes tree data structure [37] can be considered as an intermediate representation between the Cholesky factor and a junction tree. While not obvious in the matrix formulation, the Bayes tree allows a fully incremental algorithm, with incremental variable reordering and fluid relinearization.…”
Section: Pose Graph Optimization Using Smoothing and Mapping
confidence: 99%