A hyperparameter consensus method for agreement under uncertainty

Automatica, 2012. DOI: 10.1016/j.automatica.2011.11.003

Cited by 14 publications (14 citation statements); references 17 publications. The citation statements below are ordered by relevance.
“…The proof of convergence for each Gaussian element in the joint distribution approximation closely follows in our work, even though our algorithms apply to a larger class of Bayesian filters (e.g., map merging [2]). This generality is shared by the approach of Fraser et al. [5] using hyperparameters. However, our work enables the early termination of the consensus-based algorithm without the risk of 'double-counting' any single observation, even when the maximum in/out degree and the number of robots are unknown.…”
Section: Introduction (citation type: mentioning; confidence: 94%)
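To make the hyperparameter idea referenced above concrete, here is a minimal sketch (not the cited authors' exact algorithm): agents run average consensus on the hyperparameters of a conjugate prior (Beta pseudo-counts here), and rescaling the agreed mean by the number of agents recovers the centralized sum of counts, i.e., the pooled Bayesian posterior. The graph, step size, and all numbers are illustrative assumptions.

```python
import numpy as np

def average_consensus(values, adjacency, eps=0.2, iters=200):
    """Discrete-time average consensus: x_i += eps * sum_j a_ij (x_j - x_i)."""
    x = np.array(values, dtype=float)
    degree = adjacency.sum(axis=1)
    for _ in range(iters):
        x = x + eps * (adjacency @ x - degree[:, None] * x)
    return x

# Three agents on a line graph, each holding Beta(alpha_i, beta_i)
# hyperparameters summarizing independent Bernoulli observations of the
# same quantity (pseudo-counts are illustrative).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
hyper = [[3.0, 1.0],   # agent 0: 3 "heads", 1 "tail"
         [2.0, 2.0],   # agent 1
         [1.0, 3.0]]   # agent 2

mean_hyper = average_consensus(hyper, A)
N = A.shape[0]
fused = N * mean_hyper  # rescale: consensus mean -> centralized sum of counts
print(fused)            # every row approaches [6., 6.], the pooled Beta(6, 6)
```

The rescaling step is what distinguishes hyperparameter consensus from plain averaging: the consensus limit is the network mean of the pseudo-counts, while Bayesian fusion of independent observations needs their sum.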
“…In particular, [12] generates information-theoretically optimal weights for the LogOP scheme. Combining probability distributions within the exponential family (i.e., probability distributions that can be expressed as exponential functions) is discussed in [13,14]. In the distributed estimation algorithm presented in [16] as well as in our prior work [17,18], the agents combine their local posterior probability distributions using the consensus algorithm, where the multiple consensus loops within each time step are executed much faster than the original time steps of the Bayesian filter.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
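As a hedged illustration of the logarithmic opinion pool (LogOP) mentioned in the statement above: for exponential-family densities, the weighted geometric pool p(x) ∝ ∏_i p_i(x)^{w_i} with ∑_i w_i = 1 reduces to a weighted sum of natural parameters, which for Gaussians is the information form. The function name, weights, and values below are assumptions for illustration, not the cited papers' code.

```python
import numpy as np

def logop_gaussian(mus, covs, weights):
    """Combine Gaussian opinions N(mu_i, P_i) with LogOP.

    Natural parameters: information matrix Y_i = inv(P_i) and
    information vector y_i = inv(P_i) @ mu_i; LogOP sums them
    with the given weights. Returns (pooled mean, pooled covariance).
    """
    Y = sum(w * np.linalg.inv(P) for w, P in zip(weights, covs))
    y = sum(w * np.linalg.inv(P) @ m for w, m, P in zip(weights, mus, covs))
    P = np.linalg.inv(Y)
    return P @ y, P

# Two 1-D Gaussian opinions with equal weights (illustrative values).
mus = [np.array([0.0]), np.array([2.0])]
covs = [np.eye(1) * 1.0, np.eye(1) * 4.0]
mu, P = logop_gaussian(mus, covs, weights=[0.5, 0.5])
print(mu, P)  # pooled mean lies between 0 and 2, closer to the sharper opinion
```

Because LogOP is linear in the natural parameters, running consensus on those parameters (as the statement describes for the algorithms in [16-18]) is equivalent to computing a weighted LogOP combination across the network.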
“…The third category of algorithms estimates the posterior probability distribution of the states of the target [11,12,13,14,15,16,17,18]. This category forms the most general class of distributed estimation algorithms because these algorithms can be used for estimation over continuous domains, and can incorporate nonlinear target dynamics, heterogeneous nonlinear measurement models, and non-Gaussian uncertainties.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…Many distributed estimation algorithms use the notion of consensus to estimate the parameters/states of interest (e.g., [9]–[16]). In a typical consensus algorithm, an agent attempts to reach an agreement with its neighbors by performing a sequential update that brings its estimate closer to the states/parameters of (a subset of) all of its neighbors.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
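A minimal sketch of the "typical consensus algorithm" just described, under assumed notation: a randomly chosen agent repeatedly nudges its estimate toward its neighbors' values until the network agrees. The graph, step size, and update schedule are illustrative assumptions.

```python
import random

neighbors = {0: [1], 1: [0, 2], 2: [1]}   # line graph on 3 agents
x = {0: 1.0, 1: 4.0, 2: 10.0}             # initial local estimates
eps = 0.3                                  # step size (small enough for stability)

random.seed(0)
for _ in range(500):
    i = random.choice(list(x))             # pick one agent to update
    # sequential update: move x_i toward its neighbors' values
    x[i] += eps * sum(x[j] - x[i] for j in neighbors[i])

print(x)  # all estimates cluster near a common agreement value
```

With an asynchronous schedule like this, the agents agree on some value in the convex hull of the initial estimates; synchronous symmetric updates (as in the averaging sketch earlier) converge to the exact network average.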