2015 American Control Conference (ACC)
DOI: 10.1109/acc.2015.7172262
Nonasymptotic convergence rates for cooperative learning over time-varying directed graphs

Abstract: We study the problem of distributed hypothesis testing with a network of agents where some agents repeatedly gain access to information about the correct hypothesis. The group objective is to globally agree on a joint hypothesis that best describes the observed data at all the nodes. We assume that the agents can interact with their neighbors in an unknown sequence of time-varying directed graphs. Following the pioneering work of Jadbabaie, Molavi, Sandroni, and Tahbaz-Salehi, we propose local learning dynamics…
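As a rough illustration of the learning dynamics the abstract refers to, the sketch below implements one round of a log-linear (geometric-averaging) belief update of the kind this line of work builds on: each agent geometrically averages its neighbors' beliefs according to a stochastic weight matrix and then performs a Bayesian update with its fresh private observation. The weight matrix, the likelihood model, and the function name `update_beliefs` are illustrative assumptions, not the paper's exact dynamics.

```python
import numpy as np

def update_beliefs(beliefs, A, log_likelihoods):
    """One round of an (assumed) log-linear cooperative learning update.

    beliefs:         (n_agents, n_hypotheses) array, each row a probability vector
    A:               (n_agents, n_agents) row-stochastic weight matrix of the
                     current directed communication graph (A[i, j] > 0 iff
                     agent i hears from agent j)
    log_likelihoods: (n_agents, n_hypotheses) array, log f_i(s_{i,t+1} | theta)
                     for each agent's new private observation
    """
    # Geometric averaging of neighbors' beliefs = arithmetic averaging in log space.
    log_mix = A @ np.log(beliefs)
    # Local Bayesian update with the new observation.
    log_post = log_mix + log_likelihoods
    # Normalize each row back to a probability vector (shift for numerical stability).
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)

# Example: 3 agents, 2 hypotheses, complete graph with uniform weights.
beliefs = np.full((3, 2), 0.5)
A = np.full((3, 3), 1.0 / 3.0)
log_lik = np.log(np.array([[0.6, 0.4], [0.7, 0.3], [0.5, 0.5]]))
beliefs = update_beliefs(beliefs, A, log_lik)
```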


Cited by 61 publications (91 citation statements)
References 29 publications
“…This implies that under Assumption 4, Theorem 3 provides a tighter asymptotic rate than those in [32] and [31]. Hence, Theorem 3 strengthens Theorem 2 by extending the large deviation analysis to a larger class of distributions and by capturing the complete effect of the nodes' influence in the network and the local observation statistics.…”
Section: Example 1 (Gaussian Distribution and Mixtures)
confidence: 56%
“…Geometric averaging and logarithmic opinion pools have a long history in Bayesian analysis and behavioral decision models [40], [41], and they can also be justified under specific behavioral assumptions [42]. They are also quite popular as a non-Bayesian update rule in the distributed detection and estimation literature [43], [44], [10], [14], [45]. In [45] the authors use a logarithmic opinion pool to combine the estimated posterior probability distributions in a Bayesian consensus filter, and show that, as a result, the sum of Kullback-Leibler divergences between the consensual probability distribution and the local posterior probability distributions is minimized.…”
Section: Discussion
confidence: 99%
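To make the logarithmic opinion pool mentioned in this citation concrete, the sketch below pools local posteriors by weighted geometric averaging; the uniform weights and the function name `log_opinion_pool` are assumptions for illustration, not the exact rule used in [45].

```python
import numpy as np

def log_opinion_pool(posteriors, weights):
    """Weighted logarithmic opinion pool (illustrative sketch).

    posteriors: (n_agents, n_hypotheses) array of local posterior pmfs
    weights:    (n_agents,) nonnegative weights summing to 1

    The pooled pmf is proportional to prod_i p_i(theta)**w_i, which minimizes
    sum_i w_i * KL(pooled || p_i) over pmfs -- the property the quoted passage
    attributes to the Bayesian consensus filter.
    """
    log_pool = weights @ np.log(posteriors)   # weighted geometric average in log space
    log_pool -= log_pool.max()                # shift for numerical stability
    pool = np.exp(log_pool)
    return pool / pool.sum()

# Example: pool three local posteriors over two hypotheses with uniform weights.
posteriors = np.array([[0.9, 0.1], [0.6, 0.4], [0.7, 0.3]])
print(log_opinion_pool(posteriors, np.full(3, 1.0 / 3.0)))
```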
“…$\le \lambda_1(I + A) = 2$. Subsequently, we can employ the eigendecomposition of $(I + A)$ to analyze the behavior of $(I + A)^{t+1}$ in (10). Specifically, we can take a set of biorthonormal vectors $l_i, r_i$ as the left and right eigenvectors corresponding to the $i$th eigenvalue of $I + A$, satisfying $\|l_i\|_2 = \|r_i\|_2 = 1$, $l_i^T r_i = 1$ for all $i$ and $l_i^T r_j = 0$ for $i \neq j$; in particular, $l_1 = r_1 = (1/\sqrt{n})\mathbf{1}$.…”
Section: A Proof of Theorem 1 Maximum Likelihood Estimation
confidence: 99%
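The quoted argument rests on biorthonormal left and right eigenvectors of $I + A$ with leading eigenvalue 2 and leading eigenvector $(1/\sqrt{n})\mathbf{1}$. The sketch below checks these identities numerically for a symmetric, doubly stochastic $A$ (an assumed special case in which left and right eigenvectors coincide); the particular matrix used here is purely illustrative and is not taken from the cited proof.

```python
import numpy as np

# Illustrative check of the quoted eigen-structure for a symmetric,
# doubly stochastic weight matrix A (an assumption, not the cited paper's matrix).
n = 4
A = np.full((n, n), 1.0 / n)           # simple doubly stochastic example

M = np.eye(n) + A                       # the matrix I + A from the quote
eigvals, R = np.linalg.eigh(M)          # symmetric => left and right eigenvectors coincide
order = np.argsort(eigvals)[::-1]       # sort eigenvalues in decreasing order
eigvals, R = eigvals[order], R[:, order]
L = R.copy()                            # biorthonormal pairs: L.T @ R = I

print(np.isclose(eigvals[0], 2.0))                    # lambda_1(I + A) = 2
print(np.allclose(np.abs(R[:, 0]), 1 / np.sqrt(n)))   # l_1 = r_1 = (1/sqrt(n)) * 1
print(np.allclose(L.T @ R, np.eye(n)))                # l_i^T r_j = delta_ij
```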