2017
DOI: 10.1109/tac.2017.2690401

Fast Convergence Rates for Distributed Non-Bayesian Learning

Abstract: We consider the problem of distributed learning, where a network of agents collectively aim to agree on a hypothesis that best explains a set of distributed observations of conditionally independent random processes. We propose a distributed algorithm and establish consistency, as well as a non-asymptotic, explicit and geometric convergence rate for the concentration of the beliefs around the set of optimal hypotheses. Additionally, if the agents interact over static networks, we provide an improved learning p…

Cited by 189 publications (244 citation statements)
References 59 publications
“…Interestingly, the results in [17] show that if each agent uses a log-linear information aggregation rule, then radical agents can learn the real state collectively. That is, radical agents can avoid information cascades by adopting a more rational information aggregation approach.…”
Section: B. Social Learning of Radical Agents (mentioning)
confidence: 99%
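A log-linear aggregation rule of the kind referred to in this excerpt combines neighbors' beliefs geometrically, i.e., it averages log-beliefs with the network weights and then folds in the likelihood of the agent's newest local observation. The following is a minimal sketch under assumed names (belief matrix `B`, row-stochastic weight matrix `A`, likelihood table `L`); it illustrates the style of rule, not the exact algorithm of [17] or of the paper above.

```python
import numpy as np

def log_linear_update(B, A, L, obs):
    """Illustrative sketch: one round of a log-linear (geometric-averaging) belief update.

    B   : (n_agents, n_hyp) current beliefs; each row is a probability vector
    A   : (n_agents, n_agents) row-stochastic mixing weights (A[i, j] > 0 only
          if agent i listens to agent j)
    L   : (n_agents, n_hyp, n_obs) likelihood models, L[i, k, s] = P_i(s | hypothesis k)
    obs : (n_agents,) index of the signal observed by each agent this round
    """
    n = B.shape[0]
    log_mix = A @ np.log(B)                                # weighted average of log-beliefs
    log_lik = np.log(np.array([L[i, :, obs[i]] for i in range(n)]))  # local evidence
    log_new = log_mix + log_lik                            # Bayesian-style local update
    log_new -= log_new.max(axis=1, keepdims=True)          # numerical stabilization
    B_new = np.exp(log_new)
    return B_new / B_new.sum(axis=1, keepdims=True)        # renormalize each row
```

Working in log space keeps the accumulated evidence additive across rounds, which is the usual intuition for why geometric averaging is less prone to the herding behavior described in the excerpt.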
“…Many algorithmic approaches have been conceived for this purpose [1][2][3][4], including the non-Bayesian approach, in which agents update their beliefs or opinions by using local streaming observations and by combining information shared by their neighbors. Some of the main studies along these lines rely on consensus and diffusion strategies [5,6], both with linear and log-exponential belief combinations (see, e.g., [7][8][9][10][11][12]). In all of these works, it is assumed that agents share with their neighbors their entire belief vectors.…”
Section: Introduction and Related Work (mentioning)
confidence: 99%
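The two combination styles mentioned in this excerpt differ only in the space where neighbors' beliefs are averaged: linear rules average the probability vectors directly, while log-exponential (log-linear) rules average their logarithms and renormalize. A small sketch, with `A` an assumed row-stochastic weight matrix and `B` the stacked belief vectors:

```python
import numpy as np

def combine_linear(B, A):
    """Arithmetic averaging: each agent mixes neighbors' belief vectors directly."""
    return A @ B  # rows remain probability vectors when A is row-stochastic

def combine_log_exponential(B, A):
    """Geometric averaging: mix log-beliefs over the network, then renormalize."""
    W = np.exp(A @ np.log(B))
    return W / W.sum(axis=1, keepdims=True)
```

In either case, each agent would subsequently incorporate the likelihood of its own new observation, as in the earlier sketch.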
“…There exist several variants of social learning algorithms, which assume different protocols for the distributed propagation of information, as well as different ways of combining the neighbors' beliefs. Most algorithms rely either on consensus [5] or diffusion strategies [6][7][8][9][10], with some works using linear combinations of beliefs [5][6][7][8] and other works using logarithmic beliefs [9,10]. However, with the exception of [6][7][8], most prior works focus mainly on strongly-connected networks (i.e., graphs where there is a direct and reverse path between any two agents, in addition to some agents having a self-loop as a sign of confidence in their own information).…”
Section: Introduction and Related Work (mentioning)
confidence: 99%
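The strong-connectivity condition described in this excerpt can be checked directly on the communication graph: every agent must be reachable from every other agent along directed edges. A minimal sketch, assuming the network is given as an adjacency list `adj` (a hypothetical name) keyed by every agent:

```python
from collections import deque

def is_strongly_connected(adj):
    """adj: dict mapping every agent to the list of agents it sends information to."""
    nodes = list(adj)

    def reachable(graph, start):
        seen, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in graph.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    # Build the reverse graph: every edge u -> v becomes v -> u.
    rev = {u: [] for u in nodes}
    for u in nodes:
        for v in adj[u]:
            rev.setdefault(v, []).append(u)

    start = nodes[0]
    return len(reachable(adj, start)) == len(nodes) and len(reachable(rev, start)) == len(nodes)

# Example: a directed 3-cycle is strongly connected, a directed chain is not.
print(is_strongly_connected({1: [2], 2: [3], 3: [1]}))  # True
print(is_strongly_connected({1: [2], 2: [3], 3: []}))   # False
```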
“…However, with the exception of [6][7][8], most prior works focus mainly on strongly-connected networks (i.e., graphs where there is a direct and reverse path between any two agents, in addition to some agents having a self-loop as a sign of confidence in their own information). Under this setting, the limiting (as time elapses) evolution of the individual agents' beliefs has been shown to converge collectively to the same opinion, which can be the true underlying hypothesis [5,6,10] or a hypothesis minimizing a suitable objective function [9].…”
Section: Introduction and Related Work (mentioning)
confidence: 99%
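In this line of work, the "suitable objective function" is typically the network-wide sum of Kullback-Leibler divergences between each agent's true observation distribution and its likelihood model under a candidate hypothesis; the beliefs then concentrate on its minimizers. A small sketch under assumed names (`P_true`, `L` are illustrative, not notation from the cited papers):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) between finite distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def optimal_hypotheses(P_true, L):
    """Illustrative sketch: hypotheses minimizing the summed KL divergence across agents.

    P_true : (n_agents, n_obs) true signal distribution of each agent
    L      : (n_agents, n_hyp, n_obs) likelihood models, L[i, k] = P_i(. | hypothesis k)
    """
    n_agents, n_hyp, _ = L.shape
    obj = np.array([sum(kl(P_true[i], L[i, k]) for i in range(n_agents))
                    for k in range(n_hyp)])
    return np.flatnonzero(np.isclose(obj, obj.min())), obj
```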