2018
DOI: 10.1109/tit.2018.2837050
Social Learning and Distributed Hypothesis Testing

Abstract: This paper considers a problem of distributed hypothesis testing and social learning. Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypotheses). The marginals of the joint observation distribution conditioned on each hypothesis are known locally at the nodes, but the true parameter/hypothesis is not known. An update rule is analyzed in which nodes first perform a Bayesian update of their belief (distribution estimate) of ea…

Cited by 129 publications (233 citation statements)
References 45 publications
“…Geometric averaging and logarithmic opinion pools have a long history in Bayesian analysis and behavioral decision models [40], [41], and they can also be justified under specific behavioral assumptions [42]. They are also quite popular as a non-Bayesian update rule in the distributed detection and estimation literature [43], [44], [10], [14], [45]. In [45], the authors use a logarithmic opinion pool to combine the estimated posterior probability distributions in a Bayesian consensus filter, and show that, as a result, the sum of Kullback-Leibler divergences between the consensual probability distribution and the local posterior probability distributions is minimized.…”
Section: Discussion (mentioning)
confidence: 99%
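The geometric-averaging step quoted in this excerpt is easy to state concretely. Below is a minimal sketch of a logarithmic opinion pool over a discrete hypothesis set; the function name, the uniform weights, and the two-agent example are illustrative assumptions, not taken from [45]:

```python
import numpy as np

def log_opinion_pool(posteriors, weights=None):
    """Pool local posterior pmfs with a weighted geometric mean
    (an arithmetic mean in the log domain). The normalized result
    minimizes the weighted sum of KL divergences D(q || mu_k)."""
    posteriors = np.asarray(posteriors, dtype=float)  # (num_agents, num_hypotheses)
    if weights is None:                               # assume equal trust in all agents
        weights = np.full(posteriors.shape[0], 1.0 / posteriors.shape[0])
    log_pool = weights @ np.log(posteriors)           # weighted sum of log-beliefs
    pool = np.exp(log_pool - log_pool.max())          # shift for numerical stability
    return pool / pool.sum()                          # renormalize to a pmf

# Two agents, three hypotheses:
mu_1 = np.array([0.7, 0.2, 0.1])
mu_2 = np.array([0.5, 0.4, 0.1])
print(log_opinion_pool([mu_1, mu_2]))  # ~[0.61, 0.29, 0.10]
```

The KL-minimizing property quoted above is exactly why the averaging happens in the log domain rather than linearly: the normalized weighted geometric mean is the pmf closest, in summed KL divergence, to all the local posteriors at once.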
“…Other relevant results investigate the formation and evolution of beliefs in social networks and the subsequent shaping of individual and mass behavior through social learning [12], [13], [14]. The archetype of such models is the one due to DeGroot [15], where agents update their beliefs to a convex combination of their neighbors' beliefs, and the coefficients correspond to the level of confidence that each agent places in each of her neighbors.…”
Section: Introduction (mentioning)
confidence: 99%
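A short simulation makes the DeGroot recursion concrete. The trust matrix and initial opinions below are illustrative assumptions, not taken from [15]:

```python
import numpy as np

# Row-stochastic trust matrix: W[i, j] is the confidence that agent i
# places in neighbor j (each row sums to 1).
W = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])

opinions = np.array([0.9, 0.1, 0.5])  # initial scalar opinions of 3 agents

# Each round, every agent adopts a convex combination of its own and
# its neighbors' opinions.
for _ in range(50):
    opinions = W @ opinions

print(opinions)  # consensus for this connected, aperiodic W: ~[0.457, 0.457, 0.457]
```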
“…Proof of Theorem 1. When θ_TX = θ₀, if the inequality in (12) holds, the convergence rate from Lemma 1 will be strictly negative, which implies, since μ_{k,i}(θ₀) is bounded by 1, that for any θ ≠ θ₀, lim_{i→∞} μ_{k,i}(θ) = 0 almost surely, resulting in (12). When θ_TX ≠ θ₀, the convergence behavior will depend on the sign of the RHS of (6).…”
Section: Partial Approach (mentioning)
confidence: 99%
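For readers without access to Lemma 1 and equations (6) and (12) of the citing paper, the boundedness step in this excerpt follows a standard pattern; a sketch, where the rate constant K stands in for whatever Lemma 1 actually provides:

```latex
% A strictly negative asymptotic rate forces the belief on a wrong
% hypothesis to vanish:
\limsup_{i\to\infty} \tfrac{1}{i}\log \mu_{k,i}(\theta) \le -K < 0
\ \text{a.s.}
\;\Longrightarrow\;
\mu_{k,i}(\theta) \le e^{-Ki/2} \ \text{for all large } i
\;\Longrightarrow\;
\lim_{i\to\infty} \mu_{k,i}(\theta) \overset{\text{a.s.}}{=} 0,
% and, because beliefs sum to one over the hypothesis set,
\sum_{\theta} \mu_{k,i}(\theta) = 1
\;\Longrightarrow\;
\mu_{k,i}(\theta_0) \overset{\text{a.s.}}{\longrightarrow} 1.
```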
“…Many algorithmic approaches have been conceived for this purpose [1][2][3][4], including the non-Bayesian approach, in which agents update their beliefs or opinions by using local streaming observations and by combining information shared by their neighbors. Some of the main studies along these lines rely on consensus and diffusion strategies [5,6], both with linear and log-exponential belief combinations (see, e.g., [7][8][9][10][11][12]). In all of these works, it is assumed that agents share with their neighbors their entire belief vectors.…”
Section: Introduction and Related Work (mentioning)
confidence: 99%
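As a concrete instance of the log-exponential variant mentioned in this excerpt, here is a minimal adapt-then-combine sketch: each agent performs a local Bayesian update on its streaming observation, then geometrically averages log-beliefs over its neighbors. The combination matrix, likelihoods, and network size are illustrative assumptions, not drawn from the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.50, 0.50, 0.00],   # row-stochastic combination matrix:
              [0.25, 0.50, 0.25],   # A[k, l] is the weight agent k gives
              [0.00, 0.50, 0.50]])  # to neighbor l's belief

# P(observation = 1 | hypothesis) for 3 agents and 2 hypotheses;
# hypothesis 0 is the true one generating the data.
p_one = np.array([[0.8, 0.3],
                  [0.6, 0.4],
                  [0.7, 0.5]])

beliefs = np.full((3, 2), 0.5)  # uniform initial beliefs

for _ in range(200):
    obs = (rng.random(3) < p_one[:, 0]).astype(int)        # sample under hypothesis 0
    lik = np.where(obs[:, None] == 1, p_one, 1.0 - p_one)  # per-agent likelihoods
    bayes = beliefs * lik                                  # local Bayesian update,
    bayes /= bayes.sum(axis=1, keepdims=True)              # renormalized per agent
    beliefs = np.exp(A @ np.log(bayes))                    # log-exponential combination
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs.round(4))  # each row concentrates on the true hypothesis 0
```

Note that, as the excerpt points out, each agent in this scheme transmits its entire belief vector to its neighbors at every step.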
“…However, with the exception of [6][7][8], most prior works focus mainly on strongly connected networks (i.e., graphs with a direct and a reverse path between any two agents, in addition to some agents having a self-loop as a sign of confidence in their own information). Under this setting, the individual agents' beliefs have been shown to converge, as time elapses, to the same opinion, which can be the true underlying hypothesis [5,6,10] or a hypothesis minimizing a suitable objective function [9].…”
Section: Introduction and Related Work (mentioning)
confidence: 99%
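The strong-connectivity condition in this excerpt can be checked mechanically from the sparsity pattern of the combination matrix; a minimal sketch, where the helper name and example matrix are assumptions:

```python
import numpy as np

def is_strongly_connected(adj):
    """A directed graph on n nodes is strongly connected iff every
    entry of (I + A)^(n-1) is positive, i.e., each node can reach
    every other node along some directed path."""
    n = adj.shape[0]
    reach = np.linalg.matrix_power(np.eye(n) + adj, n - 1)
    return bool(np.all(reach > 0))

# Combination matrix whose sparsity pattern defines the network;
# positive diagonal entries are the self-loops mentioned above.
A = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])

print(is_strongly_connected(A > 0))  # True: direct and reverse paths exist
```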