2015 49th Asilomar Conference on Signals, Systems and Computers
DOI: 10.1109/acssc.2015.7421302
Large deviation analysis for learning rate in distributed hypothesis testing

Abstract: This paper considers a problem of distributed hypothesis testing and cooperative learning. Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypotheses). The conditional distributions are known locally at the nodes, but the true parameter/hypothesis is not known. We consider a social ("non-Bayesian") learning rule from previous literature, in which nodes first perform a Bayesian update of their belief (distribution estimate) of…

Cited by 5 publications (12 citation statements) · References 9 publications (22 reference statements)
“…Thus, based on (13), we must have µ_{i,t(ω)}(θ) > 0, yielding the desired contradiction. With η(ω) := min{γ₁ − δ, γ₂(ω)} > 0, one can easily verify the following by referring to (13):…”
Section: Proof of Theorem
confidence: 97%
“…The key point of distinction among such rules stems from the specific manner in which neighboring opinions are aggregated. Specifically, linear opinion pooling is studied in [4][5][6], whereas log-linear opinion pooling is studied in [7][8][9][10][11][12][13][14]. Under appropriate conditions on the observation model and the network structure, each of these approaches enables every agent to learn the true state exponentially fast, with probability 1.…”
Section: Introduction
confidence: 99%
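The linear vs. log-linear pooling distinction in the excerpt above can be illustrated with a minimal sketch. This is a hedged illustration only: the two-neighbor, two-hypothesis belief vectors and the equal weights below are hypothetical, not taken from any of the cited papers.

```python
import math

def linear_pool(beliefs, weights):
    # Linear opinion pooling: a weighted arithmetic mean of the neighbors'
    # belief vectors; the result is already a probability distribution.
    n_hyp = len(beliefs[0])
    return [sum(w * b[k] for w, b in zip(weights, beliefs)) for k in range(n_hyp)]

def log_linear_pool(beliefs, weights):
    # Log-linear opinion pooling: a weighted geometric mean of the neighbors'
    # belief vectors, renormalized so it sums to one.
    n_hyp = len(beliefs[0])
    unnorm = [math.exp(sum(w * math.log(b[k]) for w, b in zip(weights, beliefs)))
              for k in range(n_hyp)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

beliefs = [[0.6, 0.4], [0.2, 0.8]]  # two neighbors, two hypotheses (hypothetical values)
weights = [0.5, 0.5]
print(linear_pool(beliefs, weights))      # ≈ [0.4, 0.6]
print(log_linear_pool(beliefs, weights))
```

The two pools generally disagree: the log-linear pool penalizes hypotheses on which any neighbor places low belief more heavily than the arithmetic mean does, which is one source of the different learning rates studied in the cited works.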
“…This class of problems has been studied for several decades, initially for scenarios involving a centralized fusion center [1], and more recently in fully distributed settings where agents are interconnected over a network [2][3][4][5][6][7][8][9][10][11][12][13]. The distributed algorithms provided in these latter papers require each agent to iteratively combine belief vectors obtained from their neighbors with Bayesian updates involving their local signals [2][3][4][5][6][7][8][9][10][11][12][13]. These rules ensure that all agents asymptotically learn the true state of the world, with the main differences being in the rate of learning.…”
Section: Introduction
confidence: 99%
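The iterative rule described in the excerpt above — a local Bayesian update with the private signal, followed by aggregation of neighbors' beliefs — can be sketched end to end. This is a hedged illustration of the log-linear variant only: the coin-flip observation model, the head probabilities in HEAD_PROB, and the two-agent doubly stochastic weight matrix are all illustrative assumptions, not the setup or exact rule of any cited paper.

```python
import math
import random

random.seed(0)

# Two hypotheses; each agent observes a biased coin whose head-probability
# depends on the true hypothesis. Values are illustrative assumptions.
HEAD_PROB = {0: [0.3, 0.3], 1: [0.7, 0.5]}  # HEAD_PROB[theta][agent]
TRUE_THETA = 1

def likelihood(agent, signal):
    # P(signal | theta) for each hypothesis theta.
    return [HEAD_PROB[t][agent] if signal else 1 - HEAD_PROB[t][agent]
            for t in (0, 1)]

adjacency = [[0.5, 0.5], [0.5, 0.5]]  # doubly stochastic weight matrix
beliefs = [[0.5, 0.5], [0.5, 0.5]]    # uniform priors

for _ in range(200):
    # Step 1: local Bayesian update with the private signal.
    updated = []
    for i, belief in enumerate(beliefs):
        signal = random.random() < HEAD_PROB[TRUE_THETA][i]
        lik = likelihood(i, signal)
        unnorm = [b * l for b, l in zip(belief, lik)]
        total = sum(unnorm)
        updated.append([u / total for u in unnorm])
    # Step 2: log-linear pooling of neighbors' updated beliefs.
    beliefs = []
    for weights in adjacency:
        unnorm = [math.exp(sum(w * math.log(b[k]) for w, b in zip(weights, updated)))
                  for k in (0, 1)]
        total = sum(unnorm)
        beliefs.append([u / total for u in unnorm])

# Beliefs concentrate on the true hypothesis exponentially fast.
print([round(b[TRUE_THETA], 4) for b in beliefs])
```

Running the loop long enough drives every agent's belief on the wrong hypothesis toward zero at a rate governed by the informativeness of the observation models, which is exactly the learning rate the paper analyzes via large deviations.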