2019 American Control Conference (ACC)
DOI: 10.23919/acc.2019.8815382

On Increasing Self-Confidence in Non-Bayesian Social Learning over Time-Varying Directed Graphs

Abstract: We study the convergence of the log-linear non-Bayesian social learning update rule, for a group of agents that collectively seek to identify a parameter that best describes a joint sequence of observations. Contrary to recent literature, we focus on the case where agents assign decaying weights to their neighbors, and the network is not connected at every time instant but only over finite time intervals. We provide a necessary and sufficient condition on the rate at which agents decrease the weights and still …
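The rule referenced in the abstract combines each agent's neighbors' beliefs through a weighted geometric average (a weighted sum in log space) and then applies a Bayesian-style correction with the likelihood of the newest private observation; "increasing self-confidence" corresponds to the off-diagonal (neighbor) weights decaying over time. The sketch below is a minimal illustration of that style of update, not the paper's exact algorithm: the decay schedule 1/(t+1)^gamma and the two-hypothesis Gaussian likelihoods are assumptions made only for this example.

```python
import numpy as np

def log_linear_update(beliefs, log_likelihoods, A):
    """One log-linear (geometric-average) social learning step.

    beliefs:         (n_agents, n_hypotheses), each row a probability vector
    log_likelihoods: (n_agents, n_hypotheses), log-likelihood of each agent's
                     new private observation under each hypothesis
    A:               (n_agents, n_agents), row-stochastic weights at time t
    """
    # Geometric averaging of neighbor beliefs is a weighted sum in log space,
    # followed by the likelihood correction and a per-agent renormalisation.
    mixed = A @ np.log(beliefs) + log_likelihoods
    mixed -= mixed.max(axis=1, keepdims=True)        # numerical stability
    new_beliefs = np.exp(mixed)
    return new_beliefs / new_beliefs.sum(axis=1, keepdims=True)

def decaying_neighbor_weights(A0, t, gamma=0.5):
    """Scale the off-diagonal entries of a row-stochastic matrix A0 by
    1/(t+1)**gamma and move the removed mass onto the diagonal, so each agent
    trusts its neighbors less (and itself more) as t grows. This schedule is an
    assumption for illustration, not the condition derived in the paper."""
    A = A0 / (t + 1) ** gamma
    np.fill_diagonal(A, 0.0)
    return A + np.diag(1.0 - A.sum(axis=1))

# Small usage example: 4 agents, 2 hypotheses (means 0 and 1), true mean = 1.
rng = np.random.default_rng(0)
n, means = 4, np.array([0.0, 1.0])
beliefs = np.full((n, len(means)), 0.5)
A0 = np.full((n, n), 1.0 / n)                        # fully mixing baseline graph
for t in range(100):
    obs = rng.normal(loc=1.0, size=n)                # private Gaussian observations
    loglik = -0.5 * (obs[:, None] - means[None, :]) ** 2
    beliefs = log_linear_update(beliefs, loglik, decaying_neighbor_weights(A0, t))
print(beliefs.round(3))                              # mass concentrates on hypothesis 1
```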

Cited by 9 publications (4 citation statements)
References 21 publications
“…Then, the beliefs are updated by scaling the combined beliefs by the likelihood of their new observation given that the particular model is the ground truth. Variations of these learning rules have been proposed to handle fixed and time-varying graphs [11], weakly-connected graphs [12], [13], increasing self-confidence [14], compact hypotheses sets [15], and adversarial attacks [16], [17].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Then, using property (a) of Proposition III.1, the posterior distribution converges in probability to a delta function as $t \to \infty$, i.e., $\lim_{t\to\infty} f(\varphi \mid \psi(\omega_{1:t})) = \delta_{\varphi_{\theta^*}}(\varphi)$. This results in (17) converging in probability to…”
Section: Asymptotic Properties of the Uncertain Likelihood Ratio (mentioning)
Confidence: 99%
“…Several social learning (fusion) rules have been proposed in the literature, including weighted averages [1], [6], geometric averages [7]–[9], constant elasticity of substitution models [10], and minimum operators [11], [12]. These learning rules have been applied to undirected/directed graphs, time-varying graphs [13], [14], weakly-connected graphs [15], [16], agents with increasing self-confidence [17], compact hypothesis sets [18], and under adversarial attacks [19]–[22]. Each approach presents a variation of one of the above learning rules and provides theoretical guarantees (asymptotically) that the agents will learn the true state of the world.…”
Section: Introduction (mentioning)
Confidence: 99%
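The statement above groups the proposed social learning rules into a few fusion families (weighted/arithmetic averages, geometric averages, CES models, and minimum operators). As a point of reference, a minimal sketch of how three of these families combine a stack of neighbor beliefs might look like the following; the uniform default weights and the final renormalisation are assumptions made for the illustration, not details taken from the cited papers.

```python
import numpy as np

def fuse_beliefs(neighbor_beliefs, weights=None, rule="geometric"):
    """Fuse neighbor beliefs over a finite hypothesis set with one of three rules.

    neighbor_beliefs: (n_neighbors, n_hypotheses), each row a probability vector
    weights:          (n_neighbors,), defaults to uniform weights
    """
    p = np.asarray(neighbor_beliefs, dtype=float)
    w = np.full(p.shape[0], 1.0 / p.shape[0]) if weights is None else np.asarray(weights)

    if rule == "arithmetic":        # weighted (linear) average of beliefs
        fused = w @ p
    elif rule == "geometric":       # weighted geometric average (log-linear rule)
        fused = np.exp(w @ np.log(p))
    elif rule == "minimum":         # element-wise minimum operator
        fused = p.min(axis=0)
    else:
        raise ValueError(f"unknown rule: {rule}")

    return fused / fused.sum()      # renormalise to a probability vector

# Example: two neighbors that disagree; the rules weight the overlap differently.
print(fuse_beliefs([[0.8, 0.2], [0.3, 0.7]], rule="arithmetic"))
print(fuse_beliefs([[0.8, 0.2], [0.3, 0.7]], rule="geometric"))
print(fuse_beliefs([[0.8, 0.2], [0.3, 0.7]], rule="minimum"))
```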
“…rules have been applied to undirected/directed graphs, time-varying graphs [14], [15], weakly-connected graphs [16], agents with increasing self-confidence [17], compact hypothesis sets [18], uncertain models [19], and under adversarial attacks [20]–[23].…”
Section: Introduction (mentioning)
Confidence: 99%