2022
DOI: 10.1109/tsp.2022.3168490
Federated Generalized Bayesian Learning via Distributed Stein Variational Gradient Descent

Cited by 20 publications (21 citation statements) · References: 27 publications
“…Unlike the LMC, the Stein Variational Gradient Descent (SVGD) algorithm applies gradient descent directly to KL(·|π) (see Section 2.2 for the complete definition). SVGD is an important alternative to the Langevin algorithm and has already been used extensively in different settings of machine learning, such as variational auto-encoders [Pu et al., 2017], reinforcement learning [Liu et al., 2017], sequential decision making [Zhang et al., 2018, 2019b], generative adversarial networks [Tao et al., 2019] and federated learning [Kassab and Simeone, 2022].…”
Section: SVGD (citation type: mentioning)
Confidence: 99%
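As a concrete illustration of the update this excerpt describes, here is a minimal NumPy sketch of one SVGD step with an RBF kernel. The fixed bandwidth h, the step size, and the toy Gaussian target are illustrative assumptions, not details taken from the cited works.

```python
import numpy as np

def svgd_step(x, grad_logp, stepsize=0.05, h=1.0):
    """One SVGD update: particles x of shape (n, d) move along the
    kernelized descent direction for KL(q | pi), with an RBF kernel."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]        # diff[i, j] = x_i - x_j
    sq_dists = np.sum(diff ** 2, axis=-1)       # pairwise squared distances
    K = np.exp(-sq_dists / (2 * h ** 2))        # kernel matrix k(x_j, x_i)
    # Driving term: sum_j k(x_j, x_i) * grad log pi(x_j)  (pulls toward mass)
    drive = K @ grad_logp(x)
    # Repulsive term: sum_j grad_{x_j} k(x_j, x_i)        (keeps particles spread)
    repulse = np.sum(K[:, :, None] * diff, axis=1) / h ** 2
    return x + stepsize * (drive + repulse) / n

# Toy usage (assumed example): particles converging to a standard Gaussian,
# whose score is grad log pi(x) = -x.
particles = np.random.randn(50, 2) * 3.0
for _ in range(500):
    particles = svgd_step(particles, lambda x: -x)
```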
“…Furthermore, in an FL framework, the "collapse" of uncertainty in the model parameter space (also known as epistemic uncertainty) to a single model parameter vector prevents agents from properly communicating their respective states of knowledge about the problem. This, in turn, can yield slower convergence [7].…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
“…SVGD approximates the posterior distribution using a set of particles, as in MC sampling, while also benefiting from the faster convergence of VI through deterministic optimization rather than sampling. The Distributed SVGD (DSVGD) protocol introduced in [7] extends SVGD to FL (see Fig. 1a).…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
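The excerpt names DSVGD only at a high level. The schematic below (reusing the svgd_step helper from the earlier sketch) shows one hypothetical way shared particles could be passed between clients in an FL round; it assumes sequential client scheduling and per-client score functions, and is not the actual protocol of [7].

```python
# Schematic federated particle round; NOT the exact DSVGD protocol of [7],
# whose agent scheduling and local tilted targets differ.
def federated_particle_round(particles, client_grad_logps,
                             local_steps=10, stepsize=0.05):
    """Visit clients sequentially: each downloads the shared particles,
    refines them with SVGD steps against its own local score function
    (one entry of client_grad_logps), and uploads the result."""
    for grad_logp in client_grad_logps:  # hypothetical per-client scores
        for _ in range(local_steps):
            particles = svgd_step(particles, grad_logp, stepsize)
    return particles
```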