ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9414903

Federated Learning from Big Data Over Networks

Abstract: This paper formulates and studies a novel algorithm for federated learning from large collections of local datasets. This algorithm capitalizes on an intrinsic network structure that relates the local datasets via an undirected "empirical" graph. We model such big data over networks using a networked linear regression model. Each local dataset has individual regression weights. The weights of close-knit sub-collections of local datasets are constrained to deviate only slightly. This leads naturally to a network Lasso…
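
To make the model sketched in the abstract concrete, the following Python snippet is a minimal illustration (not the authors' implementation) of a networked linear-regression objective: each node of the empirical graph holds a local dataset with its own weight vector, and an edge-weighted penalty couples the weights of connected nodes. The names `networked_objective`, `W`, `datasets`, `edges`, and `lam` are assumptions made for this example.

```python
import numpy as np

def networked_objective(W, datasets, edges, lam):
    """Sketch of a networked linear-regression objective.

    W        : (n_nodes, d) array, one weight vector per local dataset
    datasets : list of (X_i, y_i) pairs, the local datasets
    edges    : list of (i, j, a_ij) tuples, weighted edges of the empirical graph
    lam      : strength of the network coupling penalty
    """
    # Local fitting error: each node fits its own linear model.
    loss = sum(np.mean((X @ W[i] - y) ** 2) for i, (X, y) in enumerate(datasets))

    # Network penalty: weights of well-connected nodes should deviate only slightly.
    # The (unsquared) l2 norm of the difference gives a network-Lasso-type penalty.
    tv = sum(a_ij * np.linalg.norm(W[i] - W[j]) for i, j, a_ij in edges)

    return loss + lam * tv

# Tiny usage example: 3 nodes on a chain graph, 2 features per node.
rng = np.random.default_rng(0)
datasets = [(rng.normal(size=(20, 2)), rng.normal(size=20)) for _ in range(3)]
edges = [(0, 1, 1.0), (1, 2, 1.0)]
W = np.zeros((3, 2))
print(networked_objective(W, datasets, edges, lam=0.1))
```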

Cited by 9 publications (4 citation statements)
References 16 publications (18 reference statements)
“…the loss and regularizer function for ridge regression in (10), the action-value function can be expressed as…”
Section: Procedures At Client K
confidence: 99%
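
The equation referenced as (10) and the resulting action-value expression are not reproduced in this snippet. As a generic, assumed illustration of the two ingredients the quote names, a ridge-regression loss and l2 regularizer can be written as follows (the parameter `alpha` is an assumption, not taken from the citing paper):

```python
import numpy as np

def ridge_loss(w, X, y):
    # Squared-error loss of a linear model with weights w.
    return np.mean((X @ w - y) ** 2)

def ridge_regularizer(w, alpha):
    # Standard l2 (ridge) regularizer with strength alpha.
    return alpha * np.dot(w, w)
```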
“…Several challenges associated with the practical implementation of FL, such as communication efficiency [4], privacy preservation [5], byzantine attacks [6], and asynchronous behavior of devices and communication links [7], have been studied extensively in the literature. In contrast to single-server methodologies, multiserver architectures [8], [9] and fully distributed architectures [10] have also been studied recently. This paper deals with a fully distributed architecture.…”
Section: Introduction
confidence: 99%
“…Two specific choices are φ(v) := ‖v‖₂, which is used by nLasso [25], and φ(v) := (1/2)‖v‖₂², which is used by "MOCHA" [54]. Another recent FL method for networked data uses the choice φ(v) := ‖v‖₁ [50].…”
Section: Generalized Total Variation Minimization
confidence: 99%
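
To make the quoted penalty choices concrete, the sketch below (an illustration, not code from any of the cited works) defines the three functions φ of the weight difference v = w_i - w_j across an edge of the empirical graph:

```python
import numpy as np

def phi_nlasso(v):
    # phi(v) = ||v||_2, the choice used by nLasso [25]
    return np.linalg.norm(v, 2)

def phi_mocha(v):
    # phi(v) = (1/2) ||v||_2^2, the choice used by MOCHA [54]
    return 0.5 * np.linalg.norm(v, 2) ** 2

def phi_l1(v):
    # phi(v) = ||v||_1, the choice studied in [50]
    return np.linalg.norm(v, 1)
```

The choice of φ controls how strongly, and in what manner, the weights of neighbouring local datasets are tied together.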
“…The FL algorithm MOCHA [54] is obtained from (4) for the choice φ(v) := ‖v‖₂². Another special case of (4), obtained for the choice φ(v) := ‖v‖₁, has been studied recently [50].…”
Section: Generalized Total Variation Minimization
confidence: 99%