2021
DOI: 10.36227/techrxiv.14685696.v1
Preprint

Networked Federated Multi-Task Learning

Abstract: We characterize the network structure of data such that federated multi-task learning is possible.

Cited by 3 publications (3 citation statements)
References 39 publications
“…By considering each FL client as a task in MTL, the server can learn and discern the relationships among clients based on their heterogeneous local data. MOCHA [10] expands distributed MTL to the FL context, learning personalized models for each client; however, all clients must participate in each round of FL model training. MOCHA is only applicable to convex models and has limited generalization in deep learning.…”
Section: Related Work
confidence: 99%
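
For context, the statement's claim that MOCHA [10] learns personalized per-client models but handles only convex losses can be made concrete. Below is a sketch of a federated MTL objective in the style of MOCHA; the regularizer shown is a common choice in that literature and is an assumption here, not a transcription of [10]:

```latex
\min_{\mathbf{W},\,\mathbf{\Omega}}\;
  \sum_{t=1}^{m}\sum_{i=1}^{n_t}
    \ell_t\!\left(\mathbf{w}_t^{\top}\mathbf{x}_t^{i},\, y_t^{i}\right)
  \;+\; \lambda_1\,\operatorname{tr}\!\left(\mathbf{W}\mathbf{\Omega}\mathbf{W}^{\top}\right)
  \;+\; \lambda_2\,\lVert\mathbf{W}\rVert_F^2
```

Here W = [w_1, ..., w_m] stacks the m client (task) models and Ω captures task relationships; the requirement that each loss ℓ_t be convex is what limits MOCHA to convex models, as the statement notes.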
“…To enhance the accuracy of FL in data-heterogeneous scenarios, researchers have introduced various similarity-based personalized federated learning (PFL) approaches [9], including multi-task learning (MTL) [10][11][12], model interpolation [13][14][15], and clustering [16][17], aiming to mitigate the weight divergence issue. However, most methods addressing non-IID FL present considerable drawbacks, as they lead to significant computation and communication overhead.…”
Section: Introduction
confidence: 99%
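
Of the similarity-based PFL families the statement lists, model interpolation is the simplest to illustrate. The following is a minimal, hypothetical sketch; the function name, parameter names, and the mixing weight alpha are illustrative assumptions, not taken from [13]-[15]:

```python
import numpy as np

def interpolate_models(local_params, global_params, alpha=0.7):
    """Blend a client's local parameters with the shared global ones.

    alpha = 1.0 recovers the purely local model, alpha = 0.0 the purely
    global model; intermediate values yield a personalized model.
    """
    return {name: alpha * local_params[name] + (1.0 - alpha) * global_params[name]
            for name in local_params}

# Toy usage with a two-parameter linear model.
local_w = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
global_w = {"w": np.array([0.2, 0.1]), "b": np.array([0.0])}
personalized = interpolate_models(local_w, global_w, alpha=0.7)
```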
“…As future work, one can put this work into the context of regularized empirical risk minimization and its special case, generalized total variation minimization; see [29]-[30]. Moreover, one can place this work in the context of a line of work on total variation minimization methods motivated by clustered signals; see [31]-[33], which, in contrast to our approach, use a non-smooth variant of total variation.…”
Section: Adaptive Algorithm
confidence: 99%
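
Since this statement situates the preprint within generalized total variation (GTV) minimization, a sketch of that objective over an empirical graph G = (V, E) with edge weights A_ij may help; the exact penalty used by each cited work varies, so the form below is an assumption:

```latex
\min_{\{\mathbf{w}_i\}_{i\in\mathcal{V}}}\;
  \sum_{i\in\mathcal{V}} L_i(\mathbf{w}_i)
  \;+\; \lambda \sum_{\{i,j\}\in\mathcal{E}} A_{ij}\,
        \phi\!\left(\mathbf{w}_i - \mathbf{w}_j\right)
```

Here L_i is node i's local empirical risk and φ penalizes disagreement between the models at the endpoints of each edge; [31]-[33] use a non-smooth φ (e.g., a Euclidean norm), whereas smooth choices of φ keep the objective differentiable.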