Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2022
DOI: 10.1145/3534678.3539237

Collaboration Equilibrium in Federated Learning

Cited by 11 publications (15 citation statements)
References 14 publications

“…These incentives can include reputation, monetary compensation, or additional computational infrastructure, among others. 146,147…”
Section: Challenges
confidence: 99%
“…propose the concept of collaboration equilibrium, where clients are grouped such that no individual client could gain more in another configuration. 146 They employ a Pareto optimization framework and benefit graphs to create clusters of clients that reach this equilibrium. Although this approach exhibits potential for achieving collaborative fairness, it necessitates all local clients’ consent to construct a benefit graph by a neutral third party before the initiation of model training.…”
Section: Challenges
confidence: 99%
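To make the benefit-graph idea in the quoted passage concrete, here is a minimal, illustrative sketch (not the authors' implementation): the pairwise benefit values, which the paper estimates through its Pareto optimization procedure, are replaced by hand-set placeholders, and a brute-force check tests whether any client could obtain more benefit under a different coalition.

```python
# Illustrative sketch of a benefit graph and a collaboration-equilibrium check.
# The benefit numbers are hypothetical placeholders; in the paper they are
# estimated per client via Pareto optimization over candidate collaborators.
from itertools import chain, combinations

clients = ["A", "B", "C"]

# benefit[i][j]: assumed gain client i receives when client j is in its coalition
benefit = {
    "A": {"B": 0.4, "C": -0.2},
    "B": {"A": 0.5, "C": -0.1},
    "C": {"A": 0.0, "B": 0.0},
}

def utility(client, coalition):
    """Total benefit a client derives from the other members of its coalition."""
    return sum(benefit[client][other] for other in coalition if other != client)

def best_utility(client):
    """Best utility the client could obtain in any coalition containing it."""
    others = [c for c in clients if c != client]
    subsets = chain.from_iterable(combinations(others, r) for r in range(len(others) + 1))
    return max(utility(client, set(s) | {client}) for s in subsets)

def is_collaboration_equilibrium(partition):
    """No client in any coalition could gain more under another configuration."""
    return all(
        utility(client, coalition) >= best_utility(client)
        for coalition in partition
        for client in coalition
    )

print(is_collaboration_equilibrium([{"A", "B"}, {"C"}]))    # True with these numbers
print(is_collaboration_equilibrium([{"A"}, {"B"}, {"C"}]))  # False: A and B both gain by collaborating
```

This exhaustive check is only feasible for a handful of clients; it stands in for the graph-based procedure described in the quoted passage.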
“…For model fairness, Hu et al. [33] proposed the FedMGDA+ method, which performs multi-objective optimization by optimizing the loss function of each FL client individually and simultaneously to avoid sacrificing the model performance of any client. Cui et al. [34] proposed a constrained multi-objective optimization framework, learning a model that satisfies the fairness constraints of all clients with consistent performance by optimizing the agent’s maximal function involving all objectives. Li et al. [35] proposed the Ditto method, employing inter-client fine-tuning to minimize individual losses significantly.…”
Section: Background and Related Work
confidence: 99%
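For the Ditto method cited above, the personalized objective for client k is commonly written as minimizing F_k(v) + (λ/2)‖v − w*‖², where w* is the global model. Below is a minimal NumPy sketch of one gradient step on that objective, assuming a mean-squared-error local loss; the data shapes, step size, and λ are illustrative choices, not values from the paper.

```python
# Minimal sketch of a Ditto-style personalized update, assuming a
# linear-regression local loss; all sizes and hyperparameters are illustrative.
import numpy as np

def ditto_local_step(v, w_global, X, y, lam=0.1, lr=0.01):
    """One gradient step on F_k(v) + (lam / 2) * ||v - w_global||^2,
    where F_k is the client's local squared-error loss."""
    residual = X @ v - y
    grad_local = X.T @ residual / len(y)   # gradient of the local squared-error loss
    grad_prox = lam * (v - w_global)       # pulls the personal model toward the global one
    return v - lr * (grad_local + grad_prox)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 5)), rng.normal(size=32)
w_global = rng.normal(size=5)              # stands in for the jointly trained global model
v = w_global.copy()                        # personalized model starts from the global weights
for _ in range(100):
    v = ditto_local_step(v, w_global, X, y)
```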
“…VIRTUAL (Corinzia et al., 2019) is a federated MTL framework for non-convex models based on a hierarchical Bayesian network formed by the central server and the clients, and inference is performed using variational methods. SPO (Cui et al., 2021) applies Specific Pareto Optimization to identify the optimal collaborator sets and learn a hypernetwork for all clients. While also aiming to identify necessary collaborators, SPO adopts a centralized FL setting with clients jointly training the hypernetwork.…”
Section: Model Regularization / Interpolation
confidence: 99%
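As a rough illustration of the hypernetwork idea mentioned for SPO, the following sketch maps a learnable per-client embedding through shared hypernetwork weights to produce client-specific model parameters. The single linear layer, the dimensions, and the variable names are assumptions made for illustration, not SPO's actual architecture.

```python
# Illustrative hypernetwork sketch: shared weights map a per-client embedding
# to that client's model parameters (here, the weights of a small linear model).
# All dimensions and the single-layer design are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_clients, embed_dim, in_dim = 4, 8, 10

client_embeddings = rng.normal(size=(n_clients, embed_dim))  # one learnable vector per client
hyper_weights = 0.1 * rng.normal(size=(embed_dim, in_dim))   # shared hypernetwork parameters

def client_model_params(client_id):
    """Generate the linear model's weight vector for a given client."""
    return client_embeddings[client_id] @ hyper_weights

def client_predict(client_id, X):
    """Run the generated client-specific model on local features X."""
    return X @ client_model_params(client_id)

X_local = rng.normal(size=(5, in_dim))
print(client_predict(0, X_local))   # predictions from client 0's generated model
```

In the centralized setting described in the quote, the shared hypernetwork weights and the client embeddings would be trained jointly, so personalization lives in the embeddings while knowledge is shared through the hypernetwork.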