2021
DOI: 10.48550/arxiv.2112.08524
Preprint

FLoRA: Single-shot Hyper-parameter Optimization for Federated Learning

Abstract: We address the relatively unexplored problem of hyper-parameter optimization (HPO) for federated learning (FL-HPO). We introduce Federated Loss SuRface Aggregation (FLoRA), the first FL-HPO solution framework that can address use cases of tabular data and gradient boosting training algorithms in addition to stochastic gradient descent/neural networks commonly addressed in the FL literature. The framework enables single-shot FL-HPO, by first identifying a good set of hyper-parameters that are used in a single F…
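
The truncated abstract outlines FLoRA's core idea: each party evaluates candidate hyper-parameter configurations on its local data, the server aggregates the per-party loss estimates into a single loss surface, and the minimizer of that aggregated surface is used for one final federated training run. The sketch below illustrates that kind of single-shot selection under stated assumptions; the candidate grid, the mean aggregation rule, and the toy local evaluation function are illustrative stand-ins, not the authors' implementation.

```python
import random
from statistics import mean

# Hypothetical candidate hyper-parameter grid (assumption: a small, fixed grid).
CANDIDATES = [
    {"learning_rate": lr, "max_depth": d}
    for lr in (0.01, 0.1, 0.3)
    for d in (3, 6)
]

def local_loss(config, seed):
    """Stand-in for a party's local evaluation of one configuration.

    In a real system this would train/validate on the party's private data
    and return a validation loss; here it is a deterministic toy function.
    """
    rng = random.Random(seed)
    return ((config["learning_rate"] - 0.1) ** 2
            + 0.01 * config["max_depth"]
            + rng.uniform(0.0, 0.05))

def aggregate_loss_surface(num_parties=3):
    """Collect per-party losses for every candidate and aggregate them.

    Assumption: simple mean aggregation; the paper discusses other
    aggregation rules, which this sketch does not reproduce.
    """
    surface = []
    for config in CANDIDATES:
        losses = [local_loss(config, seed=party) for party in range(num_parties)]
        surface.append((config, mean(losses)))
    return surface

if __name__ == "__main__":
    surface = aggregate_loss_surface()
    best_config, best_loss = min(surface, key=lambda item: item[1])
    # The selected configuration would then be used for a single federated training run.
    print(f"selected hyper-parameters: {best_config} (aggregated loss {best_loss:.4f})")
```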

Cited by 2 publications (3 citation statements)
References 25 publications

“…Authors in [22] disentangle the parameter set into local model parameters and global aggregation parameters, then update them iteratively with a communication-efficient algorithm. The authors in [25] introduce federated loss surface aggregation (FLoRA) as an FL-HPO solution framework. However, these works have not considered HPO rates or factors and how they can affect performance in FL systems.…”
Section: Related Work (mentioning)
confidence: 99%
“…As a result, most traditional black-box optimizers that require more than one full-fidelity trial are impractical in the FL setting [32]. Thus, multi-fidelity methods, particularly those capable of one-shot optimization [21,50], are more in demand in FedHPO.…”
Section: Uniqueness of Federated Hyperparameter Optimization (mentioning)
confidence: 99%
“…As mentioned in Section 3.1, tasks other than federated supervised learning will be incorporated. At the same time, we aim to extend FEDHPO-B to include different FL settings, e.g., HPO for vertical FL [50]. Another issue the current version has not touched on is the risk of privacy leakage caused by HPO methods [23], for which we should provide related metrics and testbeds in the future.…”
(mentioning)
confidence: 99%