2021
DOI: 10.48550/arXiv.2112.13939
Preprint

SPIDER: Searching Personalized Neural Architecture for Federated Learning

Abstract: Federated learning (FL) is an efficient learning framework that assists distributed machine learning when data cannot be shared with a centralized server due to privacy and regulatory restrictions. Recent advancements in FL use predefined architecture-based learning for all the clients. However, given that clients' data are invisible to the server and data distributions are non-identical across clients, a predefined architecture discovered in a centralized setting may not be an optimal solution for all the cli…

Cited by 3 publications (4 citation statements), published 2022–2024; References 20 publications
“…These methods still search for a global model and fail to achieve personalization. SPIDER (Mushtaq et al. 2021) adds a customized architecture search phase during the clients' local computation, but the overhead of transmitting the whole supernet and conducting NAS on the local clients is very large. FedRL-NAS (Yao et al. 2021) sends only the subspace to each client, but it still searches only for a global model and proceeds in three phases (warm-up, searching, and training), which is also inefficient.…”
Section: Federated Neural Architecture Search
confidence: 99%
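The overhead pattern this statement criticizes can be made concrete. Below is a minimal sketch, not SPIDER's actual code, of a federated NAS round in which the server ships the entire supernet to every client for local search; the helper names (payload_bytes, client_update), the toy supernet, and the client count are all illustrative assumptions:

```python
# Minimal sketch (assumed names, not SPIDER's code) of the communication
# pattern the quoted passage criticizes: every round, the server broadcasts
# the *entire* supernet to each client, which then trains/searches locally.
import copy
import torch
import torch.nn as nn

def payload_bytes(model: nn.Module) -> int:
    """Rough size of one model broadcast, in bytes."""
    return sum(p.numel() * p.element_size() for p in model.parameters())

# Stand-in supernet: in real federated NAS this holds *all* candidate ops,
# so it is much larger than any single discretized architecture.
supernet = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10))
print(f"per-client payload per round: {payload_bytes(supernet)} bytes")

def client_update(model, x, y, steps=1):
    """One client's local computation on its private data."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

# One round: broadcast the full supernet, clients train locally, then average.
states = []
for _ in range(4):  # 4 stand-in clients
    local = copy.deepcopy(supernet)  # models the full-supernet transmission
    x, y = torch.randn(8, 256), torch.randint(0, 10, (8,))  # stand-in local data
    states.append(client_update(local, x, y))
avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
supernet.load_state_dict(avg)
```

Even in this toy setting, the per-round payload scales with the full supernet rather than with any single discretized architecture, which is the cost the passage highlights.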
“…Federated Neural Architecture Search (FedNAS) (He, Annavaram, and Avestimehr 2020) was proposed to conduct neural architecture search (NAS) directly in the federated learning setting, with which each client can obtain a model with both a different structure and different weights. However, previous work either searches for a global neural architecture shared among all clients or among manually defined clusters of clients, rather than a personalized architecture for each client (Yao et al. 2021; Garg, Saha, and Dutta 2021; Laskaridis, Fernandez-Marques, and Dudziak 2022), or incrementally carries out a neural architecture search on each client (He, Annavaram, and Avestimehr 2020; Mushtaq et al. 2021). Simply replacing the fixed models of conventional personalized federated learning with a NAS supernet still provides no insight into personalizing models without manual settings.…”
Section: Introduction
confidence: 99%
“…In each training round, a device trains a different slice of the NN. Federated NAS techniques [35, 74, 102] have been proposed that use subsets of a shared common structure to allow for device personalization, aiming at better accuracy in non-IID cases and at efficient models for inference. An advantage over previous techniques is that each device's NN model can be independently optimized for its hardware; however, this comes at the cost of exploring the architecture search space, which can be resource-hungry.…”
Section: NN Architecture Heterogeneity Based On FedAvg
confidence: 99%
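As a rough illustration of the "different slice per round" idea (federated-dropout style), here is a minimal sketch; the slicing rule, the helper names (extract_slice, merge_slice), and the sizes are assumptions for illustration, not the cited papers' exact schemes:

```python
# Minimal sketch of "each device trains a different slice of the NN".
# The random-row slicing rule and helper names are illustrative assumptions.
import torch
import torch.nn as nn

full = nn.Linear(64, 64)  # stand-in for one layer of the global model

def extract_slice(layer: nn.Linear, keep: torch.Tensor) -> nn.Linear:
    """Build a smaller layer from a subset of the output units of `layer`."""
    sub = nn.Linear(layer.in_features, len(keep))
    with torch.no_grad():
        sub.weight.copy_(layer.weight[keep])
        sub.bias.copy_(layer.bias[keep])
    return sub

def merge_slice(layer: nn.Linear, sub: nn.Linear, keep: torch.Tensor) -> None:
    """Write the trained slice back into the corresponding rows of the full layer."""
    with torch.no_grad():
        layer.weight[keep] = sub.weight
        layer.bias[keep] = sub.bias

# Each round, a device receives a slice sized to its compute/memory budget.
keep = torch.randperm(64)[:32]  # this device trains half the output units
sub = extract_slice(full, keep)
x, y = torch.randn(8, 64), torch.randn(8, 32)  # stand-in local data
opt = torch.optim.SGD(sub.parameters(), lr=0.01)
loss = nn.functional.mse_loss(sub(x), y)
opt.zero_grad(); loss.backward(); opt.step()
merge_slice(full, sub, keep)  # server merges the updated rows back
```

Because each device only ever receives and trains its own slice, communication and local compute shrink with the slice size, at the price of each round updating only part of the full model.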
“…Furthermore, such optimization is simplistic, so the accuracy gap among sub-models must be bridged through more advanced training techniques. More recently, Mushtaq et al. [47] applied continuous differentiable relaxation and gradient descent, but this approach is significantly sensitive to hyperparameter choices.…”
Section: Related Work
confidence: 99%
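For reference, continuous differentiable relaxation in the DARTS sense replaces a discrete choice among candidate operations with a softmax-weighted mixture that can be trained by gradient descent. The sketch below illustrates the general technique only; the op set, dimensions, learning rates, and single-level update scheme are illustrative assumptions rather than the cited paper's setup:

```python
# Minimal sketch of continuous differentiable relaxation (DARTS-style).
# Op set, sizes, and the single-level update are illustrative assumptions.
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """Softmax-weighted mixture over candidate operations: the relaxation."""
    def __init__(self, dim=32):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Linear(dim, dim),                            # candidate: linear
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),  # candidate: linear+ReLU
            nn.Identity(),                                  # candidate: skip
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture params

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)  # continuous architecture weights
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

model = MixedOp()
# Separate optimizers for weights vs. architecture parameters; the quoted
# passage notes results are highly sensitive to such hyperparameter choices.
w_opt = torch.optim.SGD([p for n, p in model.named_parameters() if n != "alpha"], lr=0.01)
a_opt = torch.optim.Adam([model.alpha], lr=0.003)

x, y = torch.randn(16, 32), torch.randn(16, 32)  # stand-in data
for _ in range(10):
    # Alternate weight and architecture updates by plain gradient descent.
    loss = nn.functional.mse_loss(model(x), y)
    w_opt.zero_grad(); a_opt.zero_grad(); loss.backward()
    w_opt.step(); a_opt.step()

# Discretize: keep the highest-weighted candidate as the final architecture.
best = int(model.alpha.argmax())
print("selected op:", model.ops[best])
```

The final argmax step is where the relaxed, continuous search collapses back to a discrete architecture; how well that discrete model performs depends strongly on the learning rates and update schedule chosen above.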