ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9415026

Multi-Tier Federated Learning for Vertically Partitioned Data

Abstract: We consider decentralized model training in tiered communication networks. Our network model consists of a set of silos, each holding a vertical partition of the data. Each silo contains a hub and a set of clients, with the silo's vertical data shard partitioned horizontally across its clients. We propose Tiered Decentralized Coordinate Descent (TDCD), a communication-efficient decentralized training algorithm for such two-tiered networks. To reduce communication overhead, the clients in each silo perform mult…
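The two-tier data layout the abstract describes can be illustrated with a small sketch (all array sizes and variable names here are illustrative, not from the paper): each silo holds a disjoint subset of the feature columns, and within a silo that feature shard is split by sample rows across the silo's clients.

```python
import numpy as np

# Toy dataset: 8 samples, 6 features (sizes are illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))

# Vertical partition: each of 2 silos holds a disjoint block of features.
silo_features = [slice(0, 3), slice(3, 6)]
silo_shards = [X[:, cols] for cols in silo_features]

# Horizontal partition within each silo: the silo's feature shard is
# split by sample across its clients (2 clients per silo here).
client_rows = [slice(0, 4), slice(4, 8)]
silo_clients = [[shard[rows, :] for rows in client_rows]
                for shard in silo_shards]

# Silo 0, client 1 holds samples 4..7 of features 0..2.
print(silo_clients[0][1].shape)  # (4, 3)
```

Under this layout, no single client sees a full sample, which is why the silos must exchange intermediate information rather than raw data during training.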

Cited by 10 publications (6 citation statements)
References 15 publications
“…There are some methods that applied Split Learning to a VFDL scenario based on an active/passive party architecture [12,13]. In addition to vertical federated learning scenarios, Split Learning can also be applied to horizontal federated learning scenarios [16,27] or hybrid architectures [28,29].…”
Section: Training Loop Parallelization Approach
confidence: 99%
“…Hence, each client is able to train a local model on these samples. In contrast, in a vertically partitioned dataset [DP21], each client may hold only some of the features of each training sample, with the remaining features held by other FL clients. In this FL type, the clients cannot locally train a model without collecting the missing information for each sample from other clients.…”
Section: Federated Learning and Inference Attacks
confidence: 99%
“…A client uses the same subset of sample IDs for all iterations. During local training, when a client executes (5), it recalculates its own local intermediate information Φ. Remark 2. We assume that every client has the labels for all of its samples.…”
Section: Proposed Algorithm
confidence: 99%
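The citation above refers to parties recomputing local intermediate information Φ between exchanges. A minimal sketch of that pattern for a two-party linear model with squared loss (the symbol Φ follows the excerpt; the loss, step size, and variable names are assumptions, not the paper's exact algorithm):

```python
import numpy as np

# Two parties, each holding a disjoint block of features for the same samples.
rng = np.random.default_rng(1)
n, d1, d2 = 8, 3, 2
X1 = rng.standard_normal((n, d1))
X2 = rng.standard_normal((n, d2))
y = rng.standard_normal(n)
w1, w2 = np.zeros(d1), np.zeros(d2)

lr = 0.1
for _ in range(50):
    # Each party shares its intermediate information Phi_k = X_k @ w_k;
    # after a local update, it would recompute Phi_k before the next exchange.
    phi1, phi2 = X1 @ w1, X2 @ w2
    resid = phi1 + phi2 - y  # squared-loss residual on the shared samples
    # Each party updates only its own coordinate block.
    w1 -= lr * X1.T @ resid / n
    w2 -= lr * X2.T @ resid / n

final_loss = np.mean((X1 @ w1 + X2 @ w2 - y) ** 2)
```

The exchange of Φ rather than raw features is what lets each party update its own block of coordinates without ever seeing the other party's data.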
“…This work is supported by the Rensselaer-IBM AI Research Collaboration (http://airc.rpi.edu), part of the IBM AI Horizons Network (http://ibm.biz/AIHorizons), and by the National Science Foundation under grants CNS 1553340 and CNS 1816307. A preliminary conference version of this work has been published in [5].…”
Section: Acknowledgments
confidence: 99%