2021
DOI: 10.48550/arxiv.2107.03311
Preprint

RoFL: Robustness of Secure Federated Learning

Abstract: Federated Learning is an emerging decentralized machine learning paradigm that allows a large number of clients to train a joint model without the need to share their private data. Participants instead only share ephemeral updates necessary to train the model. To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation; clients encrypt their gradient updates, and only the aggregated model is revealed to the server. Achieving this level of data protection, however, …

Cited by 4 publications (5 citation statements)
References 34 publications

“…Communication efficiency is achieved by having users send small random vectors (shares) that mask their local models, but these initial studies [14], [15], [16], [17], [18], [19], [20], [21], [22], [23] do not consider Byzantine users who inject contaminated shares or contaminated aggregated shares. RoFL [30] introduces a mechanism for checking the final aggregation result using commitments to users' local models: the server aggregates all the users' commitments and compares the resulting commitment with the aggregate of all local models.…”
Section: Related Work
confidence: 99%
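
To make the check described in this excerpt concrete, here is a minimal Python sketch of additively homomorphic (Pedersen-style) commitment aggregation. The toy modulus, generators, and scalar updates are illustrative assumptions for exposition only, not RoFL's actual elliptic-curve vector commitments:

    import secrets

    P = 2**127 - 1   # Mersenne prime as a toy modulus (real systems use elliptic curves)
    G, H = 3, 5      # toy generators; in practice log_G(H) must be unknown

    def commit(value, blind):
        # Pedersen commitment C = G^value * H^blind (mod P); additively homomorphic.
        return pow(G, value % (P - 1), P) * pow(H, blind % (P - 1), P) % P

    updates = [5, -2, 7]                                  # clients' plaintext updates
    blinds = [secrets.randbelow(P - 1) for _ in updates]  # clients' blinding factors
    commitments = [commit(u, r) for u, r in zip(updates, blinds)]

    # Server multiplies all commitments -> a commitment to the sum of all updates.
    agg_commitment = 1
    for c in commitments:
        agg_commitment = agg_commitment * c % P

    # Once secure aggregation reveals the summed update (and the summed blind is
    # revealed or proven in zero knowledge), the server checks consistency.
    assert agg_commitment == commit(sum(updates), sum(blinds))
    print("aggregate is consistent with the clients' commitments")

Because the commitments combine homomorphically, the server never learns any individual update, yet a mismatch at the final comparison exposes tampering with the aggregate.
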
“…For example, in [102], differentially private noise is added to users' models to guarantee privacy during weighted averaging in aggregation. TEEs in [74], pseudo-random functions in [103], a MAC-like technique in [104], homomorphic hashes in [106], zero-knowledge proofs in [107], and a commitment scheme in [105] are deployed to guarantee that the server correctly aggregates the sum of the FL users' models. [Table residue: model quantization with TEEs [75], [76], [78]; model quantization [79]; a coding approach [77]; model sparsification [80], [81]; submodel aggregation [82], with overhead determined by the size of the submodels.] So far, we have reviewed masking-based aggregation protocols that protect users' model privacy and global model privacy.…”
Section: Masking-based Aggregation
confidence: 99%
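
The masking-based aggregation this excerpt surveys can be illustrated with a short Python sketch of pairwise masking, the canonical trick behind such protocols; the seeding scheme, client count, and array shapes are illustrative assumptions (real protocols derive pairwise seeds via key agreement such as Diffie-Hellman):

    import numpy as np

    n_clients, dim = 4, 6
    rng = np.random.default_rng(0)
    models = rng.normal(size=(n_clients, dim))   # clients' local model updates

    masked = models.copy()
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            # Shared pairwise seed (toy stand-in for a key-agreement-derived seed).
            mask = np.random.default_rng(seed=i * n_clients + j).normal(size=dim)
            masked[i] += mask   # client i adds the shared mask
            masked[j] -= mask   # client j subtracts it, so masks cancel in the sum

    # The server sees only `masked`: individual updates stay hidden,
    # yet the aggregate is exact because all pairwise masks cancel.
    assert np.allclose(masked.sum(axis=0), models.sum(axis=0))
    print(masked.sum(axis=0))

The verification techniques the excerpt lists (homomorphic hashes, commitments, zero-knowledge proofs) layer on top of this masking step to prove that the server's revealed sum really is the sum of the masked inputs.
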
“…Because this class of attacks is particularly disruptive, many mitigation techniques [13,16,49,85] have been proposed over the years. Burkhalter et al. [11] presented a systematic study to assess the robustness of FL, extending the secure aggregation technique proposed in [10]. Burkhalter et al. [11] integrated a variety of properties and constraints on model updates using zero-knowledge proofs, which is shown to improve FL's resilience against malicious participants who attempt to backdoor the learned model.…”
Section: Backdooring Federated Learning
confidence: 99%
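
As a rough illustration of the kind of constraint such zero-knowledge proofs enforce on model updates, the following Python sketch checks an L2 norm bound in the clear; the bound value and example vectors are hypothetical, and the actual system verifies this predicate over hidden (committed) updates rather than plaintext:

    import numpy as np

    L2_BOUND = 1.0   # hypothetical per-round norm bound chosen by the server

    def within_bound(update, bound=L2_BOUND):
        # The predicate a client would prove in zero knowledge about its update.
        return float(np.linalg.norm(update)) <= bound

    honest = np.full(100, 0.05)    # ||u||_2 = 0.5 -> proof verifies, update aggregated
    boosted = np.full(100, 0.5)    # ||u||_2 = 5.0 -> proof fails, update rejected

    print(within_bound(honest))    # True
    print(within_bound(boosted))   # False

Bounding update norms blunts backdoor attacks that rely on boosting a malicious update so it dominates the aggregate, without the server ever inspecting individual updates.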