Proceedings of the 2021 ACM Symposium on Principles of Distributed Computing
DOI: 10.1145/3465084.3467919

Differential Privacy and Byzantine Resilience in SGD

Abstract: This paper addresses the problem of combining Byzantine resilience with privacy in machine learning (ML). Specifically, we study if a distributed implementation of the renowned Stochastic Gradient Descent (SGD) learning algorithm is feasible with both differential privacy (DP) and (α, f)-Byzantine resilience. To the best of our knowledge, this is the first work to tackle this problem from a theoretical point of view. A key finding of our analyses is that the classical approaches to these two (seemingly) orthogonal…
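To picture the combination the abstract describes, here is a minimal sketch of one distributed SGD step in which each worker privatizes its stochastic gradient with the Gaussian mechanism (clip, then add noise) before the server applies a classical Byzantine-resilient aggregation rule, here the coordinate-wise median. The function names, parameters, and choice of aggregator are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def dp_noisy_gradient(grad, clip_c, sigma, rng):
    """Gaussian mechanism: clip the gradient to L2 norm clip_c, then add
    isotropic Gaussian noise with scale sigma * clip_c (the clipped
    sensitivity)."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_c / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_c, size=grad.shape)

def cw_median(grads):
    """Coordinate-wise median: a classical Byzantine-resilient aggregation
    rule, robust to a minority of arbitrary (adversarial) gradients."""
    return np.median(np.stack(grads), axis=0)

def sgd_step(params, worker_grads, lr, clip_c, sigma, rng):
    """One server step: DP noise is added locally by each worker; the
    server then aggregates the noisy gradients robustly."""
    noisy = [dp_noisy_gradient(g, clip_c, sigma, rng) for g in worker_grads]
    return params - lr * cw_median(noisy)

# Usage: rng = np.random.default_rng(0); params = sgd_step(params, grads, 0.1, 1.0, 0.5, rng)
```

The tension the paper studies is visible even in this sketch: the DP noise injected per worker inflates the dispersion of the gradients that the robust aggregator must then distinguish from Byzantine outliers.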

Cited by 17 publications (5 citation statements) · References 34 publications
“…This generalization is critical to quantifying the interplay between DP and BR. Importantly, while Guerraoui et al. [18] only give an elementary analysis explaining the difficulty of the problem, we show that a careful analysis can help combine DP and BR.…”
Section: A. Related Work
confidence: 71%
“…However, previous approaches do not apply to our setting for two main reasons: (1) they do not address the privacy of the dataset against an honest-but-curious server, and (2) their underlying notions of robustness are either weaker than or orthogonal to BR. Furthermore, recent works on the combination of privacy and BR in distributed learning either study a weaker privacy model than DP or provide only elementary analyses [9,18,19,29]. We refer the interested reader to Appendix A for an in-depth discussion of prior works.…”
Section: Closely Related Prior Work
confidence: 99%
“…GeoMed [32] updates the FL model by aggregating gradients via their geometric median. Bulyan [33] first applies Krum-based selection and then updates the global model by averaging the gradients closest to the median. Meanwhile, based on the cosine similarity between the parties' historical gradients, FoolsGold [31] adjusts the parties' weights to ward off sybil attacks in FL.…”
Section: Byzantine-Robust Federated Learning
confidence: 99%
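As a concrete illustration of the geometric-median aggregation that GeoMed builds on, below is a minimal sketch using the standard Weiszfeld iteration; the function name and parameters are illustrative, not taken from [32]. The geometric median is the point minimizing the sum of Euclidean distances to all worker gradients, which is what makes it robust to a minority of outliers.

```python
import numpy as np

def geometric_median(grads, iters=100, eps=1e-8):
    """Weiszfeld iteration: approximates the geometric median of the
    worker gradients, i.e. argmin_z sum_i ||x_i - z||_2."""
    pts = np.stack(grads)
    z = pts.mean(axis=0)                      # initialize at the mean
    for _ in range(iters):
        d = np.linalg.norm(pts - z, axis=1)
        w = 1.0 / np.maximum(d, eps)          # inverse-distance weights
        z_new = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:   # converged
            break
        z = z_new
    return z
```

Krum and Bulyan differ in that they select among the submitted gradients using pairwise distances rather than computing a new point, but all three rules share the goal of bounding the influence of any minority of Byzantine workers.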
“…Only a handful of works have addressed the interplay between DP and robustness in distributed ML. It was conjectured that ensuring both requirements is impractical, in the sense that it would require the batch size to grow with the model dimension [34]. However, the underlying analysis relied upon the criterion of (α, f)-Byzantine resilience [12], which has recently been shown to be a restrictive sufficient condition [42].…”
Section: Prior Work
confidence: 99%
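For context, the (α, f)-Byzantine resilience criterion this statement refers to originates with Blanchard et al. [12]. The following is a restatement rather than a quotation from the cited work, and the moment condition is paraphrased:

```latex
% (\alpha, f)-Byzantine resilience, restated (after Blanchard et al. [12]).
% V_1, ..., V_{n-f}: i.i.d. honest gradient estimates with E[V_i] = g;
% B_1, ..., B_f: arbitrary (possibly adversarial) vectors.
Let $0 \le \alpha < \pi/2$ and $0 \le f \le n$. An aggregation rule $F$ is
\emph{$(\alpha, f)$-Byzantine resilient} if its output
$F(V_1, \ldots, V_{n-f}, B_1, \ldots, B_f)$ satisfies
\[
  \big\langle \mathbb{E}[F],\, g \big\rangle \;\ge\; (1 - \sin\alpha)\,\|g\|^2 \;>\; 0,
\]
and, for $r \in \{2, 3, 4\}$, the moment $\mathbb{E}\|F\|^r$ is bounded by a
linear combination of products of honest-worker moments
$\mathbb{E}\|V\|^{r_1} \cdots \mathbb{E}\|V\|^{r_k}$ with $r_1 + \cdots + r_k = r$.
```

Intuitively, the first condition forces the aggregate to point within angle α of the true gradient g; the quoted statement's point is that this is a sufficient but restrictive condition for convergence, which [42] relaxes.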