2022
DOI: 10.48550/arxiv.2203.17044
Preprint

Efficient Dropout-resilient Aggregation for Privacy-preserving Machine Learning

Ziyao Liu,
Jiale Guo,
Kwok-Yan Lam
et al.

Abstract: Machine learning (ML) has been widely recognized as an enabler of the global trend of digital transformation. With the increasing adoption of data-hungry machine learning algorithms, personal data privacy has emerged as one of the key concerns that could hinder the success of digital transformation. As such, Privacy-Preserving Machine Learning (PPML) has received much attention from the machine learning community, from academic researchers to industry practitioners to government regulators. However, organization…


Cited by 3 publications (4 citation statements)
References 41 publications
“…where in (59), we replace each key variable Z_k by the groupwise keys (refer to (10)) and in (60) and (63), we apply the independence of the groupwise keys (refer to (9)).…”
Section: Converse
confidence: 99%
“…The need to securely perform distributed computation tasks has thus increased tremendously. This work is particularly motivated by the secure aggregation problem [1][2][3][4][5][6][7][8][9][10], which arises recently in federated learning and the core is to securely compute the sum of the inputs available at a number of users without revealing any additional information to a server. While secure aggregation is usually involved with more practical elements that are crucial for machine learning applications, such as user dropouts, peer-to-peer communication among the users etc., in this work we focus on an elemental information theoretic model that is possibly the simplest while capturing the core of secure sum computation (referred to as secure summation), and wish to understand its fundamental limits on communication and randomness cost.…”
Section: Introduction
confidence: 99%
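The secure summation problem described in the citation context above — a server learns the sum of the users' inputs and nothing more — is commonly realized with pairwise additive masking. The sketch below is a minimal, hypothetical illustration of that idea, not the protocol of the cited paper: masks are sampled centrally for simplicity (a real deployment would derive each pairwise mask via key agreement and a PRG, and would handle user dropouts), and all names are illustrative.

```python
import random

def pairwise_masks(num_users, modulus, seed=0):
    # Hypothetical helper: sample one shared mask per user pair (i, j), i < j.
    # In practice each pair derives this mask locally from an agreed key.
    rng = random.Random(seed)
    return {(i, j): rng.randrange(modulus)
            for i in range(num_users)
            for j in range(i + 1, num_users)}

def mask_input(i, x_i, masks, num_users, modulus):
    # User i adds the masks shared with higher-indexed users and subtracts
    # those shared with lower-indexed users; each mask appears once with a
    # plus sign and once with a minus sign, so all masks cancel in the sum.
    y = x_i
    for j in range(num_users):
        if j > i:
            y = (y + masks[(i, j)]) % modulus
        elif j < i:
            y = (y - masks[(j, i)]) % modulus
    return y

def aggregate(masked, modulus):
    # The server sees only masked values; their sum is the true input sum.
    return sum(masked) % modulus

MOD = 2 ** 16
inputs = [3, 14, 15, 9]
masks = pairwise_masks(len(inputs), MOD, seed=42)
masked = [mask_input(i, x, masks, len(inputs), MOD)
          for i, x in enumerate(inputs)]
assert aggregate(masked, MOD) == sum(inputs) % MOD
```

Each individual masked value is uniformly distributed modulo the chosen modulus, so the server learns nothing beyond the aggregate; the communication and randomness cost of such schemes is exactly what the quoted work studies.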
“…only learn users in federated learning [22,15,98,62,121,116,59,108,91,77]. With user selection, the server may select an arbitrary subset of the K users and securely compute their input sum.…”
Section: MDS Variable Generation
confidence: 99%
“…In this chapter, we study a basic model of secure summation, particularly motivated by the secure aggregation problem [22,15,98,62,121,116,59,108,91,77], which arises recently in federated learning and the core is to securely compute the sum of the inputs available at a number of users without revealing any additional information to a server. We wish to understand its fundamental limits on communication and randomness cost.…”
Section: Introduction
confidence: 99%