2020
DOI: 10.48550/arxiv.2007.12557
Preprint

MPC-enabled Privacy-Preserving Neural Network Training against Malicious Attack

Abstract: In the past decades, the application of secure multiparty computation (MPC) to machine learning, especially privacy-preserving neural network training, has attracted tremendous attention from both academia and industry. MPC enables several data owners to jointly train a neural network while preserving their data privacy. However, most previous works focus on the semi-honest threat model, which cannot withstand fraudulent messages sent by malicious participants. In this work, we propose a construction of efficient n…
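The abstract is truncated at the source. As background for the citing statements below, here is a minimal sketch of additive secret sharing, the basic building block behind MPC-based training; the prime modulus, function names, and three-party setup are illustrative assumptions for exposition, not details from the paper.

    # Illustrative additive secret sharing over a prime field (a sketch,
    # not the paper's protocol). A private value is split into n random
    # shares that sum to the value mod P; any n-1 shares reveal nothing.
    import secrets

    P = 2**61 - 1  # illustrative prime modulus

    def share(x, n):
        shares = [secrets.randbelow(P) for _ in range(n - 1)]
        shares.append((x - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        return sum(shares) % P

    # Shared values can be added without reconstruction: parties add
    # their local shares, so neither private input is ever revealed.
    a_shares, b_shares = share(42, 3), share(100, 3)
    sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
    assert reconstruct(sum_shares) == 142

Malicious security, the paper's focus, additionally requires checking that parties do not deviate from this arithmetic, for example by attaching information-theoretic MACs to every share.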

Cited by 5 publications (7 citation statements)
References 42 publications

“…The differences are tiny. Improving model accuracy in MPC is itself an interesting topic [15,56]. We plan to incorporate techniques from the literature as our future work.…”
Section: Discussion (mentioning; confidence: 99%)
“…On the other hand, a pure MPC-based aggregation scheme is not suitable for large-scale PPML due to the huge communication overheads of evaluating complex functions such as a deep neural network (DNN), and the problem is further exacerbated by the fact that many PPML clients are resource-constrained mobile devices. For example, training a simple CNN for one epoch requires about 7 hours in the WAN setting [21]. Server-aided MPC-based schemes such as [22], [23] achieve good efficiency by relying on a set of non-colluding servers.…”
(mentioning; confidence: 99%)
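To make the server-aided idea in the statement above concrete, here is a hedged sketch of MPC-based aggregation with two non-colluding servers. It illustrates the general pattern only, not the specific protocols of [22] or [23]; all names and parameters are assumptions.

    # Sketch: each client splits its integer-encoded update into two
    # additive shares, one per non-colluding server. Each server sums
    # the shares it holds; only the aggregate update is reconstructed.
    import secrets

    P = 2**61 - 1  # illustrative prime modulus

    def share_update(update):
        s0 = [secrets.randbelow(P) for _ in update]
        s1 = [(u - r) % P for u, r in zip(update, s0)]
        return s0, s1

    def server_sum(shares):
        # Coordinate-wise sum of all share vectors held by one server.
        return [sum(col) % P for col in zip(*shares)]

    updates = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # toy client updates
    to_s0, to_s1 = zip(*(share_update(u) for u in updates))
    aggregate = [(a + b) % P
                 for a, b in zip(server_sum(to_s0), server_sum(to_s1))]
    assert aggregate == [12, 15, 18]

Each client sends only two share vectors, independent of how many other clients participate, which is what makes the server-aided setting attractive for resource-constrained devices.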
“…Protecting the privacy of global models requires clients to train their ML models over encrypted global models. Existing solutions include HE-based schemes [91,108] and MPC-based schemes [94,109]. However, those solutions may involve large overheads for large-scale ML models such as deep neural networks.…”
Section: Further Discussion (mentioning; confidence: 99%)
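For the HE-based direction mentioned in the statement above, the toy Paillier example below shows why additively homomorphic encryption suffices for aggregating model weights: multiplying ciphertexts adds the underlying plaintexts. The parameters are deliberately tiny and insecure; this is a sketch of the primitive, not any scheme from [91,108].

    # Toy Paillier encryption (insecure parameters, illustration only).
    # An aggregator can combine encrypted weights without decrypting them.
    import secrets
    from math import gcd

    p, q = 293, 433                # toy primes; real use needs ~1024-bit primes
    n, n2 = p * q, (p * q) ** 2
    g = n + 1                      # standard Paillier generator choice
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)

    def L(u):
        return (u - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse, Python 3.8+

    def encrypt(m):
        r = secrets.randbelow(n - 1) + 1
        while gcd(r, n) != 1:      # r must be a unit mod n
            r = secrets.randbelow(n - 1) + 1
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    c1, c2 = encrypt(17), encrypt(25)
    assert decrypt((c1 * c2) % n2) == 42  # ciphertext product = plaintext sum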
“…On the other hand, a pure MPC-based aggregation scheme is not suitable for large-scale PPML due to the huge communication overheads of evaluating complex functions such as a deep neural network (DNN), and the problem is further exacerbated by the fact that many PPML clients are resource-constrained mobile devices. For example, training a simple CNN for one epoch requires about 7 hours in the WAN setting [94]. Server-aided MPC-based schemes such as [56,57] achieve good efficiency by relying on a set of non-colluding servers.…”
Section: Introduction (mentioning; confidence: 99%)