2020
DOI: 10.2478/popets-2020-0036
FLASH: Fast and Robust Framework for Privacy-preserving Machine Learning

Abstract: Privacy-preserving machine learning (PPML) via Secure Multi-party Computation (MPC) has gained momentum in the recent past. Assuming a minimal network of pair-wise private channels, we propose FLASH, an efficient four-party PPML framework over rings ℤ2ℓ, the first of its kind among PPML frameworks to achieve the strongest security notion of Guaranteed Output Delivery (all parties obtain the output irrespective of the adversary's behaviour). State-of-the-art ML frameworks such as ABY3 by Mohassel …
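The ring setting ℤ2ℓ referenced in the abstract is what lets MPC frameworks like FLASH work with native machine words. As a minimal sketch (not FLASH's actual sharing scheme, which uses a more involved four-party replicated sharing), additive secret sharing over ℤ_{2^64} looks like this; the function names and the choice of ℓ = 64 are illustrative:

```python
import secrets

MASK = (1 << 64) - 1  # reduce modulo 2^64, i.e. arithmetic in the ring Z_{2^ell} with ell = 64

def share(x, n=4):
    """Split x into n additive shares whose sum is x mod 2^64."""
    shares = [secrets.randbelow(1 << 64) for _ in range(n - 1)]
    shares.append((x - sum(shares)) & MASK)
    return shares

def reconstruct(shares):
    """Recombine shares by summing in the ring."""
    return sum(shares) & MASK

# Addition of shared secrets is local: each party adds its shares component-wise.
a_shares = share(10)
b_shares = share(7)
c_shares = [(a + b) & MASK for a, b in zip(a_shares, b_shares)]
# reconstruct(c_shares) → 17
```

Because 2^64 matches the machine word size, the modular reductions are essentially free on real hardware, which is the efficiency argument for ring-based (rather than field-based) PPML protocols.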

Cited by 80 publications (104 citation statements)
References 41 publications
“…Trident achieves the same result in a 4PC model with further performance improvements. FLASH [17] also proposes a 4PC model that achieves malicious security with guaranteed output delivery. QuantizedNN [14] proposes an efficient PPML framework using the quantization scheme of Jacob et al [54] and provides protocols in all combinations of semi-honest/malicious security and honest majority vs dishonest majority corruptions.…”
Section: Private
confidence: 99%
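The quantization scheme of Jacob et al cited in the snippet above is the standard affine (asymmetric) map from reals to 8-bit integers. A minimal sketch, with illustrative parameter values (the scale and zero point would normally be calibrated from the data's observed range):

```python
def quantize(x, scale, zero_point):
    """Affine quantization: real x maps to uint8 q with x ≈ scale * (q - zero_point)."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the uint8 range

def dequantize(q, scale, zero_point):
    """Approximate inverse of quantize."""
    return scale * (q - zero_point)

# Illustrative parameters: scale 0.1, zero point 128 (represents real 0.0).
q = quantize(1.5, 0.1, 128)
x = dequantize(q, 0.1, 128)  # recovers roughly 1.5, up to quantization error
```

Working over small integers rather than floats is what makes this scheme attractive for MPC-based PPML: integer arithmetic maps directly onto the secret-shared ring operations the protocols provide.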
“…A number of works that enable privacy-preserving distributed learning of NNs employ MPC approaches where the parties' confidential data is distributed among two [83], [12], three [82], [110], [111], [52], [28], or four servers [26], [27] (2PC, 3PC, and 4PC, resp.). For instance, in the 2PC setting, Mohassel and Zhang describe a system where data owners process and secret-share their data among two non-colluding servers to train various ML models [83], and Agrawal et al propose a framework that supports discretized training of NNs by ternarizing the weights [12].…”
Section: Related Work
confidence: 99%
“…Wagh et al further improve the efficiency of privacy-preserving NN training on secret-shared data [110] and provide security against malicious adversaries, assuming an honest majority among 3 servers [111]. More recently, 4PC honest-majority malicious frameworks for PPML have been proposed [26], [27]. These works split the trust between more servers and achieve better round complexities than previous ones, yet they do not address NN training among N-parties.…”
Section: Related Work
confidence: 99%