“…It is interesting future work to construct concretely efficient FSS-based MPC protocols with optimal online communication and rounds for multiple parties (i.e., n ≥ 3). While prior work uses a uniform bitwidth for the whole ML inference, the recent work by Rathee et al. [27] proposed the mixed-bitwidth approach, i.e., operating in low bitwidths and moving to high bitwidths only when necessary. They designed new protocols for switching between bitwidths and for operating on values of differing bitwidths.…”
Section: MPC Application to Machine Learning (mentioning, confidence: 99%)
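The mixed-bitwidth idea above hinges on switching a secret-shared value from a small ring to a larger one. A small plaintext simulation — all names and parameters illustrative, not any paper's actual protocol — shows why this needs a dedicated protocol: naively reusing the shares in the larger ring leaves an error equal to the carry ("wrap") of the share addition, which is exactly what bitwidth-switching protocols must compute and cancel securely.

```python
# Toy simulation of two-party additive secret sharing over Z_{2^n}, showing
# why shares cannot simply be reinterpreted in the larger ring Z_{2^m}:
# reconstruction picks up an error of wrap * 2^n, where "wrap" is the carry
# bit of the share addition.
import random

def share(x, n):
    """Additively share x in Z_{2^n}: x = (s0 + s1) mod 2^n."""
    s0 = random.randrange(2 ** n)
    s1 = (x - s0) % (2 ** n)
    return s0, s1

n, m = 8, 16            # small and large bitwidths (illustrative choices)
x = 200                 # secret value, 0 <= x < 2^n
s0, s1 = share(x, n)

assert (s0 + s1) % (2 ** n) == x          # reconstruction in Z_{2^n} works

# Naive "extension": treat the same shares as elements of Z_{2^m}.
naive = (s0 + s1) % (2 ** m)
wrap = int(s0 + s1 >= 2 ** n)             # carry bit of the share addition
assert naive == x + (wrap << n)           # off by wrap * 2^n whenever wrap = 1
```

Securely computing the wrap bit without revealing the shares is the core cost of share extension, which is why such conversions are treated as first-class protocols in the mixed-bitwidth line of work.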
“…Their approach is interesting and achieves better efficiency. While the work [27] considers only private ML inference in the two-party setting, it is worth extending the mixed-bitwidth approach to private ML training and to the multi-party setting.…”
Section: MPC Application to Machine Learning (mentioning, confidence: 99%)
“…We refer the reader to [16,17] for more MPC libraries and a comparison of known libraries. Many applications can be built upon MPC to protect the privacy of data, including machine learning (see, e.g., [18-31] and references therein), federated learning (see, e.g., [32-36]), data mining [37-40], auctions [41-43], genomic analysis [44-46], securing databases (see [47] and references therein), and blockchain [48-53]. In addition, some techniques underlying MPC protocols can also be used to construct non-interactive zero-knowledge (ZK) proofs [54-61] based on the MPC-in-the-head paradigm, scalable ZK proofs…”
Secure multi-party computation (MPC) allows a set of parties to jointly compute a function on their private inputs while revealing nothing but the output of the function. In the last decade, MPC has rapidly moved from a purely theoretical study to an object of practical interest, with growing attention to applications such as privacy-preserving machine learning (PPML). In this paper, we comprehensively survey existing work on concretely efficient MPC protocols with both semi-honest and malicious security, in both the dishonest-majority and honest-majority settings. We focus on the notion of security with abort, meaning that corrupted parties may prevent honest parties from receiving the output after the corrupted parties have received it themselves. We present the high-level ideas of the basic and key approaches for designing different styles of MPC protocols, as well as the crucial building blocks of MPC. For MPC applications, we compare the known PPML protocols built on MPC and describe the efficiency of private inference and training for the state-of-the-art PPML protocols. Furthermore, we summarize several challenges and open problems for breaking through the efficiency limits of MPC protocols, as well as some interesting future work worth addressing. This survey aims to present the recent developments and key approaches of MPC to researchers interested in understanding, improving, and applying concretely efficient MPC protocols.
“…Bakshi and Last propose CryptoRNN; it employs HE but requires interaction between the client and the server to refresh the ciphertexts [12]. Rathee et al. propose SiRNN, which relies on novel two-party computation (2PC) protocols and lookup tables, combined with an iterative algorithm, to approximate non-linear math functions [105]. Feng et al. tackle privacy-preserving natural language processing [38] by relying on MPC.…”
Section: Privacy-Preserving Prediction on Machine Learning Models (mentioning)
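SiRNN's actual secure protocols are considerably more involved, but the plaintext skeleton of the "lookup table plus iterative algorithm" recipe for a math function such as the reciprocal square root can be sketched as follows. The table size, input range, and iteration count here are illustrative choices, not SiRNN's parameters.

```python
# Plaintext sketch of approximating 1/sqrt(x): a small lookup table supplies
# an initial guess, and Newton-Raphson iterations refine it. Secure variants
# evaluate both steps under 2PC; this code only shows the numerical idea.
import math

TABLE_BITS = 4
# Seed table: one initial guess per bucket of [1, 2), taken at the midpoint.
SEED = [1.0 / math.sqrt(1.0 + (i + 0.5) / 2 ** TABLE_BITS)
        for i in range(2 ** TABLE_BITS)]

def rsqrt(x, iters=2):
    """Approximate 1/sqrt(x) for x in [1, 2) via a LUT seed plus Newton
    iterations y <- y * (1.5 - 0.5 * x * y * y)."""
    idx = min(int((x - 1.0) * 2 ** TABLE_BITS), 2 ** TABLE_BITS - 1)
    y = SEED[idx]
    for _ in range(iters):
        y = y * (1.5 - 0.5 * x * y * y)
    return y

# Two iterations already push the relative error well below 1e-5 on [1, 2).
```

Because Newton's iteration converges quadratically, a tiny table and one or two multiplications-per-iteration suffice, which is what makes this recipe attractive when every secure multiplication is expensive.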
We present RHODE, a novel system that enables privacy-preserving training of and prediction on Recurrent Neural Networks (RNNs) in a federated learning setting by relying on multiparty homomorphic encryption (MHE). RHODE preserves the confidentiality of the training data, the model, and the prediction data; and it mitigates the federated learning attacks that target the gradients under a passive-adversary threat model. We propose a novel packing scheme, multi-dimensional packing, for a better utilization of Single Instruction, Multiple Data (SIMD) operations under encryption. With multi-dimensional packing, RHODE enables the efficient processing, in parallel, of a batch of samples. To avoid the exploding gradients problem, we also provide several clip-by-value approximations that enable gradient clipping under encryption. We experimentally show that the model performance with RHODE remains similar to non-secure solutions both for homogeneous and heterogeneous data distribution among the data holders. Our experimental evaluation shows that RHODE scales linearly with the number of data holders and the number of timesteps, and sub-linearly and sub-quadratically with the number of features and the number of hidden units of RNNs, respectively. To the best of our knowledge, RHODE is the first system that provides the building blocks for the training of RNNs and their variants under encryption in a federated learning setting.
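The abstract above does not spell out how clip-by-value is approximated under encryption, but a standard starting point — shown here as an illustrative sketch, not RHODE's actual construction — is the identity clip(x, -c, c) = (|x + c| - |x - c|) / 2, which reduces clipping to the absolute-value function; under approximate HE schemes, |t| is in turn replaced by a smooth surrogate such as sqrt(t² + eps), evaluable with polynomial arithmetic.

```python
# Plaintext check of an identity that reduces clip-by-value to abs():
#   clip(x, -c, c) = (|x + c| - |x - c|) / 2
# and of a smooth stand-in for abs() of the kind usable under encryption.
# The smooth_abs choice below is illustrative, not RHODE's method.
import math

def clip_exact(x, c):
    return max(-c, min(c, x))

def clip_via_abs(x, c):
    return (abs(x + c) - abs(x - c)) / 2.0

def clip_smooth(x, c, eps=1e-6):
    smooth_abs = lambda t: math.sqrt(t * t + eps)   # smooth |t| surrogate
    return (smooth_abs(x + c) - smooth_abs(x - c)) / 2.0

for x in [-3.0, -0.5, 0.0, 0.7, 2.5]:
    assert abs(clip_via_abs(x, 1.0) - clip_exact(x, 1.0)) < 1e-12
    assert abs(clip_smooth(x, 1.0) - clip_exact(x, 1.0)) < 1e-3
```

The appeal of such identities is that the only non-polynomial piece left is a single univariate function, which keeps the multiplicative depth of the encrypted circuit low.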
“…Further, inference algorithms that contain mathematical functions such as sigmoid and tanh are more expensive to compute securely; e.g., running a recurrent neural network (RNN) [26] on the standard Google-30 dataset [40] to identify commands, directions, and digits from speech costs 415 MB, ≈ 60,000 rounds, and ≈ 37 seconds [30].…”
Secure machine learning (ML) inference can provide meaningful privacy guarantees to both the client (holding sensitive input) and the server (holding sensitive weights of the ML model) while realizing inference-as-a-service. Although many specialized protocols exist for this task, including those in the preprocessing model (where a majority of the overheads are moved to an input-independent offline phase), they all still suffer from large online complexity. Specifically, the protocol phase that executes once the parties know their inputs has high communication, round complexity, and latency. Function Secret Sharing (FSS) based techniques offer an attractive solution to this in the trusted dealer model (where a dealer provides input-independent correlated randomness to both parties), and 2PC protocols obtained based on these techniques have a very lightweight online phase. Unfortunately, current FSS-based 2PC works (AriaNN, PoPETS 2022; Boyle et al. Eurocrypt 2021; Boyle et al. TCC 2019) fall short of providing a complete solution to secure inference. First, they lack support for math functions (e.g., sigmoid and reciprocal square root) and hence are insufficient for a large class of inference algorithms (e.g., recurrent neural networks). Second, they restrict all values in the computation to be of the same bitwidth, and this prevents them from benefiting from efficient float-to-fixed converters such as Tensorflow Lite that crucially use low-bitwidth representations and mixed-bitwidth arithmetic. In this work, we present LLAMA, an end-to-end, FSS-based, secure inference library supporting precise low-bitwidth computations (required by converters) as well as provably precise math functions, thus overcoming all the drawbacks listed above.
We perform an extensive evaluation of LLAMA and show that when compared with non-FSS based libraries supporting mixed bitwidth arithmetic and math functions (SIRNN, IEEE S&P 2021), it has at least an order of magnitude lower communication, rounds, and runtimes. We integrate LLAMA with the EzPC framework (IEEE EuroS&P 2019) and demonstrate its robustness by evaluating it on large benchmarks (such as ResNet-50 on the ImageNet dataset) as well as on benchmarks considered in AriaNN – here too LLAMA outperforms prior work.
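The "float-to-fixed converters" mentioned above typically use affine quantization: a real value x is represented by a small integer q with x ≈ scale · (q − zero_point). The sketch below — with illustrative values, not any particular model's parameters — shows why inference on such representations inherently needs mixed-bitwidth arithmetic.

```python
# Sketch of TensorFlow-Lite-style affine quantization to signed 8-bit
# integers. Products of two int8 values need a wider accumulator, which is
# exactly the mixed-bitwidth arithmetic that secure protocols must support.
def quantize(x, scale, zero_point, bits=8):
    """Map a real x to a clamped signed integer of the given bitwidth."""
    q = round(x / scale) + zero_point
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return max(lo, min(hi, q))

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

scale, zp = 0.05, 10          # illustrative quantization parameters
x = 1.23
q = quantize(x, scale, zp)    # fits in 8 bits
# Round-trip error is at most half a quantization step.
assert abs(dequantize(q, scale, zp) - x) <= scale / 2
```

An int8-by-int8 product can need up to 16 bits, and summing many such products in a dot product needs 32, so quantized inference constantly moves between bitwidths — precisely the capability LLAMA adds on top of earlier FSS-based libraries.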