Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/3319535.3339819
Quotient

Cited by 138 publications (34 citation statements)
References 24 publications
“…The MPC protocols for text classification with CNNs that we use in this paper are very similar to existing MPC protocols for image classification with CNNs (Agrawal et al, 2019; Dalskov et al, 2020; Juvekar et al, 2018; Kumar et al, 2020; Mishra et al, 2020). The main distinguishing aspect is that image classification is based on 2-dimensional (2D) CNNs, while for text classification it is common to use 1-dimensional (1D) CNNs, in which the filters move in only one direction.…”
Section: Introduction
confidence: 73%
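The 1D-vs-2D distinction drawn in this excerpt can be illustrated with a minimal plain-Python sketch (illustrative only; the cited works use full CNN frameworks): a 1D filter slides along a single axis of a sequence, while a 2D filter slides along both axes of an image.

```python
def conv1d(seq, kernel):
    """Valid cross-correlation of a 1D sequence with a 1D kernel:
    the filter moves in only one direction."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def conv2d(img, kernel):
    """Valid cross-correlation of a 2D image with a 2D kernel:
    the filter moves along both rows and columns."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

# A length-5 sequence with a length-2 difference filter yields 4 outputs:
print(conv1d([1, 2, 3, 4, 5], [1, -1]))  # -> [-1, -1, -1, -1]

# A 3x3 image with a 2x2 filter yields a 2x2 feature map:
print(conv2d([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[1, 0], [0, 1]]))  # -> [[6, 8], [12, 14]]
```

In an MPC setting both reduce to the same secure building blocks (secret-shared multiplications and additions); the 1D case simply evaluates fewer of them per filter position.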
“…Still, it uses a square function to approximate the nonlinear activation function, which results in lower accuracy. The work of Agrawal et al. 30 improves accuracy by introducing Boolean operations, but its use of the OT technique for multiplication increases the communication overhead. In this study, we use a new secret-sharing technique to reduce the communication overhead and the number of online rounds.…”
Section: Related Work
confidence: 99%
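The multiplication trade-off this excerpt alludes to can be made concrete with a sketch of the textbook secret-sharing alternative: two-party additive sharing with Beaver triples, where each secure multiplication costs one round of opening two masked values. This is the classic construction, not the specific low-communication scheme of the citing paper; the modulus and all names here are illustrative.

```python
import random

# Illustrative modulus for additive sharing (arbitrary choice).
P = 2**61 - 1

def share(x):
    """Split x into two additive shares modulo P."""
    r = random.randrange(P)
    return ((x - r) % P, r)

def reconstruct(s0, s1):
    """Recombine two additive shares."""
    return (s0 + s1) % P

def beaver_mul(xs, ys, triple_shares):
    """Multiply secret-shared x and y using a preprocessed triple c = a*b.

    The parties open d = x - a and e = y - b (one round of communication),
    then each computes its share of x*y locally."""
    (a0, b0, c0), (a1, b1, c1) = triple_shares
    x0, x1 = xs
    y0, y1 = ys
    d = reconstruct((x0 - a0) % P, (x1 - a1) % P)  # opened value d = x - a
    e = reconstruct((y0 - b0) % P, (y1 - b1) % P)  # opened value e = y - b
    z0 = (c0 + d * b0 + e * a0 + d * e) % P        # only one party adds d*e
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1

# Demo: a dealer samples a random triple, then the parties multiply 6 * 7.
a, b = random.randrange(P), random.randrange(P)
c = a * b % P
a_sh, b_sh, c_sh = share(a), share(b), share(c)
triple = ((a_sh[0], b_sh[0], c_sh[0]), (a_sh[1], b_sh[1], c_sh[1]))
z0, z1 = beaver_mul(share(6), share(7), triple)
print(reconstruct(z0, z1))  # -> 42
```

Correctness follows from c + (x-a)b + (y-b)a + (x-a)(y-b) = xy; OT-based multiplication avoids the preprocessed triple but pays more in online communication, which is the overhead the excerpt criticizes.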
“…Other PPML approaches are based on SMC [3,11,24,28,29], where users upload their encrypted or secret-shared data to one or more servers that then train ML models, or on trusted execution environments [18,30]. These approaches differ from PROV-FL, in which the dataset does not leave the user's system.…”
Section: Privacy-preserving Federated Learning
confidence: 99%
“…In view of these attacks, PPML protocols have been developed. Existing techniques for designing PPML protocols can be broadly classified into four categories: 1) secure multi-party computation, e.g., [3,11,13,28,29,33], 2) homomorphic encryption, e.g., [24,26,35], 3) differential privacy combined with homomorphic encryption or secure aggregation, e.g., [9,40,42], and 4) trusted execution environments (e.g., Intel SGX), e.g., [18,30]. In private training using secure multi-party computation, the training data is shared using a secret-sharing protocol among a small set of servers (e.g., 2, 3, or 4 servers) (e.g., [11,28,29]), the training is then conducted, and the resulting model is secret-shared among the participating servers.…”
Section: Introduction
confidence: 99%
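Category 1 in this excerpt rests on secret-sharing the training data among a small set of servers. A minimal sketch of additive sharing among n servers (illustrative modulus and names, not any cited protocol): any n-1 shares are uniformly random and reveal nothing about the record, while all n shares reconstruct it.

```python
import random

# Illustrative modulus for additive sharing (arbitrary choice).
P = 2**61 - 1

def share_among(x, n_servers):
    """Split x into n additive shares mod P; any n-1 shares reveal nothing."""
    shares = [random.randrange(P) for _ in range(n_servers - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine additive shares from all servers."""
    return sum(shares) % P

record = 123456
for n in (2, 3, 4):  # the 2-, 3-, and 4-server settings mentioned above
    shares = share_among(record, n)
    assert reconstruct(shares) == record
print("all reconstructions succeeded")
```

Training then proceeds over these shares, so no single server ever sees the plaintext data or the final model, matching the description in the excerpt.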