Proceedings of the 2021 Network and Distributed System Security Symposium (NDSS 2021)
DOI: 10.14722/ndss.2021.24351
GALA: Greedy ComputAtion for Linear Algebra in Privacy-Preserved Neural Networks

Abstract: Machine Learning as a Service (MLaaS) is enabling a wide range of smart applications on end devices. However, privacy still remains a fundamental challenge. The schemes that exploit Homomorphic Encryption (HE)-based linear computations and Garbled Circuit (GC)-based nonlinear computations have demonstrated superior performance to enable privacy-preserved MLaaS. Nevertheless, there is still a significant gap in the computation speed. Our investigation has found that the HE-based linear computation dominates the …

Cited by 29 publications (16 citation statements); references 49 publications (108 reference statements).
“…There are many variants of the convolution algorithm for HE [11], [13], [15], [16], [24], but most of them resort to SISO. [12] devise a convolution algorithm based on tile tensoring rather than SISO, which can be an efficient alternative for specific HE parameter settings and image sizes.…”
Section: B. Convolution on Homomorphic Encryption
confidence: 99%
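The SISO approach referenced above packs an image into ciphertext slots and realizes each kernel tap as one slot rotation plus one plaintext multiply-accumulate. The following is a minimal sketch of that idea, assuming a plain NumPy array as a stand-in for the ciphertext slots and `np.roll` as a stand-in for a homomorphic rotation; the function names (`rotate`, `siso_conv`) are illustrative and not taken from GALA's or Gazelle's code.

```python
import numpy as np

def rotate(vec, k):
    """Cyclic left-rotation of the packed vector; stands in for a
    ciphertext slot rotation in a real HE library."""
    return np.roll(vec, -k)

def siso_conv(image, kernel):
    """Toy SISO-style convolution: one rotation plus one plaintext
    multiply-accumulate per kernel element (valid padding)."""
    H, W = image.shape
    kh, kw = kernel.shape
    packed = image.reshape(-1).astype(float)   # "ciphertext" slots
    acc = np.zeros_like(packed)
    for di in range(kh):
        for dj in range(kw):
            # In real HE, the plaintext weight would also mask out
            # slots that wrapped around during the rotation.
            acc += kernel[di, dj] * rotate(packed, di * W + dj)
    return acc.reshape(H, W)[: H - kh + 1, : W - kw + 1]

if __name__ == "__main__":
    img = np.arange(16, dtype=float).reshape(4, 4)
    ker = np.array([[1.0, 0.0], [0.0, -1.0]])
    # Cross-check against a direct (unpacked) convolution.
    direct = np.array([[(img[i:i + 2, j:j + 2] * ker).sum()
                        for j in range(3)] for i in range(3)])
    assert np.allclose(siso_conv(img, ker), direct)
```

In a real protocol the rotation count, rather than the multiplications, typically dominates the cost, which is what motivates the alternative packings cited above.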
“…Several prior works have tried to mitigate the overheads [11], [13], but HCNN implementations still remain at a proof-of-concept level and target elementary problems such as MNIST and CIFAR-10. Gazelle [14] proposes a convolution algorithm for homomorphic encryption, which has been widely adopted by the following studies [11], [15], [16]. Gazelle and most prior PI CNN implementations have avoided the high overheads of FHE by restricting the CNN models to shallow ones [8], or by mixing in other cryptographic primitives such as multiparty computation (MPC) [15], [17], [18]; however, MPC solutions require extra user intervention and the associated data communication overheads.…”
Section: Introduction
confidence: 99%
“…Since the model holder holds the model parameters in plaintext, executing the linear layer only involves homomorphic operations between plaintext and ciphertext. This type of computation is compatible with mainstream homomorphic optimization methods, including GALA [43] and GAZELLE [19]. However, in VerifyML, the linear layer operation cannot be performed by the model holder because it is considered malicious.…”
Section: Comparison With Other Work
confidence: 99%
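The plaintext-times-ciphertext linear layer mentioned above is commonly realized with diagonal encoding (the Halevi–Shoup style packing used by GAZELLE-like protocols): the weight matrix is split into generalized diagonals, each multiplied slot-wise into a rotated copy of the packed input. Below is a minimal sketch under those assumptions, again using a NumPy array as a stand-in for a packed ciphertext and `np.roll` for a slot rotation; `diagonal_matvec` is an illustrative name, not an API of any HE library.

```python
import numpy as np

def rotate(vec, k):
    """Cyclic left-rotation; stands in for a homomorphic slot rotation."""
    return np.roll(vec, -k)

def diagonal_matvec(weights, packed_x):
    """Toy diagonal-method product: plaintext weight diagonals are
    multiplied slot-wise into rotations of the packed input, so a real
    HE back end only needs ct-pt multiplies, rotations, and additions."""
    n = packed_x.shape[0]
    acc = np.zeros(n)
    for d in range(n):
        diag = np.array([weights[i, (i + d) % n] for i in range(n)])
        acc += diag * rotate(packed_x, d)
    return acc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 8))
    x = rng.standard_normal(8)
    assert np.allclose(diagonal_matvec(W, x), W @ x)
```

Because only the input vector is "encrypted" in this pattern, the weights never leave the model holder in a semi-honest setting; the quoted passage notes that this breaks down once the model holder is assumed malicious.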
“…Note that several efficient parallel homomorphic computation methods [19], [43] with packed ciphertexts have been proposed and run on semi-honest or client-malicious models [5], [26], [29] for secure inference. It may be possible to transfer these techniques to our method to speed up triple generation, but this is certainly non-trivial and we leave it for future work.…”
confidence: 99%
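The "triples" in the quoted passage are standard Beaver multiplication triples: preprocessed shares of a random (a, b, ab) that let two parties multiply secret-shared values online with only cheap openings. The sketch below shows how a triple is consumed; the triple-generation phase that the passage wants to accelerate with packed HE is stubbed out with a trusted dealer, and the modulus and function names are illustrative, not VerifyML's actual protocol.

```python
import random

P = 2**61 - 1  # toy prime modulus for additive secret sharing

def share(v):
    """Split v into two additive shares mod P."""
    r = random.randrange(P)
    return r, (v - r) % P

def dealer_triple():
    """Stand-in for the (expensive) triple-generation phase: shares
    of a random (a, b, a*b). In practice this is where packed HE
    could be used to batch many triples per ciphertext."""
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share(a * b % P)

def beaver_mul(x_sh, y_sh, triple):
    """Multiply secret-shared x and y using one preprocessed triple."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    # Both parties open d = x - a and e = y - b.
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P
    e = (y_sh[0] - b0 + y_sh[1] - b1) % P
    # Each party outputs its share of x*y; the public d*e term is added once.
    z0 = (d * e + d * b0 + e * a0 + c0) % P
    z1 = (d * b1 + e * a1 + c1) % P
    return z0, z1

if __name__ == "__main__":
    x, y = 12345, 67890
    z0, z1 = beaver_mul(share(x), share(y), dealer_triple())
    assert (z0 + z1) % P == (x * y) % P
```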
“…To reduce the cost of homomorphic linear computation, Zhang et al. [170] consider homomorphic linear computation as a sequence of addition, multiplication, and permutation operations and then greedily choose the least expensive operation for every computation step. Froelicher et al. proposed SPINDLE [172], which preserves data and model confidentiality and enables the execution of a cooperative gradient descent and the evaluation of the obtained model even when there are colluding participants.…”
Section: B. Cryptographic Privacy-Preserving Collaborative Learning
confidence: 99%
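To illustrate the greedy idea described in that passage, the following is a conceptual sketch only: it picks, at each computation step, whichever candidate sequence of additions, multiplications, and permutations is cheapest under a cost table. The cost numbers are made up for the example, and the code is not GALA's actual algorithm or cost model.

```python
# Illustrative relative latencies for HE primitives; real values depend
# on the scheme and parameters, so these numbers are only placeholders.
OP_COST = {"add": 1, "mult": 20, "perm": 120}

def plan_cost(plan):
    """Total cost of one candidate plan (a list of primitive ops)."""
    return sum(OP_COST[op] for op in plan)

def greedy_schedule(steps):
    """For each computation step, greedily pick the candidate plan
    that is cheapest under the cost table."""
    chosen, total = [], 0
    for candidates in steps:
        best = min(candidates, key=plan_cost)
        chosen.append(best)
        total += plan_cost(best)
    return chosen, total

if __name__ == "__main__":
    # Hypothetical steps, each with two ways to accumulate partial sums:
    # permute-then-add versus mask-multiply-then-add.
    steps = [
        [["perm", "add"], ["mult", "add", "add"]],
        [["perm", "add"], ["add"]],
    ]
    plans, cost = greedy_schedule(steps)
    print(plans, cost)
```

The point of the greedy view is that permutations (rotations) are far more expensive than slot-wise additions or plaintext multiplications, so reordering or replacing them at each step can shrink the dominant cost of the linear layers.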