2022
DOI: 10.48550/arxiv.2203.02833
Preprint
Tabula: Efficiently Computing Nonlinear Activation Functions for Secure Neural Network Inference

Abstract: Multiparty computation approaches to secure neural network inference traditionally rely on garbled circuits for securely executing nonlinear activation functions. However, garbled circuits require excessive communication between server and client, impose significant storage overheads, and incur large runtime penalties. To eliminate these costs, we propose an alternative to garbled circuits: Tabula, an algorithm based on secure lookup tables. Tabula leverages neural networks' ability to be quantized and employs…
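The abstract's core idea — that quantization shrinks an activation's input domain enough to precompute its outputs in a table — can be illustrated with a plaintext sketch. This is not Tabula's secure protocol (which operates on secret-shared tables between client and server); the bit width, fixed-point scale, and function names below are illustrative assumptions only.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper):
BITS = 8      # quantized inputs fit in 8 bits -> only 256 table entries
SCALE = 16.0  # fixed-point scale: real value = integer / SCALE

def build_table(fn, bits=BITS, scale=SCALE):
    """Precompute fn over every possible quantized input value."""
    qs = np.arange(-(1 << (bits - 1)), 1 << (bits - 1))
    return {int(q): fn(q / scale) for q in qs}

# Example: a ReLU table; any nonlinear activation works the same way.
relu_table = build_table(lambda x: max(x, 0.0))

def lookup_activation(x, table, bits=BITS, scale=SCALE):
    """Quantize x, then replace the nonlinear computation by one lookup."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    q = int(np.clip(round(x * scale), lo, hi))  # quantize and saturate
    return table[q]                             # single table lookup

print(lookup_activation(0.5, relu_table))   # 0.5
print(lookup_activation(-1.2, relu_table))  # 0.0
```

In the secure setting the table entries would be secret-shared rather than held in plaintext, so a lookup replaces the interactive garbled-circuit evaluation of the nonlinearity; the storage cost is the table size, which quantization keeps small.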

Cited by 2 publications (3 citation statements)
References 13 publications
“…In general, activation functions that are non-linear can be very effectively implemented in quantization runtime (Lam et al, 2022). However, these considerations are hardware agnostic.…”
Section: K Overhead Cost Discussion
confidence: 99%
“…The most common methods of doing secure ML are with multi-party computation (MPC), homomorphic encryption (HE), or interactive proofs (IPs). As we describe, these methods are either impractical, do not work in the face of malicious adversaries (Knott et al, 2021; Kumar et al, 2020; Lam et al, 2022; Mishra et al, 2020), or do not hide the weights/inputs (Ghodsi et al, 2017b). In this work, we propose practical methods of doing verified ML execution in the face of malicious adversaries.…”
Section: Related Work
confidence: 99%
“…MPC. One of the most common methods of doing secure ML is with MPC, in which the computation is shared across multiple parties (Knott et al, 2021; Kumar et al, 2020; Lam et al, 2022; Mishra et al, 2020; Jha et al, 2021). There are a variety of MPC protocols with different guarantees.…”
Section: Related Work
confidence: 99%