Findings of the Association for Computational Linguistics: ACL 2022
DOI: 10.18653/v1/2022.findings-acl.277

THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption

Abstract: The privacy concerns associated with the use of Large Language Models (LLMs) have grown recently with the development of LLMs such as ChatGPT. Differential Privacy (DP) techniques are explored in existing work to mitigate their privacy risks at the cost of generalization degradation. Our paper reveals that the flatness of DP-trained models' loss landscape plays an essential role in the trade-off between their privacy and generalization. We further propose a holistic framework to enforce appropriate weight flat…

Cited by 18 publications (11 citation statements)
References 41 publications (49 reference statements)
“…Two recent studies explore replacing these fundamentally expensive non-linear functions with operators that are more friendly to private inference. For instance, Chen et al. [5] substitute ReLU for all non-linearities in a transformer and rely on HE for the linear operations. However, their architecture requires the ReLU functions to be executed in plaintext by the client, which may reveal the proprietary model owned by the server.…”
Section: Related Work
confidence: 99%
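The split described in the statement above (HE-friendly linear layers on the server, ReLU evaluated in plaintext by the client) can be illustrated structurally. The sketch below is not real homomorphic encryption: toy_encrypt and toy_decrypt are hypothetical stand-ins using a simple additive mask, chosen only to show where the linear and non-linear work happens in such a design.

```python
# Structural sketch (NOT real HE, and not a secure construction):
# the server applies a linear layer to "encrypted" activations, while the
# client removes the mask and evaluates ReLU in plaintext.
import numpy as np

def toy_encrypt(x, key):
    return x + key          # additive mask as a stand-in for HE encryption

def toy_decrypt(c, key):
    return c - key          # removing the mask stands in for decryption

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))     # server-side linear layer (kept on the server)
x = rng.normal(size=4)          # client input
key = rng.normal(size=4)        # client-side mask

c = toy_encrypt(x, key)         # client sends masked input to the server
c_lin = W @ c                   # server evaluates the linear layer: W @ x + W @ key

# Client unmasks the result and applies the non-linearity in plaintext.
# (Here the "decryption key" for the output is W @ key, which already shows
# why a toy additive mask is only illustrative of the data flow.)
h = np.maximum(toy_decrypt(c_lin, W @ key), 0.0)
print(h)                        # equals ReLU(W @ x)
```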
“…• Models per sensitivity: the different types of sensitive data could each be stored in a specific model, and a common frontend could manage access control to each model. • Data privacy mechanisms [352][353][354][355][356][357][358][359][360]: QA can ensure protection of private data by implementing privacy-preserving techniques such as homomorphic encryption, local differential privacy, or secure multiparty computation. These techniques can be applied to the input text, token embeddings, and sequence representations in order to protect the data from malicious actors.…”
Section: Data Multi-sensitivity Usage and Protection
confidence: 99%
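As a concrete illustration of one of the techniques listed above, the sketch below perturbs token embeddings with local differential privacy before they leave the client. The function name, epsilon, and clipping bound are illustrative assumptions, not values or APIs from the cited works.

```python
# Minimal local-DP sketch: clip each token embedding to a bounded L1 norm,
# then add Laplace noise calibrated to that bound before sending it out.
import numpy as np

def ldp_perturb_embeddings(emb, epsilon=1.0, clip=1.0, rng=None):
    """Clip each embedding to L1 norm <= clip, then add Laplace noise with
    scale 2*clip/epsilon (swapping one clipped embedding for another changes
    the L1 distance by at most 2*clip)."""
    if rng is None:
        rng = np.random.default_rng()
    norms = np.maximum(np.linalg.norm(emb, ord=1, axis=-1, keepdims=True), 1e-12)
    clipped = emb * np.minimum(1.0, clip / norms)
    scale = 2.0 * clip / epsilon
    return clipped + rng.laplace(scale=scale, size=clipped.shape)

tokens = np.random.default_rng(0).normal(size=(8, 16))   # 8 tokens, dim 16
private_tokens = ldp_perturb_embeddings(tokens, epsilon=2.0)
```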
“…To alleviate the privacy problem, recent works (Hao et al. 2022; Chen et al. 2022) have developed two-party secure inference services for PLMs using secure Multi-Party Computation (MPC). MPC protects the privacy of user data and model weights by secret-sharing them between the parties.…”
Section: Introduction
confidence: 99%
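The secret-sharing mentioned in the statement above can be sketched with additive shares. This is a minimal illustration of the data flow only: it treats the linear layer as public, whereas the full two-party protocols in the cited works also hide the model weights (e.g., via Beaver-triple-based secure multiplication), which is beyond this sketch.

```python
# Additive secret sharing: each party holds one share of the input, applies a
# public linear map to its own share, and the output shares recombine to the
# true result. Neither party ever sees the other party's plaintext share.
import numpy as np

rng = np.random.default_rng(0)

def share(value):
    """Split a value into two additive shares that sum to the original."""
    r = rng.normal(size=np.shape(value))
    return r, value - r

x = rng.normal(size=4)          # client input, never revealed in the clear
W = rng.normal(size=(3, 4))     # linear layer, treated as public in this sketch

x_client, x_server = share(x)   # each party keeps one share of x

# Linearity lets each party apply W to its own share independently.
y_client, y_server = W @ x_client, W @ x_server

# Recombining the output shares recovers W @ x exactly.
assert np.allclose(y_client + y_server, W @ x)
```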
“…Though designed for Transformers, existing works (Hao et al. 2022; Chen et al. 2022; Li et al. 2022) solely explore natural language understanding (NLU) scenarios (e.g., on the GLUE (Wang et al. 2019) benchmark). Unfortunately, we observe that they yield no significant improvements on natural language generation (NLG) tasks (cf. Fig.…”
Section: Introduction
confidence: 99%