Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1001

Privacy-preserving Neural Representations of Text

Abstract: This article deals with adversarial attacks on deep learning systems for Natural Language Processing (NLP), in the context of privacy protection. We study a specific type of attack: an attacker eavesdrops on the hidden representations of a neural text classifier and tries to recover information about the input text. Such a scenario may arise when the computation of a neural network is shared across multiple devices, e.g., some hidden representation is computed by a user's device and sent to a c…
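To make the threat model concrete, here is a minimal sketch (not the paper's actual architecture; all names and shapes are hypothetical, and PyTorch is assumed) of a split computation in which only the hidden representation leaves the user's device, and an eavesdropper trains a probe on it to recover a private attribute:

```python
# Sketch of the eavesdropping threat model: an encoder runs on the user's
# device, its hidden representation z is sent off-device, and an attacker
# who observes z trains a probe to predict private attributes of the text.
import torch
import torch.nn as nn

class UserSideEncoder(nn.Module):
    """Runs on the user's device; only its output z leaves the device."""
    def __init__(self, vocab_size=10_000, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # mean-pooled bag of words
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids):
        return torch.tanh(self.proj(self.embed(token_ids)))

class AttackerProbe(nn.Module):
    """Eavesdropper's model: predicts a private attribute from z alone."""
    def __init__(self, dim=128, n_private_classes=2):
        super().__init__()
        self.head = nn.Linear(dim, n_private_classes)

    def forward(self, z):
        return self.head(z)

encoder = UserSideEncoder()
probe = AttackerProbe()
tokens = torch.randint(0, 10_000, (4, 20))  # a batch of 4 toy "texts"
z = encoder(tokens)                         # representation sent off-device
logits = probe(z.detach())                  # attacker only ever sees z
```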

Cited by 92 publications (140 citation statements) · References 16 publications (11 reference statements)
“…The notion of similarity is controlled by a parameter ε ≥ 0 that defines the strength of the privacy guarantee (with ε = 0 representing absolute privacy, and ε = ∞ representing null privacy). Even though DP has been applied to domains such as geolocation [4], social networks [40] and deep learning [1,50], less attention has been paid to adapting variants of DP to the context of Natural Language Processing (NLP) and the text domain [15,59].…”
Section: Introduction
confidence: 99%
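As a concrete illustration of the ε parameter discussed in the quoted passage, here is a minimal sketch of the classic Laplace mechanism for a numeric query (a standard DP building block, not one of the text-specific variants the citing work surveys); smaller ε means more noise and a stronger guarantee:

```python
# Laplace mechanism: release a query answer with epsilon-differential privacy.
# Smaller epsilon -> larger noise scale -> stronger privacy; epsilon = infinity
# would add no noise at all (null privacy), matching the quote above.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return `true_value` perturbed with Laplace noise calibrated to epsilon."""
    if epsilon <= 0:
        raise ValueError("epsilon must be positive (epsilon = 0 needs infinite noise)")
    scale = sensitivity / epsilon  # noise scale grows as epsilon shrinks
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# A counting query has sensitivity 1: one record changes the count by at most 1.
for eps in (0.1, 1.0, 10.0):
    print(eps, laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=eps))
```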
“…Our encoder-decoder architecture, together with the independence assumptions made in the probabilistic model that decomposes a derivation score into several subtasks, can be seen as auxiliary tasks as in (Coavoux et al., 2018).…”
Section: Discussion and Related Work
confidence: 99%
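A minimal sketch of the auxiliary-task idea referenced in this quote, assuming PyTorch and hypothetical shapes: a shared encoder feeds several task-specific heads, and the training objective decomposes into a sum of per-subtask losses:

```python
# Multi-task model: one shared encoder, several task heads, summed losses.
# Shapes and task counts are illustrative, not the cited paper's architecture.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, in_dim=64, hidden=128, n_classes_per_task=(5, 3, 2)):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, n) for n in n_classes_per_task)

    def forward(self, x):
        h = self.encoder(x)                     # shared representation
        return [head(h) for head in self.heads] # one set of logits per subtask

model = MultiTaskModel()
x = torch.randn(8, 64)
targets = [torch.randint(0, n, (8,)) for n in (5, 3, 2)]
loss = sum(nn.functional.cross_entropy(logits, t)
           for logits, t in zip(model(x), targets))  # score decomposes over subtasks
loss.backward()
```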
“…For comparison, consider the distance between $b_1$ and $b_2$ to a third document, $b_3 := \{\mathrm{Chef}^1, \mathrm{breaks}^1, \mathrm{cooking}^1, \mathrm{record}^1\}$. Using the same word embedding metric,[10] we find that…”
Section: Word Embeddings
confidence: 97%
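For illustration, here is a minimal sketch of comparing bag-of-words documents under a word-embedding metric. The toy random vectors and the averaged-embedding Euclidean distance are stand-ins (the quoted paper's actual word2vec-based metric is the one described in its §6), and the contents of b1 are hypothetical:

```python
# Toy sketch: distance between bag-of-words documents via word embeddings.
# Random vectors stand in for word2vec; averaging plus Euclidean distance is
# a simple stand-in for the quoted paper's metric (described in its section 6).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["Chef", "breaks", "cooking", "record", "cooks", "pasta"]
emb = {w: rng.normal(size=50) for w in vocab}  # toy 50-dimensional vectors

def doc_vector(bag):
    """Average the embeddings of the words in a bag-of-words document."""
    return np.mean([emb[w] for w in bag], axis=0)

b1 = ["Chef", "cooks", "pasta"]               # hypothetical document
b3 = ["Chef", "breaks", "cooking", "record"]  # b3 from the quoted passage
print(f"d(b1, b3) = {np.linalg.norm(doc_vector(b1) - doc_vector(b3)):.3f}")
```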
“…All of our results apply equally well had we left stopwords in place. [10] We use the same word2vec-based metric as per our experiments; this is described in §6.…”
Section: Word Embeddings
confidence: 99%