Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies (2020)
DOI: 10.18653/v1/2020.iwpt-1.7
Obfuscation for Privacy-preserving Syntactic Parsing

Abstract: The goal of homomorphic encryption is to encrypt data such that another party can operate on it without being explicitly exposed to the content of the original data. We introduce an idea for a privacy-preserving transformation on natural language data, inspired by homomorphic encryption. Our primary tool is obfuscation, relying on the properties of natural language. Specifically, a given English text is obfuscated using a neural model that aims to preserve the syntactic relationships of the original sentence s…
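
The word-level obfuscation idea described in the abstract can be illustrated with a minimal sketch: each word is replaced by a different word of the same part-of-speech tag, so a downstream parser sees the same syntactic skeleton while the original content words are hidden. This is not the authors' neural model; the `POS_VOCAB` table and the random substitution policy below are illustrative assumptions.

```python
import random

# Hypothetical POS-keyed substitution vocabulary; a real system would
# derive candidates from a tagged corpus or a learned obfuscation model.
POS_VOCAB = {
    "NOUN": ["report", "garden", "engine", "letter"],
    "VERB": ["visited", "painted", "carried", "opened"],
    "ADJ":  ["quiet", "bright", "narrow", "formal"],
}

def obfuscate(tagged_sentence):
    """Replace each content word with a random word of the same POS,
    leaving function words (and unknown tags) untouched so that the
    syntactic structure of the sentence is preserved."""
    out = []
    for word, pos in tagged_sentence:
        candidates = [w for w in POS_VOCAB.get(pos, []) if w != word]
        out.append(random.choice(candidates) if candidates else word)
    return out

# Example: "Alice visited Paris" -> e.g. "report painted garden";
# the NOUN-VERB-NOUN structure survives, the sensitive words do not.
print(obfuscate([("Alice", "NOUN"), ("visited", "VERB"), ("Paris", "NOUN")]))
```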

Cited by 6 publications (3 citation statements) | References 19 publications (20 reference statements)
“…This finding is quite close in spirit to our conclusion in Section 4.5. Hu et al. (2020) also put forth efforts to modify the text in syntactic tasks while preserving the original syntactic structure. However, their goal is to preserve privacy via the modification of words that could disclose sensitive information.…”
Section: Related Work
confidence: 99%
“…To deal with explicit privacy leakage in NLP, Zhang et al. (2018b) added DP noise to TF-IDF (Salton and McGill, 1986) textual vectors, and Hu et al. (2020) obfuscated the text by substituting each word with a new word of similar syntactic role. However, both approaches suffer a large utility loss when trying to ensure practical privacy.…”
Section: Privacy in NLP
confidence: 99%
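
A minimal sketch of the first approach mentioned in the statement above: adding Laplace noise, the standard mechanism for ε-differential privacy, to a TF-IDF vector. The `epsilon` and unit-sensitivity values are illustrative assumptions, not parameters taken from Zhang et al. (2018b).

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def dp_tfidf(docs, epsilon=1.0, sensitivity=1.0):
    """Vectorize documents with TF-IDF, then add Laplace noise with
    scale sensitivity/epsilon to every coordinate (the classic
    Laplace mechanism for epsilon-differential privacy)."""
    X = TfidfVectorizer().fit_transform(docs).toarray()
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon, size=X.shape)
    return X + noise

# Smaller epsilon means stronger privacy but noisier (less useful) vectors,
# which is the utility loss the citing paper points out.
noisy = dp_tfidf(["the cat sat on the mat", "dogs chase cats"], epsilon=0.5)
print(noisy.shape)
```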
“…Adversarial learning (Hu et al., 2020) has been used to address implicit leakage by learning representations that are invariant to privacy-sensitive attributes. Similarly, Mosallanezhad et al. (2019) used reinforcement learning to automatically learn a strategy to reduce private-attribute leakage by playing against an attribute-inference attacker.…”
Section: Privacy in NLP
confidence: 99%
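
Adversarial invariance of the kind described above is commonly implemented with a gradient-reversal layer (Ganin and Lempitsky, 2015): an attribute classifier is trained on the shared representation while reversed gradients push the encoder to strip the private attribute. The PyTorch sketch below is a generic illustration under that assumption, not the specific architecture of any of the cited papers; the layer sizes and dummy batch are placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in backward,
    so the encoder is trained to *hurt* the private-attribute classifier."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())  # shared text encoder
task_head = nn.Linear(128, 2)                            # main-task classifier
attr_head = nn.Linear(128, 2)                            # private-attribute attacker

x = torch.randn(8, 300)                # a batch of (assumed) text embeddings
h = encoder(x)
task_loss = nn.functional.cross_entropy(task_head(h), torch.randint(2, (8,)))
attr_loss = nn.functional.cross_entropy(attr_head(GradReverse.apply(h)), torch.randint(2, (8,)))
(task_loss + attr_loss).backward()     # encoder receives reversed attribute gradients
```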