2021 IEEE European Symposium on Security and Privacy (EuroS&P)
DOI: 10.1109/eurosp51992.2021.00025
On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models

Cited by 17 publications (7 citation statements); references 15 publications.
“…To generate high-utility data whose distribution is similar to the original data distribution, these DP methods guarantee DP with large values of ε. As described in [7], large values of ε can expose sensitive information under privacy attacks targeting the data. Conversely, while generating data with strong privacy guarantees, these DP methods reduce the utility of the data.…”
Section: Related Workmentioning
confidence: 99%
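The trade-off this snippet describes can be illustrated with the standard Laplace mechanism: the noise scale is sensitivity/ε, so a large ε adds little noise (high utility, weak privacy) while a small ε adds heavy noise (strong privacy, low utility). This is a minimal sketch under those standard definitions, not the method of any cited paper; the function name and parameter values are illustrative.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value perturbed with Laplace noise of scale sensitivity/epsilon."""
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(0)

# Large epsilon: noise scale 1/10 -> answers stay close to the truth (weak privacy).
noisy_large_eps = [laplace_mechanism(100.0, 1.0, 10.0, rng) for _ in range(1000)]

# Small epsilon: noise scale 1/0.1 = 10 -> answers spread widely (strong privacy).
noisy_small_eps = [laplace_mechanism(100.0, 1.0, 0.1, rng) for _ in range(1000)]
```

The spread of the released values grows as ε shrinks, which is exactly the utility loss the quoted passage refers to.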
“…Furthermore, another concern is that these anonymization techniques should protect not only data whose sensitive information is identified directly from the dataset, but also data whose sensitive information is identifiable from the correlation of multiple datasets. However, many studies have shown that ML models are vulnerable to a growing array of privacy attacks that aim to expose information about individuals [6, 7].…”
Section: Introductionmentioning
confidence: 99%
“…Nevertheless, we also specifically examine the impact of updating the NER model with different phrases (variants of the surrounding sentence) containing the new named entity in Section 4.3. Note that the unknown-surrounding-sentence variant of the attack makes it more of an attribute inference attack [63, 66] than a membership inference attack.…”
Section: Threat Modelmentioning
confidence: 99%
“…We also demonstrate that it is possible to infer words close to the secret words, which helps an attacker find the target password even when it is not in her initial dictionary of possible passwords. This is known as attribute inference in the literature [63, 66]. Next we demonstrate our attack on a practical use case of document redaction, whereby the NER model is updated on a publicly available medical dataset, which we assume to be a…” [Footnote 1: See Section 3 for the precise definition of the input and output.]
Section: Introductionmentioning
confidence: 99%
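The attribute inference setting referenced above (recovering a hidden attribute, such as a secret word, from model outputs) can be sketched in the usual enumeration style: the adversary tries each candidate value of the unknown attribute and keeps the one on which the model is most confident. The model, feature layout, and confidence function below are hypothetical toys, not the attack from the cited works.

```python
def attribute_inference(model_confidence, known_features, true_label, candidates):
    """Enumerate candidate values of the hidden attribute and return the one
    that maximizes the model's confidence in the known (true) label."""
    best, best_score = None, float("-inf")
    for candidate in candidates:
        score = model_confidence(known_features + [candidate], true_label)
        if score > best_score:
            best, best_score = candidate, score
    return best

# Toy 'model': artificially more confident when the hidden attribute equals 1,
# mimicking a model that has memorized the sensitive value.
def toy_confidence(features, label):
    return 0.9 if features[-1] == 1 else 0.4

guess = attribute_inference(toy_confidence, [0.2, 0.7], 1, [0, 1, 2])
```

The attack succeeds exactly when the model's confidence leaks enough signal to separate the true attribute value from the other candidates.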
“…MIAs for ML: An MIA was introduced in the ML setting by Shokri et al. [63] (their attack is known as the NN-based attack), and its formal definition was given by Yeom et al. [83]. The relationships between MIAs and other notions, such as DP [15, 16] and attribute inference attacks [19, 20], were shown by Yeom et al. [83] and Zhao et al. [86].…”
Section: Related Workmentioning
confidence: 99%
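The formal MIA definition referenced here admits a simple loss-based attack, which can be sketched as follows: predict "member" when the model's loss on an example falls below a threshold such as the average training loss, exploiting the fact that models typically fit their training data more tightly. The losses and threshold below are illustrative values, not results from the cited papers.

```python
def loss_threshold_mia(loss, threshold):
    """Loss-threshold membership inference: predict 'member' (True) when the
    model's loss on the example is below the chosen threshold."""
    return loss < threshold

# Toy per-example losses: members (seen in training) tend to have low loss.
member_losses = [0.05, 0.10, 0.08]
nonmember_losses = [0.90, 1.20, 0.70]

threshold = 0.5  # illustrative; often set to the average training loss
preds_members = [loss_threshold_mia(l, threshold) for l in member_losses]
preds_nonmembers = [loss_threshold_mia(l, threshold) for l in nonmember_losses]
```

On this toy data the attack separates members from non-members perfectly; in practice its accuracy depends on how much the model overfits, which is what ties MIAs to DP and to attribute inference in the relationships cited above.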