Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security 2015
DOI: 10.1145/2810103.2813677
Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures

Cited by 2,037 publications (1,749 citation statements)
References 20 publications
“…Model inversion has also been applied to face recognition models [16]. In this scenario, the model's output is set to 1 for class i and 0 for the rest, and model inversion is used to construct an input that produces these outputs.…”
Section: Related Work (mentioning)
confidence: 99%
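The statement above describes the core reconstruction loop: fix a one-hot target output (1 for class i, 0 for every other class) and search the input space for an image the model maps to that output. A minimal sketch of this idea in PyTorch follows; the image shape, step count, and learning rate are illustrative assumptions, not the cited authors' code.

```python
# Minimal sketch of gradient-based model inversion (illustrative only).
# Assumes a differentiable classifier `model` mapping an image tensor
# to per-class scores.
import torch

def invert_class(model, target_class, shape=(1, 1, 112, 92), steps=500, lr=0.1):
    """Search input space for an image the model assigns to `target_class`,
    i.e. whose confidence vector approaches 1 for class i and 0 elsewhere.
    The default `shape` is an assumed grayscale face size."""
    x = torch.zeros(shape, requires_grad=True)   # start from a blank image
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        conf = torch.softmax(model(x), dim=1)[0, target_class]
        loss = 1.0 - conf      # minimize 1 - f(x)_i, i.e. maximize confidence
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep pixel values in a valid range
    return x.detach()
```

In the white-box setting this is plain gradient descent on the input, and it relies on exactly the fine-grained confidence scores that the paper's countermeasures seek to degrade.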
“…If the images in a class are diverse (e.g., if the class contains multiple individuals or many different objects), the results of model inversion as used in [16] are semantically meaningless and not recognizable as any specific image from the training dataset. To illustrate this, we ran model inversion against a convolutional neural network trained on the CIFAR-10 dataset, which is a standard benchmark for object recognition models.…”
Section: Related Work (mentioning)
confidence: 99%
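The experiment this statement describes can be sketched as below, reusing the `invert_class` helper from the earlier sketch. The architecture is a hypothetical small CNN, not the citing authors' model, and training is elided.

```python
# Hypothetical setup for the CIFAR-10 experiment described above
# (not the citing authors' code); training is elided for brevity.
import torch.nn as nn

# A small CNN of the kind commonly trained on CIFAR-10 (32x32 RGB, 10 classes).
cnn = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 10),
)
# ... train `cnn` on CIFAR-10 here ...

# Inverting a visually diverse class (e.g., class 5, "dog": many breeds,
# poses, and backgrounds) tends to yield an average-like image that is
# recognizable as no specific training example.
recovered = invert_class(cnn, target_class=5, shape=(1, 3, 32, 32))
```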
“…The shared parameters were averaged and passed back to the entities for the next iteration. Model inversion attacks were demonstrated in [11]. The authors showed that shared models leak information and are vulnerable even against a "black-box" adversary (that interacts with the model only via inputs and outputs).…”
Section: B. Prior Approaches For Privacy-Aware Distributed Learning (mentioning)
confidence: 99%
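This statement highlights the black-box threat model: the adversary sees only confidence outputs for inputs of its choosing. A hedged sketch of how such an adversary can still run model inversion, substituting finite-difference gradient estimates for true gradients, is below; `query` is a hypothetical prediction API returning a confidence vector, not part of any real library.

```python
# Sketch of a "black-box" inversion variant (illustrative assumption):
# the adversary only queries confidences and estimates gradients by
# central finite differences.
import numpy as np

def black_box_invert(query, target_class, dim, steps=200, eps=1e-3, lr=0.5):
    """Reconstruct an input for `target_class` using only query access.
    `query(x)` is assumed to return the model's confidence vector for x."""
    x = np.zeros(dim)
    for _ in range(steps):
        grad = np.zeros(dim)
        for j in range(dim):              # two queries per input coordinate
            e = np.zeros(dim)
            e[j] = eps
            grad[j] = (query(x + e)[target_class] -
                       query(x - e)[target_class]) / (2 * eps)
        x = np.clip(x + lr * grad, 0.0, 1.0)  # ascend on target confidence
    return x
```

Each step costs 2·dim queries, which is why access to fine-grained confidence values, rather than bare labels, is the leakage this style of attack exploits.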
“…Here, the output function is a list of (anonymized) inputs, whereas we consider arithmetic computations leading to relatively small outputs. In model inversion [17], machine-learning algorithms are used to deduce sensitive personal information from various outputs. This is different from finding personal inputs to a specific output function.…”
Section: Related Work (mentioning)
confidence: 99%