2021
DOI: 10.48550/arxiv.2101.02069
Preprint
Model Extraction and Defenses on Generative Adversarial Networks

Hailong Hu,
Jun Pang

Abstract: Model extraction attacks aim to duplicate a machine learning model through query access to a target model. Early studies mainly focus on discriminative models; model extraction attacks against generative models remain less well explored. In this paper, we systematically study the feasibility of model extraction attacks against generative adversarial networks (GANs). Specifically, we first define accuracy and fidelity for model extraction attacks against GANs. Then we study model extraction att…

Cited by 1 publication (1 citation statement)
References 45 publications
“…In order to estimate MC scores, we first train a substitute model by querying the target model. The training process of the substitute model can be regarded as a model extraction attack against the GAN [7], which aims to duplicate the target GAN model including its functionality and implicit data distribution. In this way, data used for training the substitute model is considered as members of the model, and data sampled from the target model but not used for training the substitute model is regarded as nonmembers.…”
Section: Over-representation Based Attack
confidence: 99%
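The quoted pipeline (query the target GAN, then train a substitute on its outputs) can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration: the "target generator" is stood in for by a fixed linear map of Gaussian noise, and "training the substitute" reduces to estimating the output covariance from queried samples, rather than actually training a GAN by gradient descent as the cited attack does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a black-box target generator G_t:
# a fixed linear map from a 2-d latent space to a 5-d "data" space.
W_target = rng.normal(size=(2, 5))

def query_target(n):
    """Black-box query access: draw n generated samples from the target."""
    z = rng.normal(size=(n, 2))
    return z @ W_target

# Step 1: query the target model for generated samples.
samples = query_target(10_000)

# Step 2: "train" the substitute on the queried samples. In this linear
# toy, duplicating the target's implicit data distribution amounts to
# recovering its output covariance W^T W from the samples.
cov_est = samples.T @ samples / len(samples)
cov_true = W_target.T @ W_target

# Fidelity check: the substitute's estimated covariance should be close
# to the target's, i.e. the substitute captures the target distribution.
print(np.abs(cov_est - cov_true).max() < 0.5)
```

In the cited attack the substitute is itself a GAN trained on the queried samples; the samples used for that training are then treated as "members" and fresh target samples as "nonmembers" when computing MC scores.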