Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis 2021
DOI: 10.1145/3460319.3464816

ModelDiff: testing-based DNN similarity comparison for model reuse detection

Abstract: The knowledge of a deep learning model may be transferred to a student model, leading to intellectual property infringement or vulnerability propagation. Detecting such knowledge reuse is nontrivial because the suspect models may not be white-box accessible and/or may serve different tasks. In this paper, we propose ModelDiff, a testing-based approach to deep learning model similarity comparison. Instead of directly comparing the weights, activations, or outputs of two models, we compare their behavioral patterns […]
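The abstract's central idea, comparing two models by their behavioral patterns on shared test inputs rather than by raw weights or outputs, can be illustrated with a minimal sketch. The helper names (decision_distance_vector, behavioral_similarity), the use of Euclidean pairwise distances, and the cosine-similarity comparison below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def decision_distance_vector(model_outputs):
    """Behavioral fingerprint: pairwise distances between a model's
    outputs on a fixed set of probe inputs (illustrative only)."""
    n = len(model_outputs)
    ddv = []
    for i in range(n):
        for j in range(i + 1, n):
            ddv.append(np.linalg.norm(model_outputs[i] - model_outputs[j]))
    return np.asarray(ddv)

def behavioral_similarity(outputs_a, outputs_b):
    """Cosine similarity between the two models' distance vectors.
    A high score suggests one model may be derived from the other."""
    va = decision_distance_vector(outputs_a)
    vb = decision_distance_vector(outputs_b)
    return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))

# Usage sketch: query both (possibly black-box) models on the same probe set,
#   outputs_a = [model_a(x) for x in probe_inputs]
#   outputs_b = [model_b(x) for x in probe_inputs]
#   score = behavioral_similarity(outputs_a, outputs_b)
```

Comparing distance vectors rather than raw outputs is what makes this style of comparison usable even when the two models have different output spaces.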


Cited by 21 publications (7 citation statements) | References 72 publications
“…Our attack can certainly incorporate other inference targets as well (see Section 9 for some discussion). Note that model type/hyperparameter stealing is well recognized by the scientific community [9,29,30,42].…”
Section: Threat Model
confidence: 99%
“…1 Our code is available at https://github.com/boz083/Plot_Steal. Recent studies demonstrate that ML models are vulnerable to information stealing attacks, such as model type [9,29] and hyperparameters [30,42]. These attacks first leverage a dataset to query a target ML model and obtain the responses.…”
Section: Introduction
confidence: 99%
“…To make the fingerprinting robust to model distillation, Lukas et al [30] further propose conferrable adversarial examples as model fingerprints. To detect plagiarized models whose output space differs from the source model's, Li et al [27] propose a model fingerprinting scheme that compares behavioral patterns on a set of normal and special adversarial inputs. As described above, optimizing such trigger sets is both time-consuming and model-specific.…”
Section: Model Fingerprinting
confidence: 99%
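The fingerprinting excerpt above probes models with both normal and specially crafted adversarial inputs. As a rough illustration of how an adversarial probe input might be generated, the sketch below takes a single FGSM step in PyTorch; FGSM, the function name fgsm_probe, and the eps value are assumptions for illustration only — the cited works optimize dedicated (e.g. conferrable) adversarial examples rather than using plain FGSM.

```python
import torch
import torch.nn.functional as F

def fgsm_probe(model, x, eps=0.03):
    """Turn a normal probe input into an adversarial one with one FGSM step.
    x: input batch of shape [N, C, H, W] with values in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    # Increase the loss w.r.t. the model's current top prediction,
    # pushing the input away from its present decision.
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# A probe set mixing normal inputs and fgsm_probe(model, x) outputs can then
# feed the same behavioral comparison sketched earlier.
```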
“…There have also been many evaluations, surveys, and reviews of existing state-of-the-art models, such as [2], [3], [4], [5], to mention just a few recent works. With so many existing solutions, there are even methods to compare models against each other based on their behavioral responses [6]. The majority of these solutions follow the supervised approach since, due to its deterministic nature, it is easier to evaluate.…”
Section: Introduction
confidence: 99%