Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/3243734.3243757

Model-Reuse Attacks on Deep Learning Systems

Abstract: Many of today's machine learning (ML) systems are built by reusing an array of, often pre-trained, primitive models, each fulfilling distinct functionality (e.g., feature extraction). The increasing use of primitive models significantly simplifies and expedites the development cycles of ML systems. Yet, because most of such models are contributed and maintained by untrusted sources, their lack of standardization or regulation entails profound security implications, about which little is known thus far. In this …
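To make the reuse pattern described in the abstract concrete, below is a minimal sketch (assuming PyTorch and torchvision; the backbone, feature dimension, and hyper-parameters are illustrative choices, not taken from the paper) of how a developer composes a system from a pre-trained primitive model whose weights are trusted implicitly:

```python
# Illustrative sketch of the "model reuse" development pattern: a pre-trained
# primitive model (here a feature extractor) obtained from a third party is
# frozen and reused as-is; only a small task-specific head is trained.
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained primitive model from an external source.
feature_extractor = models.resnet18(weights="IMAGENET1K_V1")
feature_extractor.fc = nn.Identity()          # expose the 512-d features
for p in feature_extractor.parameters():
    p.requires_grad = False                   # frozen, reused without inspection

# Task-specific classifier trained by the developer on their own data.
classifier = nn.Linear(512, 10)
model = nn.Sequential(feature_extractor, classifier)

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    """One training step; only the classifier head is updated."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```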

Cited by 147 publications (121 citation statements); references 40 publications; citing publications span 2019–2024.
“…Ji et al. [44] proposed a classification of security threats to Deep Learning along three different angles: influence on classifiers, security breaches, and privacy attacks.…”
Section: Security Attack Taxonomy
confidence: 99%
“…The systems that host Deep Learning applications can be hijacked through remote compromise and software bugs in the operating system or applications [44,82]. This mostly happens when the system is connected to a cloud system and the Deep Learning applications also run on that cloud-based system.…”
Section: Deep Learning Threat Type-III
confidence: 99%
“…Conversely, if the attack is targeted to have only a few samples misclassified at test time, it is referred to as a poisoning integrity attack [7,8,11,15,17,45,46,49,50,69]. We also include backdoor attacks in this category, as they likewise aim to cause specific misclassifications at test time by compromising the training process of the learning algorithm [22,43,47,57]. Their underlying idea is to compromise a model during the training phase or, more generally, at design time (this may also include, e.g., modifications to the architecture of a deep network through the addition of specific layers or neurons), with the goal of forcing the model to classify backdoored samples at test time as desired by the attacker.…”
Section: Model Extraction/Stealing, Model Inversion, Membership Inference
confidence: 99%
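To illustrate the backdoor idea summarized in this excerpt, here is a minimal data-poisoning sketch (an assumed, generic setup in Python/NumPy; the trigger shape, poison rate, and function name are hypothetical and not the procedure of any specific cited paper):

```python
# A small fraction of training images is stamped with a fixed trigger patch
# and relabeled to an attacker-chosen target class, so a model trained on the
# poisoned data misclassifies trigger-stamped inputs at test time while
# behaving normally on clean inputs.
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Return a copy of (images, labels) with a trigger-stamped, relabeled subset.

    images: float array of shape (N, H, W, C) with values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Fixed 3x3 white patch in the bottom-right corner acts as the trigger.
    images[idx, -3:, -3:, :] = 1.0
    labels[idx] = target_class
    return images, labels
```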
“…They show that their attack performance degrades if several layers of the student models are fine-tuned. Ji et al. [23] maliciously train pre-trained models in order to mount model-reuse attacks against ML systems without knowing the developer's dataset or fine-tuning strategy. Hashemi et al. [17] query the target model with images drawn from a distribution similar to that of the target model's training images, augment the dataset with random noise, and use this augmented dataset to train a substitute model.…”
Section: Related Work
confidence: 99%
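As a rough illustration of the substitute-model idea in this last excerpt, the sketch below (an assumed setup using NumPy and scikit-learn; the noise level, model choice, and function names are hypothetical, not Hashemi et al.'s exact procedure) queries a black-box target for labels on noise-augmented surrogate images and fits a local substitute:

```python
# Build a local substitute for a black-box target model by labeling a
# noise-augmented surrogate dataset with the target's own predictions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def build_substitute(query_target, surrogate_images, noise_std=0.05, seed=0):
    """query_target: callable mapping a batch of flattened images to labels."""
    rng = np.random.default_rng(seed)

    # Noise-augmented copies of the surrogate data enlarge the query set.
    noisy = surrogate_images + rng.normal(0.0, noise_std, surrogate_images.shape)
    queries = np.vstack([surrogate_images, np.clip(noisy, 0.0, 1.0)])

    # Labels come from the (black-box) target model.
    labels = query_target(queries)

    substitute = MLPClassifier(hidden_layer_sizes=(128,), max_iter=200)
    substitute.fit(queries, labels)
    return substitute
```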