2022
DOI: 10.1007/978-3-031-20065-6_12

Black-Box Dissector: Towards Erasing-Based Hard-Label Model Stealing Attack

Cited by 15 publications (8 citation statements)
References 16 publications
“…Once the attacker obtains knowledge of the training algorithm, it can fully control the compromised client and initiate a model poisoning backdoor attack during the local model training phase. The most commonly adopted strategy is the global model replacement method [184]. The attack model becomes a white box.…”
Section: Discussion
confidence: 99%
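As a rough illustration of the global model replacement strategy mentioned above, the sketch below follows the common formulation from the federated-backdoor literature: under FedAvg, a compromised client scales its submitted model so that server aggregation yields its backdoored model. The function names, the scalar toy weights, and the assumption that benign updates roughly cancel are illustrative choices, not details taken from reference [184].

```python
import numpy as np

def fedavg_aggregate(global_w, client_ws, eta, n):
    """Plain FedAvg step: G_{t+1} = G_t + (eta / n) * sum_i (L_i - G_t)."""
    update = sum(w - global_w for w in client_ws)
    return global_w + (eta / n) * update

def replacement_update(global_w, backdoored_w, eta, n):
    """Malicious submission that (approximately) replaces the aggregated
    global model with the attacker's backdoored model X, assuming the
    benign updates roughly cancel: L = (n / eta) * (X - G_t) + G_t."""
    return (n / eta) * (backdoored_w - global_w) + global_w

# Toy illustration with scalar "weights".
G_t = np.array([0.0])                 # current global model
X = np.array([5.0])                   # attacker's backdoored model
benign = [G_t + 0.01, G_t - 0.01]     # benign updates that nearly cancel
n, eta = 3, 1.0                       # 3 participants, server learning rate 1.0

malicious = replacement_update(G_t, X, eta, n)
G_next = fedavg_aggregate(G_t, benign + [malicious], eta, n)
print(G_next)                         # ~[5.0]: the global model is replaced by X
```

The scaling factor n/eta is what makes a single compromised client dominate the average, which is why the citing work treats the resulting attack model as effectively white box.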
“…With query access, Tramèr et al. [72] proposed a model stealing attack against a machine learning model. Since then, researchers in fields like computer vision [78,89], generative adversarial networks [26], and recommendation systems [90] have explored model stealing attacks. Traditional model stealing attacks assume that the attacker can obtain sufficient data to achieve their goal.…”
Section: Related Work
confidence: 99%
“…In addition, the clone model can be used for subsequent adversarial attacks [3,18], membership inference attacks [27,31], etc. Model stealing attacks have been widely studied in fields such as images [35,40] and graphs [6,38], but have received little attention in recommender systems. Yue et al. [42] proposed to exploit the autoregressive nature of sequential recommender systems to steal their internal information.…”
Section: Related Work
confidence: 99%
“…Machine learning as a service (MLaaS) has gained significant popularity due to its ease of deployment and cost-effectiveness, providing users with pre-trained models and APIs. Unfortunately, MLaaS is susceptible to privacy attacks, with Model Stealing Attacks (MSA) being particularly harmful (Tramèr et al. 2016; Orekondy, Schiele, and Fritz 2019; Jagielski et al. 2020; Yuan et al. 2022; Wang et al. 2022), where an attacker can train a clone model by querying its public API, without accessing its parameters or training data. This attack not only poses a threat to intellectual property but also compromises the privacy of individuals whose data was used to train the original model.…”
Section: Introduction
confidence: 99%
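The query-based stealing loop described in these excerpts can be sketched in a few lines: the attacker sends surrogate inputs to the victim's API, keeps only the returned hard labels, and fits a clone to those labels with cross-entropy. This is a minimal generic sketch under the hard-label setting, not the erasing-based Black-Box Dissector procedure itself; the toy models, the random query batches, and the helper names (query_victim, steal_step) are placeholders chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical black-box victim: only hard labels (top-1 class) come back,
# mimicking a deployed MLaaS API; the attacker never sees logits or weights.
def query_victim(victim_model, x):
    with torch.no_grad():
        return victim_model(x).argmax(dim=1)      # hard labels only

def steal_step(clone_model, victim_model, optimizer, query_batch):
    """One clone-training step: label the query batch through the API,
    then fit the clone to those hard labels with cross-entropy."""
    pseudo_labels = query_victim(victim_model, query_batch)
    logits = clone_model(query_batch)
    loss = F.cross_entropy(logits, pseudo_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage on random "images"; a real attack would draw queries from a
# surrogate or synthetic dataset instead.
victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
clone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.Adam(clone.parameters(), lr=1e-3)
for _ in range(5):
    batch = torch.randn(16, 3, 32, 32)
    steal_step(clone, victim, opt, batch)
```

Practical attacks layer query-efficiency tricks on top of this loop, such as the erasing-based strategy studied in the cited paper, to extract more information from each hard-label response.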