2020
DOI: 10.1007/978-3-030-52243-8_1

Preventing Neural Network Weight Stealing via Network Obfuscation

Cited by 5 publications (9 citation statements)
References 2 publications
“…During the verification phase, the watermark can be extracted from a marking layer's weights. Aiming to be robust without impacting accuracy, RIGA performs white-box watermarking with a competitive (adversarial) training strategy [24]. Fig.…”
Section: Evaluation of DNN IP Protection Methods
Citation type: mentioning (confidence: 99%)
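The weight-based verification step described above can be sketched as follows. This is not RIGA's actual algorithm: it is a minimal Uchida-style white-box watermark in which the secret projection matrix `X`, the bit count, and the perceptron-style embedding loop are all illustrative assumptions; real schemes fold the embedding into the training loss so task accuracy is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical marking layer: the flattened weights of one layer.
w = rng.normal(size=256)

# Owner's secret: a projection matrix X and the watermark bits b.
n_bits = 32
X = rng.normal(size=(n_bits, w.size))
b = rng.integers(0, 2, size=n_bits)

# Embedding (sketch): perceptron-style updates push sign(X @ w) toward b.
for _ in range(500):
    pred = (X @ w > 0).astype(int)
    if (pred == b).all():
        break
    w -= 0.01 * (X.T @ (pred - b))

# Verification: extract the watermark from the marking layer's weights
# and compare it against the owner's secret bits.
extracted = (X @ w > 0).astype(int)
print((extracted == b).all())  # → True
```

Because the constraints sign(X·w) = b are almost surely satisfiable for 32 random directions in 256 dimensions, the update loop terminates quickly; only the holder of `X` and `b` can run the verification.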
“…Existing model defense methods can be categorized into two groups: (1) defending against query-based attacks, and (2) defending against side-channel attacks. To defend against query-based attacks, some studies [21,29,31,41,42,51] propose strategies to degrade the effectiveness of query-based model extraction, while other studies [41,42,51] propose training a simulating model that performs similarly to the original model but is more resilient to query-based attacks.…”
Section: Existing Model Parsing and Defenses
Citation type: mentioning (confidence: 99%)
“…For defending against query-based attacks, some studies [21,29,31,41,42,51] propose different strategies to degrade the effectiveness of query-based model extraction, while other studies [41,42,51] propose training a simulating model that performs similarly to the original model but is more resilient to query-based attacks. For securing the AI model at the side-channel level, a recent work modifies the CPU and memory costs to resist model extraction attacks [25].…”
Footnotes from the citing paper:
1 https://www.tensorflow.org/lite/convert/index
2 https://google.github.io/flatbuffers/
3 schema file (The link is too long to display)
Section: Existing Model Parsing and Defenses
Citation type: mentioning (confidence: 99%)
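A hedged sketch of the first defense family, degrading what each query reveals to an extraction adversary: the function below perturbs the returned probability vector while keeping the top-1 label intact, so benign accuracy is preserved. The noise scale and the argmax-restoring step are illustrative choices, not the mechanism of any specific cited work.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def protected_predict(logits, rng, noise_scale=0.05):
    """Return a perturbed probability vector: the top-1 label is unchanged,
    but the full distribution leaks less to a model-extraction adversary."""
    probs = softmax(logits)
    top1 = int(np.argmax(probs))
    noisy = probs + rng.normal(scale=noise_scale, size=probs.shape)
    noisy = np.clip(noisy, 1e-6, None)
    noisy /= noisy.sum()
    # Restore the winning class if noise flipped the argmax.
    if int(np.argmax(noisy)) != top1:
        noisy[top1] = noisy.max() + 1e-3
        noisy /= noisy.sum()
    return noisy

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0])
out = protected_predict(logits, rng)
print(np.argmax(out) == np.argmax(logits))  # → True
```

An attacker training a surrogate on these perturbed outputs fits noisy soft labels, which degrades the fidelity of the stolen model while honest users still receive the correct class.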
“…Table 6 displays a proposed taxonomy and classifies defence approaches [48, 101–103]. An important distinction between defences is their mode and goal: reactive, i.e. detection of an (ongoing or past) attack, or pro-active, i.e.…”
Section: Defences Against Model Stealing
Citation type: mentioning (confidence: 99%)
“…Szentannai et al. [103] implemented a defence for NNs with fully connected layers. The authors proposed transforming a model into a functionally equivalent model, but with so-called sensitive weights, which make the model less robust and thus make functionality stealing more difficult.…”
Section: Data Perturbation
Citation type: mentioning (confidence: 99%)
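The functionally equivalent transformation described above can be illustrated, under assumptions, by the simplest such rewrite: duplicating a hidden neuron and splitting its outgoing weights. The layer sizes, the chosen neuron `j`, and the split factor `alpha` are arbitrary illustrative values, and this is only one transformation in the spirit of the paper, not its exact construction; the point is that the individual weights change while the computed function does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fully connected net: x -> ReLU(W1 x + b1) -> W2 h + b2
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def forward(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

# Obfuscation sketch: duplicate hidden neuron j and split its outgoing
# weights unevenly. Both copies compute the same activation, and
# alpha + (1 - alpha) = 1 restores the original contribution, so the
# network is functionally equivalent while no single weight matches.
j, alpha = 2, 0.73
W1_o = np.vstack([W1, W1[j:j+1]])          # copy neuron j's incoming weights
b1_o = np.append(b1, b1[j])
W2_o = np.hstack([W2, (1 - alpha) * W2[:, j:j+1]])
W2_o[:, j] *= alpha                         # split the outgoing weights

x = rng.normal(size=4)
same = np.allclose(forward(x, W1, b1, W2, b2),
                   forward(x, W1_o, b1_o, W2_o, b2))
print(same)  # → True
```

Repeating such rewrites across many neurons yields weights that look nothing like the original, so an attacker who copies them gains no insight into the trained parameters, yet every prediction is identical.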