2022 IEEE 28th International Symposium on On-Line Testing and Robust System Design (IOLTS)
DOI: 10.1109/iolts56730.2022.9897693

A Closer Look at Evaluating the Bit-Flip Attack Against Deep Neural Networks

Abstract: Deep neural network models are massively deployed on a wide variety of hardware platforms. This results in the appearance of new attack vectors that significantly extend the standard attack surface, which has been extensively studied by the adversarial machine learning community. One of the first attacks that aims at drastically dropping the performance of a model by targeting its parameters (weights) stored in memory is the Bit-Flip Attack (BFA). In this work, we point out several evaluation challenges related to the BFA. …
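As a rough intuition for why such an attack can be so damaging, the sketch below (illustrative only, not the authors' code; the weight value and bit position are hypothetical) flips a single bit of an 8-bit quantized weight, the kind of stored parameter a BFA-style attack corrupts, and shows how far the value moves.

import numpy as np

# Illustrative only: flip one bit of an 8-bit signed (quantized) weight,
# mimicking the kind of memory corruption a bit-flip attack induces.
def flip_bit(weight, bit_index):
    bits = np.array([weight], dtype=np.int8).view(np.uint8)  # reinterpret the stored byte
    bits ^= np.uint8(1 << bit_index)                          # flip the targeted bit
    return int(bits.view(np.int8)[0])

w = 25                       # hypothetical quantized weight value
w_attacked = flip_bit(w, 7)  # flip the sign bit (bit 7)
print(w, "->", w_attacked)   # 25 -> -103: one flip yields a large weight deviation

In a real attack the flipped bits are chosen to maximize accuracy loss (e.g., via a gradient-guided bit search) and injected physically, for instance with Rowhammer; the point here is only that a single flipped bit can move a weight far from its trained value.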

Cited by 6 publications (1 citation statement) · References 14 publications
“…When dealing with parameter extraction, a usual assumption is that the adversary knows A_M. This is the case for cryptanalysis-like approaches [2,7], active learning techniques [16,19], and recent efforts relying on physical attacks such as side-channel (SCA) [1,8] or fault injection (FIA) [18,6] analysis. Interestingly, whatever the adversarial goal, the victim model architecture is crucial information: it is compulsory for fidelity scenarios, and its knowledge significantly strengthens the attacker's ability to succeed in task-performance ones [15].…”
Section: Model Extraction
Mentioning confidence: 99%