2019 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/sp.2019.00023

DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model

Abstract: Deep learning (DL) models are inherently vulnerable to adversarial examples, i.e., maliciously crafted inputs that trigger target DL models to misbehave, which significantly hinders the application of DL in security-sensitive domains. Intensive research on adversarial learning has led to an arms race between adversaries and defenders. This plethora of emerging attacks and defenses raises many questions: Which attacks are more evasive, preprocessing-proof, or transferable? Which defenses are more effective, utility-preserving…

Cited by 122 publications (98 citation statements). References 32 publications.
“…We then perform adversarial training using T_1, T_2, ... and T_n, yielding M_1, M_2, ... and M_n, respectively, just like fixing software bugs disclosed by new test inputs. Then we use existing methods [37] to measure model robustness and study the correlations between robustness and coverage. Ideally, we would expect to see these models have increasing levels of robustness.…”
Section: Research Questions
confidence: 99%
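
To make the quoted setup concrete, here is a minimal sketch in PyTorch, using toy data and one-step FGSM as a stand-in attack; the helper names (fgsm, robust_accuracy) and the T_i/M_i naming are illustrative assumptions, not code from the citing paper or from [37].

import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.1):
    # One-step FGSM: perturb x along the sign of the loss gradient.
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def robust_accuracy(model, x, y, eps=0.1):
    # Accuracy on perturbed inputs: a simple proxy for model robustness.
    return (model(fgsm(model, x, y, eps)).argmax(1) == y).float().mean().item()

torch.manual_seed(0)
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))  # toy stand-in data
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for i in range(1, 4):  # rounds on T_1..T_3, yielding M_1..M_3
    for _ in range(100):  # adversarial training: fit on perturbed inputs
        loss = nn.functional.cross_entropy(model(fgsm(model, x, y)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"M_{i} robust accuracy: {robust_accuracy(model, x, y):.3f}")

If coverage-guided inputs indeed expose distinct weaknesses, the printed robust accuracies would be expected to rise across rounds, mirroring the correlation the authors look for.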
“…They fall into three categories: model accuracy in the presence of adversarial examples, adversarial example imperceptibility, which measures whether an adversarial example looks natural, and adversarial example robustness. These are the metrics commonly used by adversarial machine learning [37,39]. Details are explained in the subsections.…”
Section: DNN Model Quality Metrics
confidence: 99%
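
As a rough illustration of those three metric families, the sketch below assumes paired tensors x (originals), x_adv (their adversarial versions), and labels y; the specific L2/L-inf distortion and noise-stability proxies are common choices in the literature, not necessarily the exact definitions used in [37,39].

import torch

def adversarial_accuracy(model, x_adv, y):
    # Category 1: model accuracy in the presence of adversarial examples.
    return (model(x_adv).argmax(1) == y).float().mean().item()

def imperceptibility(x, x_adv):
    # Category 2: perturbation distortion; small L2 / L-inf norms suggest
    # the adversarial example still looks natural to a human.
    delta = (x_adv - x).flatten(1)
    return {"l2": delta.norm(dim=1).mean().item(),
            "linf": delta.abs().amax(dim=1).mean().item()}

def robustness_to_noise(model, x_adv, y, sigma=0.02, n=8):
    # Category 3 (one proxy): does the example stay misclassified when
    # small random noise, e.g. from preprocessing, is added?
    still_wrong = 0.0
    for _ in range(n):
        noisy = x_adv + sigma * torch.randn_like(x_adv)
        still_wrong += (model(noisy).argmax(1) != y).float().mean().item()
    return still_wrong / n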
“…However, such models are inherently vulnerable to adversarial inputs, which are maliciously crafted samples (typically by adding human-imperceptible noise to legitimate samples) to trigger target models to misbehave [17,47]. Despite the plethora of work on the image domain [24,28,32,34,45,56] and text domain [30,31], research on adversarial attacks in the audio domain is still limited, due to a number of non-trivial challenges. First, acoustic systems need to deal with information changes in the time dimension, which is more complex than image classification systems.…”
Section: Introduction
confidence: 99%
“…Another line of work attempts to improve DNN resilience against adversarial attacks by devising new training strategies (e.g., adversarial training) [22,28,39,49] or detection mechanisms [19,33,35,53]. However, the existing defenses are often penetrated or circumvented by even stronger attacks [2,30], resulting in a constant arms race between the attackers and defenders.…”
Section: Related Work
confidence: 99%
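
To give a flavor of what such a detection mechanism can look like, here is a deliberately simple, hypothetical sketch that flags inputs whose predictions are unstable under small random noise; it illustrates the general idea only, not any of the specific detectors cited above [19,33,35,53].

import torch

def flag_unstable_inputs(model, x, sigma=0.05, n=16, threshold=0.8):
    # Predict on n noisy copies of each input; inputs whose noisy
    # predictions agree with the clean prediction less than `threshold`
    # of the time are flagged as potentially adversarial.
    base = model(x).argmax(1)
    agree = torch.zeros(x.size(0))
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        agree += (model(noisy).argmax(1) == base).float()
    return (agree / n) < threshold  # boolean mask of flagged inputs

Defenses of this form are exactly the kind that stronger, adaptive attacks [2,30] are designed to evade, which is what sustains the arms race described above.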