2020 · Preprint
DOI: 10.48550/arxiv.2007.00753

Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey

Abstract: As we seek to deploy machine learning models beyond virtual and controlled domains, it is critical to analyze not only the accuracy, or the fact that a model works most of the time, but whether such a model is truly robust and reliable. This paper studies strategies for implementing adversarially robust training algorithms toward guaranteeing safety in machine learning. We provide a taxonomy to classify adversarial attacks and defenses, formulate the Robust Optimization problem in a min-max setting, and divide it in…
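For orientation, the min-max Robust Optimization objective referred to in the abstract is commonly written as below. This is the standard adversarial-training formulation over a norm-bounded perturbation set, not a reproduction of the paper's exact notation; the symbols (θ for model parameters, δ for the perturbation, ε for its budget, L for the loss) are assumptions made here for illustration.

```latex
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[ \max_{\|\delta\|_{p} \le \epsilon} \mathcal{L}\big(f_{\theta}(x + \delta),\, y\big) \Big]
```

The outer minimization trains the model parameters, while the inner maximization searches for the worst-case perturbation of each input.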

Cited by 17 publications (22 citation statements) · References 71 publications

“…Theorem 4.1: Consider a multi-layer NN Ψ : ℝ^{n_x} → ℝ^{n_y} described by (2), with nonlinear activation function sector bounded as in (12). Consider the matrix inequality…”
Section: Multi-layer Neural Network (mentioning)
confidence: 99%
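The quoted theorem is truncated by the excerpt, and the bracketed equation numbers (2) and (12) refer to the citing paper and are not reproduced here. For orientation only, a common scalar sector condition of the kind referenced, with assumed sector bounds α ≤ β, is:

```latex
\big(\varphi(x) - \alpha x\big)\,\big(\beta x - \varphi(x)\big) \ge 0 \qquad \forall x \in \mathbb{R}
```

i.e., the graph of the activation φ lies between the lines αx and βx (ReLU satisfies this with α = 0, β = 1); such sector bounds are what allow the network to be analyzed through matrix inequalities.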
“…Up till now, verification of NN has been primarily focused on adversarial robustness, and can be divided into exact and inexact approaches. Exact approaches calculate the NN output set without any approximation, whereas inexact methods seek to approximate the output set for computational tractability [12]. Moreover, deriving guarantees for nonlinear, large-scale, complex policies such as an NN is a significant technical challenge, and there have been increasing efforts towards this direction [14]- [16].…”
Section: Introduction (mentioning)
confidence: 99%
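To make the "inexact" (over-approximating) verification approach mentioned above concrete, here is a minimal interval bound propagation sketch for one affine layer followed by a ReLU. It is an illustrative example under assumed shapes and names, not code from the cited works.

```python
import numpy as np

def affine_interval_bounds(W, b, lower, upper):
    # Propagate the axis-aligned box [lower, upper] through y = W @ x + b.
    # Splitting W into positive and negative parts yields sound element-wise
    # output bounds -- an over-approximation of the true output set.
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    y_lower = W_pos @ lower + W_neg @ upper + b
    y_upper = W_pos @ upper + W_neg @ lower + b
    return y_lower, y_upper

def relu_interval_bounds(lower, upper):
    # ReLU is monotone, so the box maps element-wise.
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Example: bound the outputs over an L-infinity ball of radius eps around x.
rng = np.random.default_rng(0)
W, b = rng.standard_normal((3, 4)), rng.standard_normal(3)
x, eps = rng.standard_normal(4), 0.1
lo, hi = affine_interval_bounds(W, b, x - eps, x + eps)
lo, hi = relu_interval_bounds(lo, hi)
print(lo, hi)  # every reachable output lies inside [lo, hi]
```

Exact methods instead compute the reachable output set without such relaxation, at much higher computational cost.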
“…Defense on HAR presents several challenges. First, most existing AT methods seek to resist attacks by sampling the most aggressive adversarial sample [44], but ignore the overall distribution of adversarial samples. There are a few exceptions [14,66], but they are solely designed for static data and therefore assume a simple structure of the adversarial distribution.…”
Section: Introduction (mentioning)
confidence: 99%
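The "most aggressive adversarial sample" that the quote attributes to most adversarial training (AT) methods is usually approximated by projected gradient ascent on the loss. Below is a minimal PyTorch-style sketch of that inner maximization, assuming image-like inputs in [0, 1] and an L-infinity budget eps; the function name and parameter values are illustrative assumptions, not the cited methods' implementation.

```python
import torch

def pgd_inner_max(model, x, y, loss_fn, eps=8 / 255, alpha=2 / 255, steps=10):
    # Approximate the inner maximization of the min-max objective by projected
    # gradient ascent on the loss within the L-infinity ball of radius eps
    # around the clean input x.
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)  # random start
    x_adv = torch.clamp(x_adv, 0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                   # keep inputs valid
    return x_adv.detach()

# In adversarial training, the outer step then minimizes
# loss_fn(model(pgd_inner_max(model, x, y, loss_fn)), y) over the model parameters.
```

Sampling only this single worst-case point is exactly what the quote criticizes: the rest of the adversarial distribution around each input is ignored.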
“…Two of the most common applications of deep learning models are: computer vision (CV), where the goal is to teach machines how to see and perceive things like humans do; Natural Language Processing (NLP) and Natural Language Understanding (NLU), where the goal is to analyze and comprehend large amounts of natural language data. These deep learning models have achieved tremendous success in image recognition [6], [7], [8], speech recognition [9], [10], [11], [12], [13], natural language processing and understanding [14], [15], [16], [17], [18], video analytics [19], [20], [21], [22], [23], cyber security [24], [25], [26], [27], [28], [29], [30]. The most common approach towards machine and/or deep learning is supervised learning, where large number of data samples, towards a particular application, are collected along with their respective labels and formed as a dataset.…”
Section: Introduction (mentioning)
confidence: 99%