2019
DOI: 10.48550/arxiv.1903.09799
Preprint
Improving Adversarial Robustness via Guided Complement Entropy

Cited by 3 publications (4 citation statements)
References 15 publications
“…From the defenders' side, recently proposed methods for improving the safety of deep learning systems include [2-4, 9, 12, 18, 19, 21, 26, 27, 31, 32, 34, 37, 40, 43, 45, 47, 50, 53, 56, 57]. Most of these methods fall broadly into the following several classes: (1) adversarial training where the adversarial samples are used for retraining the deep learning systems [3, 22, 45, 48, 50]; (2) gradient masking where the deep learning system is designed to have an extremely flat loss function landscape with respect to the perturbations in input samples [4, 40]; (3) feature discretization where we simply discretize the features of samples (both benign samples and adversarial samples) before we feed it to the deep learning systems [37, 57]; (4) generative model based approach where we find a sample from the distribution of benign samples to approximate an arbitrary given sample, and then use the approximation as input for the deep learning systems [18, 21, 26, 31, 32, 43, 47].…”
Section: Related Work (mentioning)
confidence: 99%
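
The feature-discretization defense described as class (3) in the excerpt above can be illustrated with a minimal, self-contained sketch. This is only an illustration of the general idea: the function name discretize_features and the parameter levels are placeholders chosen here and do not come from the cited works.

```python
import numpy as np

def discretize_features(x, levels=8):
    """Quantize inputs in [0, 1] to a fixed number of levels.

    Rounding each feature to the nearest of `levels` bins removes
    small perturbations before the sample reaches the model.
    """
    x = np.clip(x, 0.0, 1.0)
    bins = levels - 1
    return np.round(x * bins) / bins

# Usage: a slightly perturbed pixel vector collapses back to coarse values.
clean = np.array([0.10, 0.52, 0.87])
perturbed = clean + np.array([0.03, -0.02, 0.04])   # small adversarial noise
print(discretize_features(perturbed))               # e.g. [0.143 0.571 0.857] with levels=8
```

The same quantization is applied to benign and adversarial samples alike, so the classifier only ever sees inputs from a small discrete set of feature values.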
“…following the standard Gaussian distribution N(0, 1). From the concentration of measure [15], for any positive ε < 1, ….”
Section: Classifiers With Linear Compression Functions (mentioning)
confidence: 99%
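
The excerpt above is truncated, so the exact inequality it invokes cannot be recovered from this page. As background only, the standard concentration-of-measure bound for a standard Gaussian vector (Lipschitz concentration) is shown below; this is a general textbook statement, not necessarily the specific bound used in the citing paper.

```latex
% Standard Gaussian concentration of measure (general form; the exact
% inequality in the truncated excerpt is not recoverable from this page).
% For X ~ N(0, I_n) and any 1-Lipschitz function f : R^n -> R,
\[
  \Pr\bigl(\,\lvert f(X) - \mathbb{E}[f(X)] \rvert \ge t \,\bigr)
  \;\le\; 2\exp\!\left(-\frac{t^{2}}{2}\right), \qquad t > 0 .
\]
```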