2020 · Preprint
DOI: 10.48550/arxiv.2007.05123

Improving Adversarial Robustness by Enforcing Local and Global Compactness

Abstract: The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application. Among the many defense models developed against such attacks, adversarial training emerges as the most successful method, consistently resisting a wide range of attacks. In this work, based on an observation from a previous study that the representations of a clean data example and its adversarial examples become more divergent in higher layers of a deep neural …
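The truncated abstract points at the paper's core idea: discouraging clean and adversarial latent representations from diverging in higher layers. As a purely illustrative reading, the minimal PyTorch sketch below adds a local-compactness penalty on top of the adversarial cross-entropy loss; the `features`/`classifier` split, the squared-distance penalty, and the weight `lam` are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: adversarial loss plus a local-compactness penalty that
# keeps each adversarial latent code close to its clean counterpart.
# `model.features`, `model.classifier`, and `lam` are illustrative
# assumptions, not the paper's actual components or hyperparameters.
import torch.nn.functional as F

def compact_adv_loss(model, x_clean, x_adv, y, lam=1.0):
    z_clean = model.features(x_clean)   # latent codes of clean inputs
    z_adv = model.features(x_adv)       # latent codes of adversarial inputs
    ce = F.cross_entropy(model.classifier(z_adv), y)
    # Local compactness: penalize the clean/adversarial divergence that
    # the abstract observes growing in higher layers.
    local = (z_adv - z_clean).pow(2).sum(dim=1).mean()
    return ce + lam * local
```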

Cited by 2 publications (8 citation statements) · References 22 publications
“…The experiments are conducted on the CIFAR10 and CIFAR100 datasets. The comparison in Tables 4, 5, and 6 shows that our ASCL method significantly improves both adversarial-training-based models by around 4% to 5% in robust accuracy. Moreover, our ASCL also outperforms the ADR method by around 1% to 2% with ResNet20 and by 2.6% with ResNet50.…”
Section: Robustness Evaluation (mentioning)
confidence: 97%
“…We apply the two versions, ASCL and Leaked-ASCL, as a regularization on top of two adversarial training methods: PGD adversarial training (ADV) [22] and TRADES [36]. We compare our methods with ADR, the state-of-the-art regularization technique proposed in [4]. The experiments are conducted on the CIFAR10 and CIFAR100 datasets.…”
Section: Robustness Evaluation (mentioning)
confidence: 99%
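This statement composes a regularizer with two standard adversarial training objectives. For reference, a minimal sketch of the TRADES objective (Zhang et al., 2019) is shown below; the trade-off weight `beta` and the use of a separately computed `x_adv` follow common practice and are assumptions here, not details taken from the citing paper.

```python
# Hedged sketch of the TRADES objective: natural cross-entropy plus a
# KL term pushing adversarial predictions toward clean predictions.
# `beta` is the usual trade-off weight, assumed for illustration.
import torch.nn.functional as F

def trades_loss(model, x, x_adv, y, beta=6.0):
    logits_clean = model(x)
    logits_adv = model(x_adv)
    natural = F.cross_entropy(logits_clean, y)
    robust = F.kl_div(F.log_softmax(logits_adv, dim=1),
                      F.softmax(logits_clean, dim=1),
                      reduction='batchmean')
    return natural + beta * robust
```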
“…Many variants of ADV have been developed, including but not limited to: (1) differences in the choice of adversarial examples, e.g., the worst-case examples (I. J. Goodfellow, Shlens, and Szegedy, 2015) or the most divergent examples (Hongyang Zhang et al., 2019); (2) differences in how adversarial examples are searched for, e.g., non-iterative FGSM, Rand FGSM with a random initial point, or PGD with multiple iterative gradient descent steps (Madry et al., 2018; Shafahi et al., 2019); (3) differences in additional regularizations, e.g., adding constraints in the latent space (Haichao Zhang and Wang, 2019; Bui et al., 2020); (4) differences in model architecture, e.g., the activation function (Xie et al., 2020) or ensemble models (Pang, Xu, et al., 2019).…”
Section: Adversarial Training (mentioning)
confidence: 99%
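The first family named above, training on worst-case examples found by projected gradient descent (Madry et al., 2018), is the baseline that the other variants modify. Below is a minimal self-contained PyTorch sketch of one PGD adversarial training step, assuming an L-infinity threat model; the hyperparameters (`eps`, `alpha`, `steps`) are common illustrative choices, not values from the cited papers.

```python
# Hedged sketch of PGD adversarial training (Madry et al., 2018) under
# an L-infinity threat model; hyperparameters are illustrative only.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Search the eps-ball around x for a loss-maximizing example via
    projected gradient descent on the cross-entropy loss."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adv_train_step(model, optimizer, x, y):
    """One adversarial training step: fit the worst-case examples."""
    model.eval()                      # fixed BN/dropout while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```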