2021
DOI: 10.48550/arxiv.2109.05211
Preprint
RobustART: Benchmarking Robustness on Architecture Design and Training Techniques

Abstract: Deep neural networks (DNNs) are vulnerable to adversarial noises, which motivates the benchmark of model robustness. Existing benchmarks mainly focus on evaluating the defenses, but there are no comprehensive studies of how architecture design and general training techniques affect robustness. Comprehensively benchmarking their relationships will be highly beneficial for better understanding and developing robust DNNs. Thus, we propose RobustART, the first comprehensive Robustness investigation benchmark on Im…

Cited by 16 publications
(22 citation statements)
References 63 publications
“…Other advances include TRADES, FAT, GAIRAT (Zhang et al., 2021), Song et al. (2020), Jiang et al. (2021), Stutz et al. (2020), Singla & Feizi (2020), and Wu et al. (2020a). Tang et al. (2021) propose an adversarial robustness benchmark regarding architecture design and training techniques. Some works also attempt to interpret how machine learning models gain robustness (Ilyas et al., 2019; Zhang & Zhu, 2019).…”
Section: Related Workmentioning
confidence: 99%
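For context on the adversarial-training methods the passage above lists, the TRADES objective (sketched here from its original formulation; the notation β for the trade-off weight and B(x, ε) for the perturbation ball is standard, not taken from this page) balances natural accuracy against a local-smoothness term:

$$\min_{f}\; \mathbb{E}_{(x,y)}\Big[\, \mathcal{L}_{\mathrm{CE}}\big(f(x),\, y\big) \;+\; \beta \max_{x' \in \mathbb{B}(x,\varepsilon)} \mathrm{KL}\big(f(x)\,\|\,f(x')\big) \Big]$$

The first term drives clean accuracy; the KL term penalizes predictions that change under small perturbations, with β controlling the robustness/accuracy trade-off.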
“…System noises refer to the inaccuracy inherent to a system due to inconsistent implementations of image decoding and resizing, such as ImageNet-S (Wang et al., 2021b; Tang et al., 2021). The system noise adapted to the CIFAR-10 dataset is created in a similar way in our framework.…”
Section: System Noisesmentioning
confidence: 99%
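The passage above describes "system noise" as pixel-level differences caused by inconsistent decode/resize implementations. A minimal sketch of that effect (my own illustration, not code from the cited works): two reasonable resize implementations — nearest-neighbor sampling and block averaging — applied to the same image produce different outputs.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Downsample by picking the nearest source pixel for each output cell."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def resize_average(img, out_h, out_w):
    """Downsample by averaging non-overlapping blocks.

    Assumes h and w are divisible by out_h and out_w, respectively.
    """
    h, w = img.shape
    return img.reshape(out_h, h // out_h, out_w, w // out_w).mean(axis=(1, 3))

# A toy 8x8 "image"; both resizers target 4x4.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8)).astype(float)

a = resize_nearest(img, 4, 4)
b = resize_average(img, 4, 4)

# The two implementations disagree pixel-by-pixel: this gap is the
# "system noise" a model may silently be evaluated under.
print(f"mean absolute difference between implementations: {np.abs(a - b).mean():.2f}")
```

A model whose accuracy drops when the preprocessing pipeline swaps one such implementation for the other is not robust to system noise in the sense used above.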
“…On the one hand, the robustness evaluation is not comprehensive. Current works evaluate robustness under different adversarial attacks and quantitative metrics, but model robustness also needs to account for robust accuracy under other types of noise, such as natural noise (Hendrycks et al., 2021a) and system noise (Wang et al., 2021a; Tang et al., 2021). On the other hand, under adversarial noise, the robustness evaluation is not reliable.…”
Section: Introductionmentioning
confidence: 99%
“…Another important observation is that using WRNs instead of ResNets (RNs) can bring ∼3%-5% more robustness [16][17][18]. Other works also suggest that adversarial training requires deeper and wider models [15,26,55]. Also, the skip-connection operation used in WRN has been found to improve robustness for deeper architectures [56].…”
Section: Understanding Adversarially Trained Dnnsmentioning
confidence: 99%