2019
DOI: 10.48550/arxiv.1905.00877
You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle

Abstract: Deep learning achieves state-of-the-art results in many tasks in computer vision and natural language processing. However, recent works have shown that deep networks can be vulnerable to adversarial perturbations, which raises serious robustness concerns for deep networks. Adversarial training, typically formulated as a robust optimization problem, is an effective way of improving the robustness of deep networks. A major drawback of existing adversarial training algorithms is the computational overhead of the ge…

Cited by 44 publications (65 citation statements) · References 22 publications
“…Due to SISR being a regression task, we can only compare our adversarial attack with FGSM (Goodfellow, Shlens, and Szegedy 2015) and PGD (Madry et al 2018), since the more recent alternatives, such as TRADES (Zhang et al 2019b) and YOPO (Zhang et al 2019a), are tailored for classification tasks.…”
Section: Methodsmentioning
confidence: 99%
“…Existing adversarial defense methods can be roughly divided into two classes: attacking-stage defense and testing-stage defense. Adversarial training [41,62,51,28] is an effective way to improve a model's robustness and provides a defense effect at both stages. Other defense methods also have a two-stage defense effect.…”
Section: Related Workmentioning
confidence: 99%
“…Madry et al [10] developed it by training deep models with stronger adversaries generated by PGD. Subsequent works mainly focus on accelerating training [27], [28] and improving the resistance [29], [30], [31]. Zhang et al [32] characterized the trade-off between accuracy and robustness and proposed TRADES, which optimizes a regularized surrogate loss to improve the adversarial robustness of DNNs.…”
Section: B Adversarial Trainingmentioning
confidence: 99%
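The snippets above repeatedly reference adversarial training with PGD-generated adversaries (Madry et al. 2018). As a minimal sketch of the PGD inner maximization they describe — using NumPy and a hypothetical toy quadratic loss with a hand-written gradient in place of a real network and autograd:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.3, alpha=0.05, steps=10):
    """Projected gradient ascent on the loss, constrained to an
    L-infinity ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                        # gradient of loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(g)        # FGSM-style signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv

# Toy example (not from the paper): loss(x) = ||x - t||^2 for a
# hypothetical target t, so the input gradient is 2 * (x - t).
t = np.array([1.0, -1.0])
x = np.zeros(2)
x_adv = pgd_attack(x, lambda z: 2.0 * (z - t))
print(x_adv)  # perturbation saturates at the eps boundary: [-0.3, 0.3]
```

In full adversarial training, this inner loop runs on every minibatch before the outer weight update, which is exactly the per-iteration overhead that works like YOPO [27], [28] aim to reduce.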