2023
DOI: 10.48550/arxiv.2302.11963

Investigating Catastrophic Overfitting in Fast Adversarial Training: A Self-fitting Perspective

Abstract: Although fast adversarial training provides an efficient approach to building robust networks, it can suffer from a serious problem known as catastrophic overfitting (CO), in which multi-step robust accuracy suddenly collapses to zero. In this paper, we decouple FGSM (fast gradient sign method) examples, for the first time, into data-information and self-information, which reveals an interesting phenomenon called "self-fitting". Self-fitting, i.e., DNNs learning the self-information embedded in single-step perturbations, naturally leads…
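
To ground the setting the abstract refers to, below is a minimal, hypothetical sketch of single-step FGSM adversarial training, the regime in which CO arises. The model, data, and hyperparameters (a linear classifier on random tensors, eps = 8/255) are illustrative placeholders, not the paper's setup or method.

    import torch
    import torch.nn as nn

    # Illustrative stand-ins: a tiny linear classifier and random "images";
    # eps follows the common 8/255 L-infinity budget (an assumption here).
    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    eps = 8 / 255

    x = torch.rand(16, 3, 32, 32)    # placeholder batch in [0, 1]
    y = torch.randint(0, 10, (16,))  # placeholder labels

    # Single-step FGSM perturbation: one gradient-sign step on the input.
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_fgsm = (x + eps * grad.sign()).clamp(0, 1).detach()

    # Train on the FGSM-perturbed batch.
    opt.zero_grad()
    loss_fn(model(x_fgsm), y).backward()
    opt.step()

In the abstract's framing, self-fitting would mean the network latching onto information carried by the single-step perturbation (grad.sign()) itself rather than by the underlying data; under CO, accuracy against multi-step attacks such as PGD collapses even while FGSM-measured robustness stays high.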

Cited by 0 publications
References 19 publications (24 reference statements)
