2020
DOI: 10.48550/arxiv.2009.08435
Preprint
Large Norms of CNN Layers Do Not Hurt Adversarial Robustness

Abstract: Since the Lipschitz properties of convolutional neural networks (CNNs) are widely considered to be related to adversarial robustness, we theoretically characterize the ℓ1 norm and ℓ∞ norm of 2D multi-channel convolutional layers and provide efficient methods to compute the exact ℓ1 norm and ℓ∞ norm. Based on our theorem, we propose a novel regularization method termed norm decay, which can effectively reduce the norms of CNN layers. Experiments show that norm-regularization methods, including norm decay, weight decay…
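The abstract's claim that the exact operator norms of a multi-channel convolutional layer can be computed efficiently can be illustrated numerically. The sketch below is an assumption-laden illustration, not the paper's code or exact theorem: it assumes stride 1 and zero "same" padding, materializes a small convolution as an explicit matrix via unit impulses, and checks the induced ∞-norm (maximum absolute row sum) against a closed form — the maximum, over output channels, of the total absolute kernel weight — which holds whenever the spatial size is large enough for interior output positions to exist.

```python
import numpy as np

def conv2d_same(x, w):
    """Stride-1, zero 'same'-padded 2D convolution.
    x: (C_in, H, W), w: (C_out, C_in, k, k) with odd k."""
    C_out, C_in, k, _ = w.shape
    H, W = x.shape[1:]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    y = np.zeros((C_out, H, W))
    for co in range(C_out):
        for i in range(H):
            for j in range(W):
                y[co, i, j] = np.sum(w[co] * xp[:, i:i + k, j:j + k])
    return y

def conv_matrix(w, C_in, H, W):
    """Materialize the conv layer as an explicit matrix by feeding
    unit impulses; columns index inputs, rows index outputs."""
    cols = []
    for ci in range(C_in):
        for i in range(H):
            for j in range(W):
                e = np.zeros((C_in, H, W))
                e[ci, i, j] = 1.0
                cols.append(conv2d_same(e, w).ravel())
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 2, 3, 3))          # (C_out, C_in, k, k), hypothetical sizes
A = conv_matrix(w, C_in=2, H=5, W=5)

inf_norm_exact = np.abs(A).sum(axis=1).max()       # induced ∞-norm: max abs row sum
closed_form = np.abs(w).sum(axis=(1, 2, 3)).max()  # max over output channels
assert np.isclose(inf_norm_exact, closed_form)
```

Interior output positions see every kernel tap, so their row sums equal the per-output-channel total absolute weight; boundary rows are smaller, so the maximum is attained in the interior. The dual statement (induced ℓ1 norm as a maximum column sum over input channels) follows the same pattern.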

Cited by 2 publications (2 citation statements)
References 18 publications
“…The method is based on the optimization of a loss taking into account both natural accuracy and adversarial robustness. Finally, another line of research is devoted to the analysis and estimation of the Lipschitz constant of DNNs, as a guarantee of stability and robustness to be enforced during training [23,24,25]. The present work complements prior studies on the properties of robust models and provides arguments to resolve contrasting results found in the literature.…”
Section: Related Work
confidence: 74%
“…Our findings suggest a novel direction of investigation into the mechanism through which local robustness may be implemented by adversarially trained CNNs, namely a coupling between feature maps. Notice that in [25], the authors propose, although for a very simple DNN, a coupling between subsequent layers as a viable solution for achieving small Lipschitz constants, regardless of their norm. Results presented above suggest that AT might exploit similar schemes.…”
Section: Feature Maps Redundancy
confidence: 99%