Recent research on vision-based tasks has achieved great improvement due to the development of deep learning solutions. However, deep models have been found vulnerable to adversarial attacks, in which the original inputs are maliciously manipulated to cause dramatic shifts in the outputs. In this paper, we focus on adversarial attacks against image classifiers built with deep neural networks and propose a model-agnostic approach to detecting adversarial inputs. We argue that the logit semantics of adversarial inputs evolve differently from those of original inputs, and we construct a logits-based feature embedding for effective representation learning. We then train an LSTM network on the sequence of logits-based features to detect adversarial examples. Experimental results on the MNIST, CIFAR-10, and CIFAR-100 datasets show that our method achieves state-of-the-art accuracy in detecting adversarial examples and generalizes well.
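As an illustrative sketch of the detection pipeline described above (a minimal LSTM cell of our own, not the paper's architecture; the parameter layout and the 0.5 decision threshold are assumptions), a sequence of logits-based feature vectors can be scored like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_detect(logit_seq, params):
    """Run a minimal LSTM cell over a sequence of logits-based
    feature vectors and score the final hidden state; a score
    above 0.5 would flag the input as adversarial."""
    Wf, Wi, Wo, Wc, v = params
    h = np.zeros(Wf.shape[0])  # hidden state
    c = np.zeros(Wf.shape[0])  # cell state
    for x in logit_seq:
        z = np.concatenate([h, x])
        f = sigmoid(Wf @ z)            # forget gate
        i = sigmoid(Wi @ z)            # input gate
        o = sigmoid(Wo @ z)            # output gate
        c = f * c + i * np.tanh(Wc @ z)
        h = o * np.tanh(c)
    return sigmoid(v @ h)              # detection score in (0, 1)
```

In practice the gate weights and the readout vector `v` would be learned from labeled clean/adversarial logit sequences; the sketch only shows the data flow.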
The high training cost caused by multi-step adversarial example generation is a major challenge in adversarial training. Previous methods try to reduce the computational burden of adversarial training using single-step adversarial example generation schemes, which effectively improve efficiency but also introduce the problem of "catastrophic overfitting", in which the robust accuracy against the Fast Gradient Sign Method (FGSM) reaches nearly 100% while the robust accuracy against Projected Gradient Descent (PGD) suddenly drops to 0% within a single epoch. To address this problem, we propose a novel Fast Gradient Sign Method with PGD Regularization (FGSMPR) that boosts the efficiency of adversarial training without catastrophic overfitting. Our core idea is that single-step adversarial training cannot learn robust internal representations of FGSM and PGD adversarial examples. Therefore, we design a PGD regularization term that encourages similar embeddings of FGSM and PGD adversarial examples. The experiments demonstrate that our proposed method can train a deep network robust to L∞-perturbations with FGSM adversarial training and narrow the gap to multi-step adversarial training.
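As a minimal sketch (not the authors' implementation; the function names and the squared-distance form of the regularizer are our assumptions), the single-step FGSM and multi-step PGD perturbations, and an embedding-similarity penalty in the spirit of the PGD regularization term above, can be written as:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Single-step FGSM: one signed-gradient step of size eps."""
    return x + eps * np.sign(grad)

def pgd_perturb(x, grad_fn, eps, alpha, steps):
    """Multi-step PGD: repeated signed steps of size alpha, each
    projected back into the L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # L-inf projection
    return x_adv

def pgd_regularizer(emb_fgsm, emb_pgd):
    """Hypothetical regularization term: penalize the squared
    distance between the embeddings a network assigns to the
    FGSM and PGD adversarial examples of the same input."""
    return np.mean((emb_fgsm - emb_pgd) ** 2)
```

For a loss with constant gradient (e.g. a linear model), PGD with `alpha * steps >= eps` saturates at the same corner of the L∞ ball as FGSM; on real networks the two diverge, and that divergence is what a regularizer of this kind penalizes.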
In response to the threat of adversarial examples, adversarial training provides an attractive option for enhancing model robustness by training models on online-augmented adversarial examples. However, most existing adversarial training methods focus on improving robust accuracy by strengthening the adversarial examples while neglecting the increasing shift between natural data and adversarial examples, leading to a dramatic decrease in natural accuracy. To maintain the trade-off between natural and robust accuracy, we alleviate this shift from the perspective of feature adaptation and propose Feature Adaptive Adversarial Training (FAAT), which optimizes class-conditional feature adaptation across natural data and adversarial examples. Specifically, we incorporate a class-conditional discriminator to encourage the features to become (1) class-discriminative and (2) invariant to changes in adversarial attacks. The FAAT framework enables the trade-off between natural and robust accuracy by generating features with similar distributions across natural and adversarial data, and it achieves higher overall robustness by benefiting from class-discriminative features. Experiments on various datasets demonstrate that FAAT produces more discriminative features and performs favorably against state-of-the-art methods. Code is available at https://github.com/VisionFlow/FAAT.
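A toy sketch of the class-conditional discriminator idea (our own illustration under assumed shapes, not the released FAAT code): features are concatenated with a one-hot class code, and a logistic discriminator tries to separate natural from adversarial features; the feature extractor would then be trained adversarially to maximize this loss so the two feature distributions align within each class.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def class_conditional_disc_loss(feat_nat, feat_adv, labels, W, num_classes):
    """Binary cross-entropy of a linear discriminator that judges
    'natural vs adversarial' conditioned on the class label."""
    onehot = np.eye(num_classes)[labels]
    nat_in = np.concatenate([feat_nat, onehot], axis=1)
    adv_in = np.concatenate([feat_adv, onehot], axis=1)
    p_nat = sigmoid(nat_in @ W)  # discriminator's "is natural" score
    p_adv = sigmoid(adv_in @ W)
    # The discriminator minimizes this; the feature extractor would
    # maximize it, pushing natural and adversarial features together.
    return -np.mean(np.log(p_nat + 1e-8) + np.log(1.0 - p_adv + 1e-8))
```

When the natural and adversarial features coincide, the best the discriminator can do is a 50/50 guess, which is exactly the aligned regime FAAT aims for.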