2020
DOI: 10.1007/978-3-030-63820-7_65

Training Lightweight yet Competent Network via Transferring Complementary Features


Cited by 7 publications (7 citation statements)
References 9 publications

“…To alleviate AT's time complexity, researchers attempted to attain state-of-the-art robustness with single-step attacks (Zhang et al., 2019a; Shafahi et al., 2019; Wong et al., 2020) by adopting accumulative perturbation and perturbation initialization. Shafahi et al. (2019) employed a single backpropagation step to update the model weights and generate the adversarial perturbations.…”
Section: Literature Review
confidence: 99%
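
The single backpropagation step described above is the core of "free" adversarial training: one backward pass yields both the weight gradient and the input gradient, and the perturbation is accumulated across several replays of the same minibatch. Below is a minimal PyTorch-style sketch of that loop; the names (`model`, `loader`, `epsilon`, `m_replays`) and step sizes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def free_adversarial_training(model, loader, optimizer, epsilon=8/255, m_replays=4):
    delta = None  # perturbation accumulated across replays and minibatches
    for x, y in loader:
        if delta is None or delta.shape != x.shape:
            delta = torch.zeros_like(x)
        for _ in range(m_replays):  # replay the same minibatch m times
            delta.requires_grad_(True)
            loss = F.cross_entropy(model(x + delta), y)
            optimizer.zero_grad()
            loss.backward()   # one backprop: gradients for weights AND delta
            optimizer.step()  # weight update from this same pass
            # reuse the input gradient from the same pass to grow delta
            delta = (delta + epsilon * delta.grad.sign()).clamp(-epsilon, epsilon).detach()
```
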
“…Despite the popularity of adversarial training for defending models, it carries a high cost of generating strong adversarial examples. Free Adversarial Training (Free-AT) [92] reuses the gradient information computed when updating model parameters to generate the adversarial examples, eliminating that overhead. Another optimization of the computational cost considers only the first layer of a network for forward and back propagation, reducing each adversary update to a single layer's propagation; the method is named You Only Propagate Once (YOPO) [93] because the adversary update relates only to the first layer. Exploring the different layers of a network, Latent Adversarial Training (LAT) [94] fine-tunes adversarially trained models to ensure robustness at the latent level, because the latent layer is significantly vulnerable to adversarial perturbations of small magnitude.…”
Section: A. Adversarial Training
confidence: 99%
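
A hedged sketch of the YOPO idea mentioned above: the gradient flowing into the first layer's output (the "slack" term p) is computed once with a full backward pass and then reused, so each inner adversary update only propagates through the first layer. The `first_layer`/`rest` split and the step sizes are assumptions for illustration, not the published implementation.

```python
import torch
import torch.nn.functional as F

def yopo_perturbation(first_layer, rest, x, y, epsilon=8/255, alpha=2/255, inner_steps=5):
    """Update the perturbation by propagating through the first layer only."""
    delta = torch.zeros_like(x, requires_grad=True)
    # One full forward/backward pass gives p = d(loss)/d(first-layer output).
    z = first_layer(x + delta)
    p = torch.autograd.grad(F.cross_entropy(rest(z), y), z)[0].detach()
    for _ in range(inner_steps):
        z = first_layer(x + delta)                        # first layer only
        g = torch.autograd.grad((p * z).sum(), delta)[0]  # cheap backprop
        delta = (delta + alpha * g.sign()).clamp(-epsilon, epsilon)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()
```
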
“…Free AT [26] updates network parameters and generates AEs at the same time. YOPO [27] restricts most of the forward and back propagation to the first layer of the network during AE updates. Fast AT [7] replaces multi-step attacks with FGSM and generates AEs with a single-step gradient.…”
Section: B. Efficient Adversarial Training
confidence: 99%
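
The single-step generation that Fast AT substitutes for multi-step PGD can be sketched as follows; the random initialization of the perturbation follows the Wong et al. (2020) recipe, and all names and step sizes here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8/255, alpha=10/255):
    """One-step (FGSM) adversarial example with a random start."""
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon).detach()
    return (x + delta).clamp(0, 1)  # keep pixels in the valid range
```
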
“…Since the network interacts directly with the perturbation via the first layer, we concentrate our analysis on the first layer, as done in [27]. To further study the non-linear characteristics of the network, the features of all training data after the first activation layer are extracted.…”
Section: Self-Fitting Phenomenon in FAT
confidence: 99%
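
Extracting the features of all training data after the first activation layer, as described above, can be done with a forward hook; the `activation_layer` handle is an assumed placeholder, since the quoted paper does not specify an API.

```python
import torch

def first_activation_features(model, loader, activation_layer):
    """Collect post-activation features of the first layer for all data."""
    feats = []
    handle = activation_layer.register_forward_hook(
        lambda module, inputs, output: feats.append(output.detach().flatten(1))
    )
    model.eval()
    with torch.no_grad():
        for x, _ in loader:
            model(x)  # the hook records features during each forward pass
    handle.remove()
    return torch.cat(feats)
```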