2019
DOI: 10.48550/arxiv.1905.08232
Preprint

Adversarially robust transfer learning

Abstract: Transfer learning, in which a network is trained on one task and re-purposed on another, is often used to produce neural network classifiers when data is scarce or full-scale training is too costly. When the goal is to produce a model that is not only accurate but also adversarially robust, data scarcity and computational limitations become even more cumbersome. We consider robust transfer learning, in which we transfer not only performance but also robustness from a source model to a target domain. We start b…
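The core idea the abstract describes, reusing a robustly trained source network so that robustness carries over to the target task, can be illustrated with a minimal sketch: freeze the adversarially trained feature extractor and fit only a fresh classifier head on the target data. This is an illustration under assumptions, not the paper's exact procedure; it assumes a PyTorch ResNet-style model with a `.fc` head, and `build_target_model`, `train_head`, and all hyperparameters are hypothetical placeholders.

import torch
import torch.nn as nn

def build_target_model(robust_source_model: nn.Module, num_target_classes: int) -> nn.Module:
    # Freeze all source parameters so the robust features are preserved.
    for p in robust_source_model.parameters():
        p.requires_grad = False
    # Swap in a fresh classification head for the target task
    # (assumes a ResNet-style `.fc` attribute; adapt for other architectures).
    in_features = robust_source_model.fc.in_features
    robust_source_model.fc = nn.Linear(in_features, num_target_classes)
    return robust_source_model

def train_head(model: nn.Module, loader, epochs: int = 5, lr: float = 1e-3):
    # Only the new head has requires_grad=True, so the optimizer updates it alone.
    opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

Because the frozen extractor is never updated on target data, no adversarial examples need to be generated during target training, which is what makes this style of transfer attractive when computation is limited.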


Cited by 13 publications (16 citation statements)
References 16 publications
“…Their findings theoretically back the efficacy of adversarial training for robustness. In [356], it is also demonstrated that transfer learning on adversarially robust models retains (to an extent) the robustness effect in the target domain. Sehwag et al. [357] also devised a method for adversarial-training-aware model pruning in resource-constrained environments.…”
Section: Model Alteration for Defense (mentioning)
confidence: 95%
“…There are some recent works studying when and how adversarial robustness transfers in different machine learning settings, such as transfer learning (Hendrycks et al., 2019; Shafahi et al., 2019), representation learning (Chan et al., 2020), and model-agnostic meta-learning (MAML) (Wang et al., 2021a). In contrast, we focus on the setting of knowledge distillation.…”
Section: Related Work (mentioning)
confidence: 99%
“…The current literature considers the scenario where the model is adversarially trained on both the source and target datasets to obtain better robustness [6], or where the model is adversarially trained on the source dataset and naturally trained on the target dataset [16]. In these approaches, while training on the target dataset can be fast because few epochs are required, training the model on the source dataset is costly.…”
Section: Related Work (mentioning)
confidence: 99%