2022
DOI: 10.48550/arxiv.2202.09300
Preprint
Exploring Adversarially Robust Training for Unsupervised Domain Adaptation

Abstract: Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain. UDA has been extensively studied in the computer vision literature. Deep networks have been shown to be vulnerable to adversarial attacks. However, very little focus is devoted to improving the adversarial robustness of deep UDA models, causing serious concerns about model reliability. Adversarial Training (AT) has been considered to be the most successful adversarial defense approa…

Cited by 1 publication (2 citation statements)
References 19 publications
“…Meanwhile, the main challenge in incorporating AT into UDA is the missing label information in the target domain, while AT needs ground-truth labels to generate adversarial examples. To address this issue, existing methods either skip AT (Awais et al. 2021) or use self-supervised methods (Lo and Patel 2022) to generate adversarial examples. For instance, Awais et al. (2021) directly explored robustness transfer in the UDA process instead of using AT, and proposed to use an external pre-trained robust model for robust feature distillation during the UDA process.…”
Section: Adversarial Robustness of UDA Models
confidence: 99%
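The workaround described above, generating adversarial examples for unlabeled target data, is often sketched via pseudo-labels: the model's own prediction stands in for the missing ground truth. The snippet below is a minimal toy illustration of that idea, not an implementation of any cited method; the logistic classifier, the function name `fgsm_with_pseudo_label`, and all parameter values are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_with_pseudo_label(x, w, b, eps):
    # Unlabeled target sample: use the model's own prediction as a
    # pseudo-label, then take one signed-gradient (FGSM-style) step.
    p = sigmoid(x @ w + b)
    pseudo_y = float(p >= 0.5)
    grad_x = (p - pseudo_y) * w          # gradient of BCE loss w.r.t. input
    return x + eps * np.sign(grad_x)     # stays inside the L-inf eps-ball

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.1
x = rng.normal(size=3)
x_adv = fgsm_with_pseudo_label(x, w, b, eps=0.1)
print(np.max(np.abs(x_adv - x)))  # perturbation size, bounded by eps
```

Because the signed-gradient step follows the loss gradient with respect to the pseudo-label, it increases that surrogate loss, which is exactly why pseudo-label quality matters: a wrong pseudo-label maximizes the wrong objective.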
“…Despite its effectiveness, its performance is limited by the teacher model's perturbation budget and is sensitive to the teacher model's architecture. On the other hand, Lo et al. (Lo and Patel 2022) proposed self-supervised adversarial example generation to inject AT into UDA. Unfortunately, such adversarial example generation cannot guarantee the inner maximization in AT, thus leading to unsatisfactory model robustness.…”
Section: Adversarial Robustness of UDA Models
confidence: 99%
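The inner-maximization point above can be made concrete: standard AT approximates the maximum of the loss over an eps-ball with multi-step projected gradient descent (PGD), a guarantee a single self-supervised perturbation step need not match. Below is a minimal PGD sketch on a toy logistic model, offered as an assumption-laden illustration of the general technique rather than any cited paper's procedure; all names and hyperparameters are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def pgd_attack(x, y, w, b, eps=0.1, step=0.03, iters=10):
    # Inner maximization of AT: repeatedly ascend the loss gradient,
    # projecting the perturbation back into the L-inf eps-ball each step.
    x_adv = x.copy()
    for _ in range(iters):
        p = sigmoid(x_adv @ w + b)
        g = (p - y) * w                            # grad of BCE w.r.t. input
        x_adv = x_adv + step * np.sign(g)          # ascent step
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # projection
    return x_adv

rng = np.random.default_rng(1)
w, b = rng.normal(size=4), 0.2
x, y = rng.normal(size=4), 1.0
x_adv = pgd_attack(x, y, w, b)
print(bce_loss(x_adv, y, w, b) > bce_loss(x, y, w, b))  # prints True
```

In full AT the outer minimization then trains the model on these worst-case examples; a self-supervised surrogate that skips this iterative search can leave the true inner maximum unattained, which is the limitation the quoted statement raises.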