2021
DOI: 10.48550/arxiv.2103.13612
Preprint

THAT: Two Head Adversarial Training for Improving Robustness at Scale

Zuxuan Wu, Tom Goldstein, Larry S. Davis, et al.

Abstract: Many variants of adversarial training have been proposed, with most research focusing on problems with relatively few classes. In this paper, we propose Two Head Adversarial Training (THAT), a two-stream adversarial learning network that is designed to handle the large-scale, many-class ImageNet dataset. The proposed method trains a network with two heads and two loss functions: one to minimize feature-space domain shift between natural and adversarial images, and one to promote high classification accuracy. Thi…
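
The abstract describes the core recipe: a network with two heads trained with two losses, one aligning features of natural and adversarial images and one cross-entropy term for classification accuracy. The sketch below is a minimal illustration of that recipe, not the authors' implementation; the PGD attack, the L2 feature-alignment term, the ResNet-50 backbone, and the loss weight `alpha` are all assumptions made for the example.

```python
# Minimal sketch (not the authors' code): one backbone with two heads,
# trained on clean/adversarial image pairs. The PGD attack, the L2
# feature-alignment term, and the weight `alpha` are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class TwoHeadNet(nn.Module):
    def __init__(self, num_classes=1000, feat_dim=128):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        backbone.fc = nn.Identity()                    # expose 2048-d features
        self.backbone = backbone
        self.cls_head = nn.Linear(2048, num_classes)   # classification head
        self.feat_head = nn.Linear(2048, feat_dim)     # feature-alignment head

    def forward(self, x):
        h = self.backbone(x)
        return self.cls_head(h), self.feat_head(h)

def pgd_attack(model, x, y, eps=4 / 255, step=1 / 255, iters=5):
    """Generate adversarial examples with a basic PGD loop (assumed attack)."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(iters):
        x_adv.requires_grad_(True)
        logits, _ = model(x_adv)
        grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
        x_adv = (x_adv + step * grad.sign()).detach()
        x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)   # project to eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv

def two_head_loss(model, x, y, alpha=1.0):
    """CE on adversarial logits plus clean/adversarial feature alignment."""
    x_adv = pgd_attack(model, x, y)
    logits_adv, feat_adv = model(x_adv)
    with torch.no_grad():
        _, feat_clean = model(x)            # clean features as alignment target
    ce = F.cross_entropy(logits_adv, y)
    align = F.mse_loss(feat_adv, feat_clean)
    return ce + alpha * align
```

In the paper's two-stream setup the clean features come from a separate, naturally trained encoder (as the citing work below notes); the sketch uses a single shared backbone only to keep the example short.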

Cited by 1 publication
(1 citation statement)
References 31 publications
“…Both methods proposed a new way to generate adversarial examples based on the CL loss instead of regular label-based losses (e.g., cross-entropy). [28] proposed to train two encoders: clean trained and adversarially trained, and use two loss functions: contrastive loss to minimize feature inconsistency between natural and adversarial samples, and CE loss to promote high classification accuracy. The adversarial samples are used in the self-supervised feature representation training to achieve feature robustness.…”
Section: Related Work
confidence: 99%
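
The citing work summarizes [28] as pairing a contrastive feature-consistency loss with a CE loss. Below is a hedged sketch of such a contrastive term, with the clean and adversarial embeddings of the same image as the positive pair and the other batch elements as negatives; the NT-Xent form, the normalization, and the temperature are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative clean/adversarial contrastive term (InfoNCE / NT-Xent style);
# the temperature and normalization are assumptions, not the paper's exact loss.
import torch
import torch.nn.functional as F

def clean_adv_contrastive_loss(feat_clean, feat_adv, temperature=0.1):
    """InfoNCE loss pairing each adversarial embedding with its clean twin."""
    z_c = F.normalize(feat_clean, dim=1)     # (B, D) clean embeddings
    z_a = F.normalize(feat_adv, dim=1)       # (B, D) adversarial embeddings
    logits = z_a @ z_c.t() / temperature     # (B, B) cosine-similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Row i should match column i: the clean view of the same image;
    # all other images in the batch act as negatives.
    return F.cross_entropy(logits, targets)
```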