2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00855
Adversarial Robustness under Long-Tailed Distribution

Cited by 55 publications (22 citation statements)
References 18 publications
“…In addition, there are several approaches that employ Causal Inference [145], Adversarial Training [171], Distributional Robust Optimization (DRO) [40], etc., to solve the long-tailed problem.…”
Section: More Methods
confidence: 99%
“…Wu et al [171] find that long-tailed data have a negative impact on adversarial robustness and that the natural accuracy loss of the tail classes is further magnified in adversarial training. Meanwhile, they argue that suitable features as well as classifier embedding help to reduce the boundary error, and the combination of long-tailed recognition methods with the adversarial training framework helps to improve the natural accuracy.…”
Section: More Methods
confidence: 99%
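The "classifier embedding" point in the excerpt above corresponds to long-tailed tricks such as a scale-invariant (cosine) classifier head, of the kind RoBal builds on. A minimal NumPy sketch, with an illustrative `scale` parameter and function name that are not taken from the cited papers:

```python
import numpy as np

def cosine_classifier(features, weights, scale=16.0):
    # Logits depend only on the angle between the feature vector and
    # each class weight row, so head classes cannot dominate purely
    # through larger weight norms.
    f = features / np.linalg.norm(features)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return scale * (w @ f)
```

Because both vectors are normalised, rescaling the features leaves the logits unchanged, which is the property that makes this head robust to norm imbalance between head and tail classes.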
“…We also investigate the combination of the state-of-the-art centralized AT methods with FL, i.e., we apply standard PGD [21], TRADES [37], MART [31], and GAIRAT [38] to FL, and term them FedPGD, FedTRADES, FedMART, and FedGAIRAT. Additionally, we apply the state-of-the-art long-tail learning methods (LogitAdj [23] and RoBal [33]) to FAT, and term them FedLogitAdj and FedRoBal.…”
Section: Details of CalFAT
confidence: 99%
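For context on the excerpt above, the PGD attack that FedPGD federates crafts adversarial examples by iterated signed-gradient ascent on the loss, projected back onto an L-infinity ball around the clean input. A minimal sketch on a logistic-regression loss (a stand-in model; the function names, step size, and radius are illustrative, not the papers' settings):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    # Binary cross-entropy of a logistic model on a single example.
    z = x @ w + b
    return -(y * np.log(sigmoid(z)) + (1 - y) * np.log(1 - sigmoid(z)))

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    # Iterated signed-gradient ascent on the loss, projected back onto
    # the L-infinity ball of radius eps around the clean input x.
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        grad = (p - y) * w  # d(BCE)/dx for the logistic model
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The resulting `x_adv` stays within `eps` of `x` in the L-infinity norm while its loss is pushed upward, which is the inner maximisation that adversarial training methods such as TRADES and MART also build on.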
[Figure legend: FedGAIRAT, FedRBN, CalFAT (ours)]
“…Comparison with state-of-the-art long-tail learning methods: We also adapt the losses of long-tail learning methods (LogitAdj [23] and RoBal [33]) to FAT (namely FedLogitAdj and FedRoBal) for both the local inner and outer optimization. As shown in Table 3, both methods have lower natural and robust accuracy than our CalFAT.…”
Section: Communication Round / Robust Accuracy
confidence: 99%
“…Due to the paucity of training examples, generalisation for tail classes is challenging; moreover, naïve learning on such data is susceptible to an undesirable bias towards head classes. Recently, long-tailed learning (LTL) has gained renewed interest in the context of deep neural networks [6,7,8,9,10,11,12,13]. Two active strands of work involve normalisation of the classifier's weights, and modification of the underlying loss to account for different class penalties.…”
Section: Introduction
confidence: 99%
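The "modification of the underlying loss" strand named in the excerpt above is exemplified by logit adjustment, which shifts each logit by the log of its class prior before applying cross-entropy, so that rare classes incur a larger penalty when misclassified. A sketch under that reading (the function name and `tau` parameter are illustrative, not the cited papers' exact interface):

```python
import numpy as np

def logit_adjusted_loss(logits, label, class_priors, tau=1.0):
    # Add tau * log(prior) to each logit, then take the standard
    # cross-entropy of the adjusted logits against the true label.
    adjusted = logits + tau * np.log(class_priors)
    adjusted = adjusted - adjusted.max()  # numerical stability
    log_probs = adjusted - np.log(np.exp(adjusted).sum())
    return -log_probs[label]
```

With uniform priors the adjustment cancels and this reduces to plain cross-entropy; with skewed priors, a tail-class example under the same logits receives a strictly larger loss, which is the bias correction these methods rely on.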