Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia 2021
DOI: 10.1145/3475724.3483601
Comparative Study of Adversarial Training Methods for Long-tailed Classification

Abstract: Adversarial training originated in image classification as a defense against adversarial attacks, in which an imperceptible perturbation of an image leads to a significant change in the model's decision. It has recently been observed to be effective in alleviating the long-tailed classification problem, where an imbalanced class distribution causes the model to perform much worse on small classes. However, existing methods typically focus on how to generate perturbations for the data, while the contributions o…
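The abstract refers to generating small perturbations that flip a model's decision. As a point of reference only (the paper compares several generation methods, none of which is reproduced here), a minimal NumPy sketch of one classic generator, the Fast Gradient Sign Method, using a toy linear model and squared-error loss:

```python
import numpy as np

def fgsm_perturb(x, loss_grad, epsilon=0.03):
    """FGSM: step each input coordinate by epsilon in the sign
    direction of the loss gradient, then clip to the valid pixel
    range. Illustrative sketch, not the paper's specific method."""
    x_adv = x + epsilon * np.sign(loss_grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy example: gradient of L = (w.x - t)^2 with respect to x.
x = np.array([0.2, 0.5, 0.8])
w = np.array([1.0, -2.0, 0.5])
target = 0.0
loss_grad = 2 * (w @ x - target) * w  # dL/dx
x_adv = fgsm_perturb(x, loss_grad, epsilon=0.05)
```

Each coordinate of `x_adv` differs from `x` by at most epsilon, which is what makes the perturbation "invisible" for small epsilon.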

Cited by 10 publications (5 citation statements). References 16 publications.
“…Methods rooted in contrastive learning enhance long-tailed classification performance by improving the selection of positive and negative samples and optimizing contrastive loss [30,31]. Furthermore, adversarial training can effectively distinguish between head and tail samples by introducing perturbations [9,10]. Apart from these methods, there are also approaches that leverage multi-expert mixture [32], causal inference [33], and biased optimization in long-tailed image classification.…”
Section: Long-tailed Image Classification
confidence: 99%
“…The accuracy score was used to evaluate the classification performance of the algorithms on the single-label dataset VireoFood-172, as in previous studies [9,42]. As mentioned in the protocol settings of NUS-WIDE, we also divided the classes of VireoFood-172 into three disjoint subsets: head classes (classes each with over 500 occurrences), medium classes (classes each with 300-500 occurrences), and tail classes (classes under 300 occurrences).…”
Section: Evaluation Protocol
confidence: 99%
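The evaluation protocol quoted above partitions classes into head, medium, and tail subsets by occurrence count (over 500, 300–500, and under 300 for VireoFood-172). A short sketch of that partitioning, with the quoted thresholds as defaults (the function name and interface are illustrative, not from the paper):

```python
from collections import Counter

def split_head_medium_tail(labels, head_min=500, tail_max=300):
    """Partition class IDs into disjoint head/medium/tail subsets by
    how often each class occurs in the training labels. Thresholds
    default to the ones quoted for VireoFood-172."""
    counts = Counter(labels)
    head = {c for c, n in counts.items() if n > head_min}
    medium = {c for c, n in counts.items() if tail_max <= n <= head_min}
    tail = {c for c, n in counts.items() if n < tail_max}
    return head, medium, tail
```

Because the three conditions cover all counts without overlap, every class lands in exactly one subset, matching the "three disjoint subsets" requirement of the protocol.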
“…In the training dataset, the attacker directly manipulates the original text-classification inputs into evasion-attack text in order to produce misclassification [12]. Retraining the ML algorithm with adversarial examples helps to reduce the risk posed by the adversarial framework and builds the robustness of the evasion-attack model's training datasets [13]. The framework is structured to reduce the adversary's evasion-attack cost and supports changing the training dataset structure as the adversary requires.…”
Section: A Training Dataset Framework
confidence: 99%
“…Finally, the deep learning algorithms use CNNs [32,33] with network ensembles [22,27], multi-branch training [17,28], and pre-training [2,5], to classify haze images in an end-to-end manner, which typically achieve much better performance than the threshold-based and the handcrafted-feature-based methods. However, it has been observed that these models usually perform worse on the classification of light haze images, due to the fact that the background scene objects work as spurious causal features in classification [16,27,30] and the lack of integration of corresponding features [6,12,20].…”
Section: Introduction
confidence: 99%