Proceedings of the 5th Joint International Conference on Data Science & Management of Data (9th ACM IKDD CODS and 27th COMAD), 2022
DOI: 10.1145/3493700.3493705
AdvCodeMix: Adversarial Attack on Code-Mixed Data

Abstract: Research on adversarial attacks has become widely popular in recent years. One unexplored area where prior research is lacking is the effect of adversarial attacks on code-mixed data. Therefore, in the present work, we present the first generalized framework for text perturbation to attack code-mixed classification models in a black-box setting. We rely on various perturbation techniques that preserve the semantic structure of the sentences and also obscure the attacks from the perception …

Cited by 2 publications (5 citation statements); references 20 publications.
“…The increasing phenomenon of code-mixing on social media platforms has also motivated researchers to analyze the adversarial robustness of code-mixed models. The authors in (Das et al., 2022) exposed the vulnerability of code-mixed classifiers by performing an adversarial attack based on subword perturbations, character repetition, and word-language change. However, there has been no attempt to enhance the adversarial robustness of code-mixed text classifiers against these perturbations.…”
Section: Related Work
confidence: 99%
“…, w_n, with ground-truth label y and a target model such that M(S) = y, the goal of the adversary is to perform an untargeted attack, i.e., to find an adversarial sample S_adv that causes M to misclassify: M(S_adv) ≠ y. Adversaries attack the model using phonetic perturbations, in line with the prior work of Das et al. (2022). Design goals: based on the aforementioned adversary model, our proposed framework (SMLM and SAMLM) must meet the robustness and accuracy requirements.…”
Section: Threat Model
confidence: 99%
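The threat model quoted above can be sketched as a simple black-box search loop: query the model, apply a surface-level perturbation (character repetition is one of the techniques the citing papers attribute to Das et al., 2022), and stop when the predicted label flips. This is a minimal illustrative sketch, not the authors' implementation; the classifier `toy_model` and the attack budget are hypothetical stand-ins.

```python
import random

def char_repeat(word, rng):
    # Duplicate one randomly chosen character, e.g. "nahi" -> "nahhi".
    i = rng.randrange(len(word))
    return word[:i] + word[i] * 2 + word[i:]

def attack(sentence, model, rng=None, max_tries=50):
    """Untargeted black-box attack: perturb one word at a time until
    the model's prediction changes, i.e., M(S_adv) != M(S)."""
    rng = rng or random.Random(0)
    y = model(sentence)
    words = sentence.split()
    for _ in range(max_tries):
        i = rng.randrange(len(words))
        cand = words.copy()
        cand[i] = char_repeat(cand[i], rng)
        s_adv = " ".join(cand)
        if model(s_adv) != y:
            return s_adv  # adversarial sample found
    return None  # attack failed within the query budget

# Hypothetical surrogate classifier: flags an exact keyword match only,
# so it is trivially fooled by character repetition.
def toy_model(sentence):
    return 1 if "bura" in sentence.split() else 0

adv = attack("yeh movie bura tha", toy_model)
```

A real code-mixed classifier is of course more robust than an exact-match rule, which is why the cited work combines several perturbation types (subword, repetition, language change) rather than a single one.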