“…Broadly speaking, such efforts in the literature focus either on developing new attacks or on training models to be resistant to such attacks (i.e., defenses) [13]. Summarizing the research dedicated to understanding robustness, several surveys have addressed specific aspects of NLP robustness, e.g., data augmentation [14], search methods [15], pretrained models [16], and adversarial attacks [17]. However, the literature lacks studies that provide a systematic overview of the state of the art in this space across a range of variables: applications, techniques, metrics, benchmark datasets, threat models, tasks, embedding techniques, learning techniques, goals, defense mechanisms, and performance.…”