Proceedings of the 24th Conference on Computational Natural Language Learning 2020
DOI: 10.18653/v1/2020.conll-1.48

An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference

Abstract: Prior work on natural language inference (NLI) debiasing mainly targets one or a few known biases, without necessarily making the models more robust. In this paper, we focus on model-agnostic debiasing strategies and explore how to (and whether it is possible to) make NLI models robust to multiple distinct adversarial attacks while keeping, or even strengthening, the models' generalization power. We first benchmark prevailing neural NLI models, including pretrained ones, on various adversarial datasets. We …
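The abstract describes benchmarking pretrained NLI models on multiple adversarial test sets. A minimal sketch of what such an evaluation loop might look like, using the publicly available roberta-large-mnli checkpoint; the JSONL file names and example format are assumptions for illustration, not details from the paper:

```python
# Sketch: evaluating one pretrained NLI model on several adversarial test sets.
# The dataset paths and JSONL schema below are hypothetical.
import json
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "roberta-large-mnli"  # off-the-shelf pretrained NLI classifier
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

# roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
LABELS = ["contradiction", "neutral", "entailment"]

def accuracy(path):
    """Assumed format: one JSON object per line with keys
    "premise", "hypothesis", and "label" (a string in LABELS)."""
    correct = total = 0
    with open(path) as f:
        for line in f:
            ex = json.loads(line)
            enc = tokenizer(ex["premise"], ex["hypothesis"],
                            truncation=True, return_tensors="pt")
            with torch.no_grad():
                pred = model(**enc).logits.argmax(dim=-1).item()
            correct += LABELS[pred] == ex["label"]
            total += 1
    return correct / total

# Hypothetical adversarial test sets in the spirit of the paper's benchmark.
for name in ["breaking_nli.jsonl", "hans.jsonl", "stress_test.jsonl"]:
    print(name, f"{accuracy(name):.3f}")
```

Reporting per-dataset accuracy this way makes it visible when a debiasing strategy helps against one attack while hurting robustness to another, which is the trade-off the paper studies.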

Cited by 11 publications (22 citation statements)
References 43 publications
“…Biases in NLI Table 5: Results on the NLI adversarial test benchmark (Liu et al., 2020b). We compare with the data augmentation techniques investigated by Liu et al. (2020b). * marks reported results and underscores indicate statistical significance against the baseline.…”
Section: Adversarial Tests For Combating Distinct (mentioning)
confidence: 99%
“…The trigger bias proposed in our paper belongs to selection bias and model overamplification bias. Bias has also been investigated in natural language inference [1, 6, 7, 13, 21-23], question answering [24], ROC story cloze [2, 28], lexical inference [17], visual question answering [12], etc. To the best of our knowledge, we are the first to present the biases in FSEC, i.e., trigger overlapping and trigger separability.…”
Section: Few-shot Event Classification (mentioning)
confidence: 99%
“…Such methods can be roughly categorized into two classes: sentence-embedding bottleneck methods, which first encode the two sentences as vectors and then feed them into a classifier (Conneau et al., 2017; Nie and Bansal, 2017; Choi et al., 2018; Chen et al., 2017b; Wu et al., 2018), and more general methods, which usually involve interactions while encoding the two sentences in the pair (Chen et al., 2017a; Gong et al., 2018; Parikh et al., 2016). Recently, NLI models have been shown to be biased towards spurious surface patterns in human-annotated datasets (Poliak et al., 2018; Gururangan et al., 2018; Liu et al., 2020a), which makes them vulnerable to adversarial attacks (Glockner et al., 2018; Minervini and Riedel, 2018; McCoy et al., 2019; Liu et al., 2020b).…”
Section: Natural Language Inference (mentioning)
confidence: 99%
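The two-class taxonomy in the excerpt above can be made concrete with a toy sketch of the first class. This is not any cited model's architecture; the encoder choice and dimensions are placeholders, though the feature combination [u; v; |u−v|; u·v] follows the sentence-embedding literature the excerpt cites (Conneau et al., 2017):

```python
# Toy sentence-embedding "bottleneck" NLI model: each sentence is encoded
# into a single vector independently; only the pooled vectors reach the
# classifier, so there is no token-level interaction between the sentences.
import torch
import torch.nn as nn

class BottleneckNLI(nn.Module):
    def __init__(self, vocab_size=30000, dim=300, hidden=512, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(8 * hidden, hidden),  # 4 features x (2*hidden) each
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def encode(self, tokens):
        out, _ = self.encoder(self.embed(tokens))
        return out.max(dim=1).values  # max-pool over time: one vector per sentence

    def forward(self, premise, hypothesis):
        # usage: logits = model(premise_ids, hypothesis_ids) with LongTensor token ids
        u, v = self.encode(premise), self.encode(hypothesis)
        features = torch.cat([u, v, (u - v).abs(), u * v], dim=-1)
        return self.classifier(features)
```

The "more general" interaction-based class (e.g., cross-attention models) would instead let premise and hypothesis tokens attend to each other before any pooling, rather than compressing each sentence into a single vector first.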