Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.613
Towards Debiasing NLU Models from Unknown Biases

Abstract: NLU models often exploit biases to achieve high dataset-specific performance without properly learning the intended task. Recently proposed debiasing methods have been shown to be effective in mitigating this tendency. However, these methods rely on a major assumption: that the types of bias are known a priori, which limits their application to many NLU tasks and datasets. In this work, we present the first step to bridge this gap by introducing a self-debiasing framework that prevents models from mainly utiliz…

Cited by 44 publications (27 citation statements); References 39 publications.
“…From a robustness point of view, such pretrain-and-fine-tune pipelines are known to be prone to biases that are present in data (Gururangan et al., 2018; Poliak et al., 2018; McCoy et al., 2019; Schuster et al., 2019). Various methods have been proposed to mitigate such biases in the form of robust training, where a bias model is trained to capture the bias and then used to relax the predictions of a main model, so that it can focus less on biased examples and more on the "hard", more challenging examples (Clark et al., 2019; Mahabadi et al., 2020; Utama et al., 2020b).
Figure 1: Amount of subsequence bias extracted from different language models vs. the robustness of models to the bias. Robustness is measured as the improvement of the model on out-of-distribution examples, while extractability is measured as the improvement of the probe's ability to extract the bias from a debiased model, compared to the baseline.…”
Section: Introduction
confidence: 99%
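The robust-training scheme quoted above — a bias model whose predictions down-weight biased examples for the main model — is often realized as a product-of-experts ensemble, as in Clark et al. (2019). Below is a minimal NumPy sketch of that combination; the function names and array shapes are illustrative assumptions, not taken from any of the cited papers:

```python
import numpy as np

def log_softmax(z):
    # Numerically stable log-softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def poe_loss(main_logits, bias_log_probs, labels):
    """Product-of-experts debiasing loss (illustrative sketch).

    The main model's log-probabilities are added to the frozen bias
    model's log-probabilities (a product of the two distributions),
    and cross-entropy is taken on the combined prediction. During
    training, gradients flow only through main_logits, so examples
    the bias model already answers confidently contribute little
    signal to the main model.
    """
    combined = log_softmax(log_softmax(main_logits) + bias_log_probs)
    n = len(labels)
    return -combined[np.arange(n), labels].mean()
```

One property worth noting: when the bias model is uniform (it has captured no bias for an example), the additive shift in log space is constant, so the combined loss reduces to the main model's ordinary cross-entropy and the example is trained on at full strength.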
“…This suggests that naively applying debiasing techniques may incur unexpected negative impacts on other aspects of the moderation system. Further research is needed into modeling approaches that can achieve robust performance both in prediction and in uncertainty calibration under data bias and distributional shift (Nam et al., 2020; Utama et al., 2020; Du et al., 2021; Yaghoobzadeh et al., 2021; Bao et al., 2021; Karimi Mahabadi et al., 2020).…”
Section: Discussion
confidence: 99%
“…Several studies have reported successful generalization from MNLI to HANS. Among data-based strategies, it has been achieved via augmenting MNLI data with predicate-argument structures (Moosavi et al., 2020) and syntactic transformations (Min et al., 2020). Although there are many reports of syntactic knowledge in pre-trained BERT (Rogers et al., 2020b), Min et al. (2020) suggest that pre-training does not yield a strong inductive bias to use syntax in downstream tasks, and that augmentation "nudges" the model towards that.…”
Section: Related Work
confidence: 99%