Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
DOI: 10.18653/v1/2021.emnlp-main.28
How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?

Abstract: As NLP models are increasingly deployed in socially situated settings such as online abusive content detection, it is crucial to ensure that these models are robust. One way of improving model robustness is to generate counterfactually augmented data (CAD) for training models that can better learn to distinguish between core features and data artifacts. While models trained on this type of data have shown promising out-of-domain generalizability, it is still unclear what the sources of such improvements are. W…

Cited by 6 publications (2 citation statements)
References 39 publications (38 reference statements)
“…As mentioned above, model bias will be a particular issue when considering data from participants belonging to different communities whereby use of identity terms by one group will systematically alter their results; unlike with lexicon-based approaches, it is not always easy to identify which terms will lead to bias without testing with a data set such as ours. There is a significant body of work looking to develop less biased language models for a range of tasks (Dixon et al., 2018; Liang et al., 2020; Schick et al., 2021; Ungless et al., 2022; Webster et al., 2021; Zhao et al., 2018), for example, using counterfactually augmented data (Sen et al., 2021), which researchers with the right technical skills may be able to adopt, where they have access to the original model. However, for those who must rely on third party tools, our findings suggest marginalised individuals continue to be impacted by predictive bias despite the likely use of debiasing strategies, and in particular the least salient identities.…”
Section: Discussion and Limitations
Classification: mentioning (confidence: 99%)
“…Since the model observes the same scenario in the doubled (for binary gender) sentences, it can learn to abstract away from the entities to the context [Emami et al. 2019]. This method has shown encouraging results in mitigating bias in contextualised word representations such as ELMo and monolingual BERT [Bartl et al. 2020; de Vassimon Manela et al. 2021; Sen et al. 2021], and for hate speech detection [Park et al. 2018]. Nonetheless, collecting annotated lists for gender-specific pairs can be expensive, and the method essentially doubles the size of the training data.…”
Section: Debiasing Using Data Manipulation
Classification: mentioning (confidence: 99%)
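
To make the "doubling" step described in the citation above concrete, here is a minimal sketch of counterfactual data augmentation via gendered word-pair swapping. The pair list, tokenisation, and example sentence are illustrative assumptions rather than details from Sen et al. (2021) or the other cited work; published CDA pipelines rely on curated pair lists and context-aware handling of ambiguous pronouns (e.g. "his" vs. "her"), which this sketch omits.

# Illustrative (assumed) gender-specific word pairs; real CDA uses curated lists.
# Ambiguous pronouns such as "his"/"her" need context-sensitive rules and are left out here.
GENDER_PAIRS = [
    ("he", "she"), ("him", "her"),
    ("man", "woman"), ("men", "women"),
    ("actor", "actress"),
]

# Symmetric lookup: each term maps to its counterpart in either direction.
SWAP = {a: b for a, b in GENDER_PAIRS}
SWAP.update({b: a for a, b in GENDER_PAIRS})

def counterfactual(sentence: str) -> str:
    """Swap each gendered token for its counterpart, keeping punctuation and casing."""
    out = []
    for tok in sentence.split():
        core = tok.rstrip(".,!?;:")   # strip trailing punctuation so "him." matches "him"
        tail = tok[len(core):]
        key = core.lower()
        if key in SWAP:
            repl = SWAP[key]
            if core[:1].isupper():
                repl = repl.capitalize()
            out.append(repl + tail)
        else:
            out.append(tok)
    return " ".join(out)

def augment(corpus):
    """Keep every original sentence and add its counterfactual counterpart."""
    return [s for sent in corpus for s in (sent, counterfactual(sent))]

if __name__ == "__main__":
    for s in augment(["He is a doctor and she trusts him."]):
        print(s)

Because every original sentence is kept alongside its swapped counterpart, the augmented corpus is twice the size of the original, which is exactly the data-size cost noted in the citation above.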