Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.265
How Conservative are Language Models? Adapting to the Introduction of Gender-Neutral Pronouns

Abstract: Gender-neutral pronouns have recently been introduced in many languages to a) include non-binary people and b) as a generic singular. Recent results from psycholinguistics suggest that gender-neutral pronouns (in Swedish) are not associated with human processing difficulties. This, we show, is in sharp contrast with automated processing. We show that gender-neutral pronouns in Danish, English, and Swedish are associated with higher perplexity, more dispersed attention patterns, and worse downstream performance…
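The perplexity gap the abstract refers to can be illustrated with a rough sketch. This is not the paper's actual setup: the model ("gpt2"), the toy English sentences, and the pronoun choices below are assumptions made here only to show how such a comparison could be measured.

    # Minimal sketch, assuming GPT-2 and toy English sentences (not the paper's models or data).
    # Perplexity here is exp(mean token-level cross-entropy) of a sentence under the model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def perplexity(sentence: str) -> float:
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
        return torch.exp(loss).item()

    for s in ("The doctor said he would arrive soon.",
              "The doctor said they would arrive soon.",
              "The doctor said xe would arrive soon."):
        print(f"{perplexity(s):8.2f}  {s}")

A higher value for the neopronoun sentence would be consistent with the gap the abstract reports, though the paper's own measurements are per-language and model-specific.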

Cited by 8 publications (8 citation statements); references 25 publications (31 reference statements). Citation statements, ordered by relevance:
“…Sun et al. (2021) and Vanmassenhove et al. (2021) present rule-based and neural rewriting approaches to generate gender-neutral alternatives in English texts. Brandl et al. (2022) find that upstream perplexity substantially increases and downstream task performance severely drops for some tasks when gender-neutral language is used in English, Danish and Swedish. Amend et al. (2021) show that substituting gender-neutral terms for gendered ones in image captioning models is a viable approach for reducing gender bias.…”
Section: Bias in Vision and Language (mentioning, confidence: 85%)
“…Some works have recently stressed the importance of dealing with non-binary identities in Natural Language Processing (NLP) (Cao and Daumé III, 2020; Dev et al., 2021), and two main approaches can be found among the works that have taken this route. Brandl, Cui and Søgaard (2022) focused on neopronouns, showing that language models have difficulties processing them in Swedish (hen), Danish (de/høn) and English (they/xe). Others focused on standard neutral solutions, for both text classification (Attanasio et al., 2021) and natural language generation tasks, such as gender-neutral rewriting (Sun et al., 2021; Vanmassenhove et al., 2021; Attanasio et al., 2021), which consists of converting gendered forms into their gender-neutral counterparts (e.g., En.…”
Section: Gender (Bias) and Machine Translation (mentioning, confidence: 99%)
“…Existing work on non-cisgender identities and machine learning is sparse (e.g., Dev et al., 2021; Cao and Daumé III, 2020; Lauscher et al., 2022). Recently, however, there have been a couple of works dealing with gender-neutral pronouns (e.g., Brandl et al., 2022; Qian et al., 2022). In particular, work by Lauscher et al. (2022) explores the diversity of gender pronouns and presents five desiderata for how language models should handle (gender-neutral) pronouns.…”
Section: Related Work (mentioning, confidence: 99%)
“…In a similar vein, we explore potential solutions for how text-to-image models should handle non-cisgender identities. Brandl et al. (2022) investigate the effect of gender-neutral pronouns on language models and demonstrate performance drops in natural language inference. As a potential solution, Qian et al. (2022) propose a perturber model for augmenting datasets, which they train on texts that have been rewritten in a gender-neutral way.…”
Section: Related Work (mentioning, confidence: 99%)