Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing 2018
DOI: 10.18653/v1/d18-1521

Learning Gender-Neutral Word Embeddings

Abstract: Word embedding models have become a fundamental component in a wide range of Natural Language Processing (NLP) applications. However, embeddings trained on human-generated corpora have been demonstrated to inherit strong gender stereotypes that reflect social constructs. To address this concern, in this paper, we propose a novel training procedure for learning gender-neutral word embeddings. Our approach aims to preserve gender information in certain dimensions of word vectors while compelling other dimensions…


Cited by 270 publications (322 citation statements) | References 32 publications
“…Our work is closely related to the line of work on removing bias in data representations. Bolukbasi et al (2016); Zhao et al (2018b) learn gender-neutral word embeddings by forcing certain dimensions to be free of gender information. Similarly, construct a biased classifier and project its representation out of the model's representation.…”
Section: Related Work and Discussion
confidence: 99%
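The projection step mentioned in the statement above (removing a biased direction from a representation) can be sketched as follows. The toy three-dimensional vectors and the `project_out` helper are illustrative assumptions, not the cited papers' actual embeddings or code:

```python
import numpy as np

# Hypothetical toy embeddings; in practice these come from a trained model.
emb = {
    "doctor": np.array([0.6, 0.2, 0.1]),
    "he":     np.array([0.9, 0.1, 0.0]),
    "she":    np.array([-0.8, 0.2, 0.1]),
}

# Estimate a gender direction from a definitional pair
# (in the style of Bolukbasi et al., 2016).
g = emb["he"] - emb["she"]
g = g / np.linalg.norm(g)

def project_out(v, direction):
    """Remove the component of v along the given unit direction."""
    return v - np.dot(v, direction) * direction

neutral_doctor = project_out(emb["doctor"], g)
# After projection, the vector is orthogonal to the gender direction.
assert abs(float(np.dot(neutral_doctor, g))) < 1e-8
```

The same projection idea applies whether the direction comes from definitional word pairs or, as in the classifier-based variant, from a learned bias classifier's weight vector.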
“…For coreference resolution, Rudinger et al (2018) and Zhao et al (2018b) independently designed GBETs based on Winograd Schemas. The corpus consists of sentences which contain a gender-neutral occupation (e.g., doctor), a secondary participant (e.g., patient), and a gendered pronoun that refers to either the occupation or the participant.…”
Section: Task
confidence: 99%
“…Ultimately, word embeddings with reduced bias performed just as well as unaltered embeddings on coherence and analogy-solving tasks (Bolukbasi et al, 2016). Zhao et al (2018b) propose a new method called GN-GloVe that does not use a classifier to create a set of gender-specific words. The authors train the word embeddings by isolating gender information in specific dimensions and maintaining gender-neutral information in the other dimensions.…”
Section: Removing Gender Subspace in Word Embeddings
confidence: 99%
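The GN-GloVe idea summarized in the statement above — concentrating gender information in designated dimensions while keeping the rest gender-neutral — can be illustrated with a toy sketch. The vectors, the choice of a single reserved dimension, and the `split` helper are assumptions for illustration, not the paper's training procedure:

```python
import numpy as np

D, K = 4, 1  # toy size: D total dims, the last K reserved for gender

def split(v):
    """Split a vector into its gender-neutral and gendered parts."""
    return v[:D - K], v[D - K:]

# Hypothetical trained vectors: gender-neutral words carry ~0 in the
# reserved dimension, while gendered words differ there.
doctor = np.array([0.5, 0.3, 0.2, 0.0])
he     = np.array([0.4, 0.1, 0.3, 0.9])
she    = np.array([0.4, 0.1, 0.3, -0.9])

neutral, gendered = split(doctor)
# Downstream applications that should be gender-blind use only `neutral`;
# `gendered` retains gender information for tasks that need it.
```

The design choice here is that bias removal becomes a cheap slicing operation at use time, rather than a post-hoc projection, because the training objective already pushed gender information into the reserved dimensions.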
“…Caliskan et al (2017) formulate a method to test biases (including gender stereotypes) in word embeddings, while Rudinger et al (2018) investigate such stereotypes in the context of coreference resolution. There have also been efforts to debias word embeddings (Bolukbasi et al, 2016) and come up with gender neutral word embeddings (Zhao et al, 2018). These efforts, however, have attracted criticism suggesting that they do not actually debias embeddings but instead redistribute the bias across the embedding landscape (Gonen and Goldberg, 2019).…”
Section: Prior Work
confidence: 99%
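The bias tests attributed to Caliskan et al. (2017) above compare how strongly a target word associates with two attribute word sets via mean cosine similarity. A minimal sketch of that association score, using made-up toy vectors rather than trained embeddings or the original curated word lists:

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Mean cosine similarity of w with attribute set A minus set B."""
    return (np.mean([cos(w, a) for a in A]) -
            np.mean([cos(w, b) for b in B]))

# Toy 2-D vectors for illustration only.
career       = np.array([1.0, 0.1])
male_attrs   = [np.array([0.9, 0.2]), np.array([1.0, 0.0])]
female_attrs = [np.array([0.0, 1.0]), np.array([0.1, 0.9])]

score = association(career, male_attrs, female_attrs)
# A positive score means "career" associates more with the first set.
```

Gonen and Goldberg's (2019) criticism, as quoted above, is precisely that a debiasing method can drive such direct association scores toward zero while bias remains recoverable from the geometry of the embedding space.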