Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1635
Exploring Human Gender Stereotypes with Word Association Test

Abstract: Word embeddings have been widely used to study gender stereotypes in texts. One key problem with existing bias scores is evaluating their validity: do they really reflect true bias levels? For a small set of words (e.g., occupations), we can rely on human annotations or external data. However, for most words, evaluating their correctness remains an open problem. In this work, we utilize the word association test, which contains rich types of word connections annotated by human participants, to explore…
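For context, a common family of embedding-based bias scores measures how strongly a word aligns with a gender direction in vector space (in the spirit of Bolukbasi et al., 2016). The sketch below is illustrative only, not the paper's own scoring method; the input vectors and the single he/she seed pair are assumptions.

```python
import numpy as np

def gender_bias_score(word_vec: np.ndarray,
                      he_vec: np.ndarray,
                      she_vec: np.ndarray) -> float:
    """Cosine similarity of a word with the he - she gender direction.

    Positive values lean masculine, negative lean feminine; the
    magnitude is the "bias level" this kind of score tries to capture.
    """
    g = he_vec - she_vec  # one-pair gender direction (an assumption;
                          # Bolukbasi et al. aggregate several pairs)
    return float(word_vec @ g /
                 (np.linalg.norm(word_vec) * np.linalg.norm(g)))
```

Whether such scores track true bias levels for arbitrary words is exactly the validation problem the abstract raises.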

Cited by 22 publications (20 citation statements)
References 17 publications
“…Gender affects myriad aspects of NLP, including corpora, tasks, algorithms, and systems (Costa-jussà, 2019; Sun et al., 2019). For example, statistical gender biases are rampant in word embeddings (Jurgens et al., 2012; Bolukbasi et al., 2016; Caliskan et al., 2017; Garg et al., 2018; Zhao et al., 2018b; Basta et al., 2019; Chaloner and Maldonado, 2019; Du et al., 2019; Ethayarajh et al., 2019; Kaneko and Bollegala, 2019; Kurita et al., 2019), including multilingual ones (Escudé Font and Costa-jussà, 2019; Zhou et al., 2019), and affect a wide range of downstream tasks including coreference resolution (Zhao et al., 2018a; Cao and Daumé III, 2020; Emami et al., 2019), part-of-speech and dependency parsing (Garimella et al., 2019), language modeling (Qian et al., 2019; Nangia et al., 2020), appropriate turn-taking classification (Lepp, 2019), relation extraction (Gaut et al., 2020), identification of offensive content (Sharifirad and Matwin, 2019), and machine translation (Stanovsky et al., 2019; Hovy et al., 2020).…”
Section: Related Work
confidence: 99%
“…WAT: The Word Association Test (WAT) is a method to measure gender bias over a large set of words (Du et al., 2019). It calculates a gender information vector for each word in a word association graph created from the Small World of Words project (SWOW-EN; De Deyne et al., 2019) by propagating information from masculine and feminine word pairs $(w^i_m, w^i_f) \in L$ using a random-walk approach (Zhou et al., 2003).…”
Section: WEAT: Word Embedding Association Test
confidence: 99%
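As a rough illustration of this propagation step, here is a minimal label-spreading sketch in the style of Zhou et al. (2003). It is not the WAT authors' implementation: the adjacency matrix, the seed index lists, and the damping factor `alpha` are all assumptions.

```python
import numpy as np

def propagate_gender_info(adj: np.ndarray,
                          masc_idx: list[int],
                          fem_idx: list[int],
                          alpha: float = 0.85,
                          n_iter: int = 100) -> np.ndarray:
    """Spread masculine/feminine seed information over a word
    association graph via iterated F <- alpha*S*F + (1-alpha)*Y
    (label spreading, Zhou et al., 2003).

    adj      : (n, n) symmetric association-strength matrix
               (e.g. built from SWOW-EN cue-response counts)
    masc_idx : row indices of masculine seed words (e.g. "he", "man")
    fem_idx  : row indices of feminine seed words (e.g. "she", "woman")
    Returns an (n, 2) matrix: column 0 = masculine mass, column 1 = feminine.
    """
    n = adj.shape[0]
    # Symmetric normalization: S = D^{-1/2} A D^{-1/2}
    d = adj.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    S = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Seed labels from the masculine/feminine word pairs
    Y = np.zeros((n, 2))
    Y[masc_idx, 0] = 1.0
    Y[fem_idx, 1] = 1.0

    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F

# A word's gender information vector is its row of F; the difference
# F[:, 0] - F[:, 1] yields a scalar masculine-vs-feminine bias score.
```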
“…We evaluate the proposed method using four standard benchmark datasets for evaluating biases in word embeddings: the Word Embedding Association Test (WEAT; Caliskan et al., 2017), the Word Association Test (WAT; Du et al., 2019), SemBias (Zhao et al., 2018b), and WinoBias (Zhao et al., 2018a). Our experimental results show that the proposed debiasing method accurately removes unfair biases from three widely used pre-trained embeddings: Word2Vec (Mikolov et al., 2013b), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017).…”
Section: Introduction
confidence: 97%
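Of the benchmarks named above, WEAT reduces to a single effect-size statistic (as defined by Caliskan et al., 2017). A compact sketch follows; the target and attribute vector lists are placeholders to be filled from an actual embedding.

```python
import numpy as np

def _cos(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def weat_effect_size(X, Y, A, B) -> float:
    """WEAT effect size d (Caliskan et al., 2017), ranging over [-2, 2].

    X, Y : lists of target word vectors (e.g. career vs. family words)
    A, B : lists of attribute word vectors (e.g. male vs. female words)
    d near 0 means X and Y associate with A vs. B about equally.
    """
    def s(w):
        # association of word w with attribute set A relative to B
        return (np.mean([_cos(w, a) for a in A]) -
                np.mean([_cos(w, b) for b in B]))

    sx = np.array([s(x) for x in X])
    sy = np.array([s(y) for y in Y])
    pooled = np.concatenate([sx, sy])
    return float((sx.mean() - sy.mean()) / pooled.std(ddof=1))
```

A debiased embedding should drive |d| toward zero on the stereotype word sets while leaving unrelated analogies intact.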
“…Recently, the NLP community has focused on exploring gender bias in NLP systems (Sun et al., 2019), uncovering many gender disparities and harmful biases in algorithms and text (Cao and Chang and McKeown 2019; Costa-jussà 2019; Du et al. 2019; Emami et al. 2019; Garimella et al. 2019; Gaut et al. 2020; Habash et al. 2019; Hashempour 2019; Hoyle et al. 2019; Lee et al. 2019a; Lepp 2019; Qian 2019; Sharifirad and Matwin 2019; Stanovsky et al. 2019; O'Neil 2016; Blodgett et al. 2020; Nangia et al. 2020). Particular attention has been paid to uncovering, analyzing, and removing gender biases in word embeddings (Basta et al., 2019; Kaneko and Bollegala, 2019; Zhao et al., 2018b; Bolukbasi et al., 2016).…”
Section: Related Work
confidence: 99%