Proceedings of the Third Workshop on Abusive Language Online 2019
DOI: 10.18653/v1/w19-3504

Racial Bias in Hate Speech and Abusive Language Detection Datasets

Abstract: Technologies for abusive language detection are being developed and applied with little consideration of their potential biases. We examine racial bias in five different sets of Twitter data annotated for hate speech and abusive language. We train classifiers on these datasets and compare the predictions of these classifiers on tweets written in African-American English with those written in Standard American English. The results show evidence of systematic racial bias in all datasets, as classifiers trained o…
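The comparison the abstract describes, training classifiers on the annotated datasets and contrasting how often tweets in African-American English versus Standard American English get flagged, can be sketched as a simple per-group rate check. The snippet below is a minimal illustration under assumptions, not the authors' released code: the file names, column names, dialect labels, and the bag-of-words classifier are all placeholders.

```python
# Minimal sketch of the dialect-disparity comparison described in the abstract,
# assuming tweets already tagged as "aae" (African-American English) or
# "sae" (Standard American English). File names, column names, and the
# bag-of-words model are placeholders, not the authors' actual pipeline.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def flag_rate(clf, vec, texts):
    """Fraction of tweets the classifier labels as abusive/hateful (class 1)."""
    return clf.predict(vec.transform(texts)).mean()

# Hypothetical annotated training data: a "text" column plus a binary "label" column.
df_train = pd.read_csv("annotated_tweets.csv")
vec = TfidfVectorizer(min_df=2)
clf = LogisticRegression(max_iter=1000).fit(
    vec.fit_transform(df_train["text"]), df_train["label"]
)

# Hypothetical evaluation set with a "dialect" column ("aae" or "sae"),
# e.g. produced by a dialect-inference model.
df_eval = pd.read_csv("dialect_tweets.csv")
for dialect in ("aae", "sae"):
    subset = df_eval.loc[df_eval["dialect"] == dialect, "text"]
    print(dialect, "flag rate:", round(flag_rate(clf, vec, subset), 3))
```

A consistently higher flag rate on the AAE subset than on the SAE subset, for classifiers trained on each of the five datasets, is the kind of systematic disparity the paper reports.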

Cited by 319 publications (299 citation statements)
References 27 publications
“…Furthermore, researchers have recently focused on the bias derived from hate speech training datasets [2,21,24]. Davidson et al. [2] showed that there were systematic and substantial racial biases in five benchmark Twitter datasets annotated for offensive language detection. Wiegand et al. [24] also found that classifiers trained on datasets containing more implicit abuse (tweets that convey abuse through sarcasm, jokes, etc.) are more affected by biases than those trained on datasets with a high proportion of explicit abuse samples (tweets containing overtly abusive words).…”
Section: Previous Work (citation type: mentioning)
confidence: 99%
“…Hate speech is commonly defined as any communication criticizing a person or a group based on characteristics such as gender, sexual orientation, nationality, religion, or race. Hate speech detection is not a stable or simple target, because misclassifying regular conversation as hate speech can severely affect users' freedom of expression and reputation, while misclassifying hateful conversations as unproblematic would maintain the status of online communities as unsafe environments [2].…”
Section: Introduction (citation type: mentioning)
confidence: 99%
“…Membership Query Synthesis might also be an interesting approach for tasks where the automatic extraction of large amounts of unlabelled data is not straightforward. One example that comes to mind is the detection of offensive language or 'hate speech', where we have to deal with highly unbalanced training sets containing only a small number of positive instances, and attempts to increase this number have been shown to result in systematically biased datasets (Davidson et al., 2019; Wiegand et al., 2019). Table 2 suggests that the generator produces instances with a more balanced class ratio (1.7 and 1.2) than the pool data (2.6) it was trained on.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
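The class ratios quoted in the statement above (2.6 for the pool data, 1.7 and 1.2 for the generated instances) are simply the count of negative instances divided by the count of positive ones. A minimal sketch, assuming binary labels with 1 marking the offensive class:

```python
from collections import Counter

def class_ratio(labels):
    """Negative-to-positive ratio; values above 1 mean negatives dominate.
    Assumes binary labels with 1 = offensive (positive) and 0 = not."""
    counts = Counter(labels)
    return counts[0] / counts[1]

# Toy data: 13 negatives and 5 positives give 2.6, matching the
# pool-data ratio quoted in the citation statement above.
print(class_ratio([0] * 13 + [1] * 5))  # 2.6
```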
“…Our experiments show improvement over their results, as shown in the "Experimental design and evaluation" section. Other research articles that provide source code for hate detection model development and/or evaluation, with links to code implementations that we could locate in our literature review, include (implementations in footnotes) Waseem and Hovy [65], Davidson et al. [66], ElSherief et al. [67], Saha et al. [68], Qian et al. [69], Ross et al. [70], de Gibert et al…”
Section: Research Gaps (citation type: mentioning)
confidence: 99%