Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
DOI: 10.18653/v1/2021.acl-long.247
Annotating Online Misogyny

Abstract: Online misogyny, a category of online abusive language, has serious and harmful social consequences. Automatic detection of misogynistic language online, while imperative, poses complicated challenges for data gathering, data annotation, and bias mitigation alike, as this type of data is linguistically complex and diverse. This paper makes three contributions in this area: Firstly, we describe the detailed design of our iterative annotation process and codebook. Secondly, we present a comprehensive taxonomy of l…

Cited by 19 publications (29 citation statements)
References 54 publications
“…Yet other works have grouped annotators into pools by forming clusters of annotators who rate similarly [2,3,6] or by using community-detection algorithms on a graph of annotators [39]. Others have incorporated a diversity of annotators in their rater pools but not explicitly grouped annotators according to identity and drawn conclusions based on identity groups specifically [44].…”
Section: Related Work
confidence: 99%
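The pooling strategy quoted above, grouping annotators who rate similarly, can be sketched with a minimal pairwise-agreement pooling routine. This is an illustrative simplification, not the cited works' method; those use dedicated clustering and community-detection algorithms, and all names and data here are hypothetical:

```python
def pairwise_agreement(a, b):
    """Fraction of shared items on which annotators a and b gave the same label."""
    shared = [i for i in a if i in b]
    if not shared:
        return 0.0
    return sum(a[i] == b[i] for i in shared) / len(shared)

def group_annotators(ratings, threshold=0.8):
    """Greedily pool annotators whose agreement with a pool's first member
    (its seed) meets the threshold.

    ratings: dict mapping annotator name -> {item_id: label}
    """
    pools = []
    for name, labels in ratings.items():
        placed = False
        for pool in pools:
            seed = pool[0]
            if pairwise_agreement(labels, ratings[seed]) >= threshold:
                pool.append(name)
                placed = True
                break
        if not placed:
            pools.append([name])
    return pools

# Toy data: ann1 and ann2 rate alike; ann3 systematically disagrees.
ratings = {
    "ann1": {1: "abusive", 2: "neutral", 3: "abusive"},
    "ann2": {1: "abusive", 2: "neutral", 3: "abusive"},
    "ann3": {1: "neutral", 2: "abusive", 3: "neutral"},
}
print(group_annotators(ratings))  # [['ann1', 'ann2'], ['ann3']]
```

A graph-based variant would instead build an agreement graph over annotators and run community detection on it, as in the cited approach.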
“…Another paper published at ACL 2021 studying online misogyny (Zeinert et al, 2021) prompted: large amounts of online abuse and doxxing directed at the authors by name; frivolous freedom of information requests explicitly for the purpose of wasting time; complaints made to the authors' external funding organisations; public attacks from politicians against the authors and their institution; and pejorative opinion articles in the national press against the research. Researchers publishing research about hate speech, misinformation, bias or other forms of textual harms should be aware of and prepared for these kinds of interactions, even though this is far from the norm for most areas of academic research.…”
Section: Preparing For Releasing Textual Harm Research
confidence: 99%
“…Annotator errors can be found using noise identification techniques (e.g., Hovy et al., 2013; Zhang et al., 2017; Paun et al., 2018; Northcutt et al., 2021), corrected by expert annotators (Vidgen and Derczynski, 2020; Vidgen et al., 2021a), or their impact mitigated by label aggregation. Guidelines which are unclear or incomplete need to be improved by dataset creators, which may require iterative approaches to annotation (Founta et al., 2018; Zeinert et al., 2021). Therefore, quality assurance under the prescriptive paradigm is a laborious but structured process, with inter-annotator agreement as a useful, albeit noisy, measure of dataset quality.…”
Section: Key Benefits
confidence: 99%
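Two of the quality-assurance tools named in the statement above, label aggregation and inter-annotator agreement, can be sketched in a few lines. This is a generic illustration (majority vote and two-annotator Cohen's kappa), not the specific procedures used in the cited datasets, and the toy labels are invented:

```python
from collections import Counter

def majority_label(labels):
    """Aggregate one item's annotations by majority vote (ties -> first most common)."""
    return Counter(labels).most_common(1)[0][0]

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' parallel label sequences:
    observed agreement corrected for agreement expected by chance."""
    assert len(a) == len(b) and a
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    categories = set(a) | set(b)
    expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

ann1 = ["abusive", "neutral", "abusive", "neutral"]
ann2 = ["abusive", "neutral", "neutral", "neutral"]
print(majority_label(["abusive", "abusive", "neutral"]))  # abusive
print(cohens_kappa(ann1, ann2))  # 0.5
```

Kappa near 1 suggests reliable annotations; kappa near 0 means agreement is no better than chance, which is one reason the statement calls it a useful but noisy signal of dataset quality.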