Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1357

A Just and Comprehensive Strategy for Using NLP to Address Online Abuse

Abstract: Online abusive behavior affects millions, and the NLP community has attempted to mitigate this problem by developing technologies to detect abuse. However, current methods have largely focused on a narrow definition of abuse, to the detriment of victims who seek both validation and solutions. In this position paper, we argue that the community needs to make three substantive changes: (1) expanding our scope of problems to tackle both more subtle and more serious forms of abuse, (2) developing proactive technologies …

Cited by 88 publications (74 citation statements) | References 60 publications
“…More boldly, Jurgens et al. (2019) call for a paradigm shift in the use of NLP technologies to address abusive language. The authors point out that only some phenomena along the spectrum of abusive content are actually addressed, while others are neglected for being either too subtle or quite rare.…”
Section: Lexical Analysis
Citation type: mentioning (confidence: 99%)
“…Toxicity and offensiveness are not always expressed with toxic language. While a substantial community effort has rightfully focused on identifying, preventing, and mitigating overtly toxic, profane, and hateful language (Schmidt and Wiegand, 2017), offensiveness spans a far larger spectrum that includes comments with more implicit and subtle signals that are no less offensive (Jurgens et al., 2019). One significant class of subtle-but-offensive comments includes microaggressions (MAs; Sue et al., 2007), defined in Merriam-Webster as "a comment or action that …". As the citing paper's Figure 1 illustrates, existing state-of-the-art tools for hate speech detection and sentiment analysis cannot identify the veiled offensiveness of MAs such as the real comment shown there, because in many cases the framing of a MA includes stylistic markers of positive language.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
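The excerpt above claims that off-the-shelf toxicity classifiers tend to miss microaggressions because their surface form looks positive. A quick way to see the shape of that claim is to score an overtly abusive comment and a veiled one with a publicly available toxicity model. The sketch below is illustrative only: the Hugging Face transformers pipeline, the unitary/toxic-bert checkpoint, and both example sentences are assumptions of this sketch, not tools or data from the cited papers.

```python
# Minimal sketch (not the cited papers' tooling): compare how an
# off-the-shelf toxicity classifier scores an overtly abusive comment
# versus a positively framed microaggression.
# Assumptions: `pip install transformers torch`; the publicly available
# unitary/toxic-bert multi-label toxicity checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

examples = [
    "You are an idiot and nobody wants you here.",                 # overt abuse (invented example)
    "You're surprisingly articulate for someone from your area.",  # veiled microaggression (invented example)
]

# top_k=None returns every label's score; sigmoid treats the labels
# (toxic, insult, threat, ...) as independent probabilities.
results = classifier(examples, top_k=None, function_to_apply="sigmoid")

for text, scores in zip(examples, results):
    max_toxicity = max(s["score"] for s in scores)
    print(f"max toxicity {max_toxicity:.3f}  |  {text}")

# If the observation quoted above holds, the second comment should receive
# a much lower score despite being offensive, since its surface form
# carries positive stylistic markers.
```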
“…We partition these expressions as either Recommended or Non-Recommended, according to their prescriptive status, by consulting guidelines published by three US-based organizations: the Anti-Defamation League, ACM SIGACCESS, and the ADA National Network (Cavender et al., 2014; Hanson et al., 2015; League, 2005; Network, 2018). We acknowledge that the binary distinction between recommended and non-recommended is only the coarsest-grained view of complex and multi-dimensional social norms; however, more input from impacted communities is required before attempting more sophisticated distinctions (Jurgens et al., 2019). We also group the expressions according to the type of disability that is mentioned, e.g.…”
Section: Linguistic Phrases For Disabilities
Citation type: mentioning (confidence: 99%)
“…In another deployment context, models for detecting abuse can be used to nudge writers to rethink comments which might be interpreted as toxic (Jurgens et al., 2019). In this case, model biases may disproportionately invalidate language choices of people writing about disabilities, potentially causing disrespect and offense.…”
Section: Implications Of Model Biases
Citation type: mentioning (confidence: 99%)
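The nudging deployment described above can be pictured as a thin layer over any abuse classifier: before a comment is posted, its toxicity score is checked against a threshold and the writer is shown a prompt rather than being blocked. The sketch below is an illustration only; the scoring interface, the 0.8 threshold, and the wording of the prompt are assumptions of this sketch, not anything specified by the cited papers.

```python
# Illustrative nudge layer over a generic abuse classifier (hypothetical
# interface, not the cited papers' system). The threshold and the message
# wording are placeholder assumptions.
from typing import Callable, Optional

def nudge_if_toxic(
    comment: str,
    score_fn: Callable[[str], float],   # returns a toxicity probability in [0, 1]
    threshold: float = 0.8,             # assumed cutoff; would need tuning
) -> Optional[str]:
    """Return a gentle prompt for the writer, or None to post as-is."""
    if score_fn(comment) >= threshold:
        return (
            "Your comment may come across as hurtful to others. "
            "Would you like to rephrase it before posting?"
        )
    return None

if __name__ == "__main__":
    # Stand-in scorer for demonstration; in practice score_fn would wrap a
    # model such as the classifier sketched earlier.
    fake_score = lambda text: 0.95 if "idiot" in text.lower() else 0.05
    print(nudge_if_toxic("You are an idiot.", fake_score))
    print(nudge_if_toxic("Thanks for the explanation!", fake_score))
```

Note that such a layer inherits whatever biases the underlying classifier has, which is exactly the concern the excerpt raises: biased scores would disproportionately prompt, and thereby invalidate, writers discussing disabilities.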