Gatekeepers of toxicity: Reconceptualizing Twitter's abuse and hate speech policies
2021 | DOI: 10.1002/poi3.265

Abstract: Twitter has become the platform of choice for journalists, politicians, and citizens because of the ways that it encourages expression, participation, and debate. In recent years, however, Twitter's “zeal for free speech” has turned the platform into a breeding ground for abuse, harassment, and hate speech. Through a discourse analysis of Twitter's policies, rules, and enforcement guidelines, this project asks (1) What role do Twitter's policies, rules, and terms of service play in instances where Twitter does…


Citation Types: 0 supporting, 10 mentioning, 0 contrasting


Cited by 18 publications (12 citation statements) | References 55 publications
“…Second, our findings expand current discussions on content moderation to include the professional world. While existing research on content moderation concentrates on the state, social media companies, or users as a general category (Einwiller & Kim, 2020; He, 2020; Konikoff, 2021; Riedl et al., 2021), further research is necessary regarding organizational communication, building on more general insights on sectoral differences in communication practices (see, e.g., Einwiller & Kim, 2020; Liu et al., 2010). Future research may further investigate the reasons for choosing specific counterstrategies regarding online hate speech.…”
Section: Discussion (mentioning)
confidence: 99%
“…At the same time, they should also point out the complexity of the issue and facilitate exposure to alternative viewpoints (Bartlett & Krasodomski-Jones, 2016); they should encourage readers to condemn hateful comments, trigger positive feelings (such as empathy) for victims of discriminatory narratives, and/or trigger some doubt that could lead to a change in attitudes (Gemmerli, 2015; Silverman et al., 2016). While arguments exchanged between strangers may lead to a favourable change in discourse, this is very rare (Bartlett & Krasodomski-Jones, 2016; Benesch et al., 2016; Ernst et al., 2017; Konikoff, 2021; Schieb & Preuss, 2016; Wright et al., 2017). Most research is based on small experiments such as Munger's (2017), which attest to the power of in-group norms and the need to tackle this phenomenon if we want to reduce racism.…”
Section: Counter-speech: Definition and Impact (mentioning)
confidence: 99%
“…Concerning individual or institutional accountability for online hate speech, we argue that, because most online hate speech is covert, current measures regulating hate speech are insufficient. For example, the 2016 Code of Conduct from the EU Commission falls short in several regards (Konikoff, 2021), including concerns about the qualifications of those deleting hate messages and the fact that artificial intelligence models used to detect hate speech are 1.5 times more likely to flag tweets written by specific communities (Silva et al., 2016). Covert hate speech entails even more problems, as it uses implicit meaning and indirect discursive strategies to express hatred, including derogative metaphors (Musolff, 2015), inferences (Baider, 2022), and humor (Weaver, 2016).…”
Section: Introduction (mentioning)
confidence: 99%
“…We say lightly moderated because, while comments are screened by site editors before publication, CBC has previously stated that 85% to 90% of comments submitted to CBC.ca are published.⁵ This can be contrasted with platforms like Twitter, which engage in far more complex forms of gatekeeping and content moderation (see, e.g., Konikoff, 2021). The comment section’s innate porosity is, in fact, a benefit to our method, in that the conversational architecture of the comment section allows for varied cultural ideas and references to find expression in the comments, as opposed to a debate forum that closely regulates the parameters of speech.…”
Section: The Case Study: The CBC Article and Comment Section (mentioning)
confidence: 99%