2021
DOI: 10.1145/3479512

A Framework of Severity for Harmful Content Online

Abstract: The proliferation of harmful content on online social media platforms has necessitated empirical understanding of experiences of harm online and the development of practices for harm mitigation. Both understandings of harm and approaches to mitigating that harm, often through content moderation, embed implicit frameworks of prioritization: what forms of harm should be researched, how policy on harmful content should be implemented, and how harmful content should be moderated. To aid efforts of better…

Cited by 38 publications (20 citation statements)
References 61 publications (56 reference statements)
“…Although typologizing the kinds of harms that can occur is necessary to moderate them (Banko et al., 2020; Banks, 2010), what platforms often fail to do is meaningfully repair what leads to harmful behavior and content in the first place. Moreover, platforms also attempt to assign point values or otherwise quantitatively decide what is or is not harmful content, which can lead to overlooking, or outright allowing, certain harms to persist if they do not meet a certain numeric threshold, perpetuating the notion that certain kinds of harms are prioritized over others (Scheuerman et al., 2021). Despite platforms implementing these policies, continuing to update what constitutes hate speech and harassment, and attempting to moderate this kind of content, inconsistencies in these definitions abound both within and across platforms (Pater et al., 2016).…”
Section: Literature Review
confidence: 99%
“…In cases of extreme intracommunity conflict, such tensions can even lead to fragmentation as some members exit [26] and form new, alternative communities [23,64]. Additional evidence for intracommunity tension can be found in work that examines how members define rulebreaking and how to fairly punish such behavior; this work has found substantial disagreement amongst community members [43,70].…”
Section: Intracommunity Tension and Conflict
confidence: 99%
“…Even when values are shared by all community members, there are likely to be differences in their perceived importance across members. Some work studying perceptions of harmful behavior already provides evidence for differences of opinion on values within a community [43,70]. In the context of peer production communities such as Wikipedia, some research has already explored sources of internal disagreement, such as tensions between senior and junior members [34,35,80]; however, the extent to which these findings generalize to social media platforms such as reddit may be limited.…”
Section: Community Governance and (Lack of) Consensus
confidence: 99%
“…Online harm occurs in a variety of ways and generally takes two related forms: individually targeted harassment (Vogels, 2021) and harmful content that may be targeted at a group or at no one in particular (Scheuerman et al, 2021). The lack of context sensitivity in the responses to all forms of harm appears to be a widespread pattern among social platforms.…”
confidence: 99%