2021
DOI: 10.1371/journal.pone.0256762
Understanding international perceptions of the severity of harmful content online

Abstract: Online social media platforms constantly struggle with harmful content such as misinformation and violence, but how to effectively moderate and prioritize such content for billions of global users with different backgrounds and values presents a challenge. Through an international survey of 1,696 internet users across 8 countries, this empirical study examines how international users perceive harmful content online and the similarities and differences in their perceptions. We found…

Cited by 32 publications (25 citation statements); References 26 publications
“…Therefore, in the security warning domain, various designs have been applied depending on the situation (i.e., website browsing and file transmission) (Akhawe & Felt, 2013). Previous studies on social media misinformation warnings reported similar findings: users may hold differing opinions on the severity of harmful content online and call for customized moderation (Jiang, Scheuerman, Fiesler, & Brubaker, 2021). This suggests that users may not want to encounter the same warning design frequently, or that they may intentionally ignore the message depending on their interest in or familiarity with the topic.…”
Section: Introduction
confidence: 91%
“…To see how this might be true, we can examine annotator disagreement rates in today's datasets: for instance, in a toxicity task, over one third of annotators on average disagree with any toxic classification, even after accounting for label noise [37]; in a misinformation classification task, three professional fact checkers were unanimous on only half of URLs [5]. Across countries, which content was perceived as more or less harmful varied significantly [43]. Such disagreement indicates that there may be multiple competing voices, potentially representing different groups of people or sets of values.…”
Section: Disagreement Datasets and Machine Learning
confidence: 99%
“…Online harm and abuse are highly subjective, situated concepts [102] whose interpretations vary across cultures [60] and even across individuals [56]. Online platforms do not explicitly define these concepts in their policies and take an ad hoc approach to setting standards for addressing them [61,89].…”
Section: Online Harm and Safety in HCI
confidence: 99%