2022
DOI: 10.1016/j.cose.2021.102577
Misinformation warnings: Twitter’s soft moderation effects on COVID-19 vaccine belief echoes

Abstract: Twitter, prompted by the rapid spread of alternative narratives, started actively warning users about the spread of COVID-19 misinformation. This form of soft moderation comes in two forms: as an interstitial cover displayed before the Tweet is shown to the user, or as a contextual tag displayed below the Tweet. We conducted a 319-participant study with both verified and misleading Tweets covered or tagged with the COVID-19 misinformation warnings to investigate how Twitter users perceive the accuracy of COVID-19 va…


Cited by 55 publications (36 citation statements)
References 32 publications (51 reference statements)
“…Due to the negative potential influence on people’s health practices, health misinformation has received more scholarly attention, especially since the beginning of the COVID-19 pandemic [ 80 – 82 ]. It is particularly harmful because: 1) people are more likely to trust the information after they have been exposed to it, 2) correcting misinformation is time-consuming and resource-intensive, and 3) even after correction, it may continue to influence attitudes and behaviours, reflecting a phenomenon known as “belief echoes” [ 83 , 84 ]. Correcting disinformation has become more complex and difficult as social media platforms have grown in popularity, catalysing the quick and widespread spread of misinformation.…”
Section: Discussion
confidence: 99%
“…Evidence suggests that only the interstitial covers, but not the trustworthiness tags, make users heed the warnings of misinformation [64,65,69,84]. It is tempting, however, to simply discard the trustworthiness tags and use only interstitial covers.…”
Section: Achtung! Misinformation
confidence: 99%
“…What actually is a bit difficult to understand is why, despite these advancements in usable security, warnings about misinformation on social media have made little progress in fostering desirable security behavior [69]. One could argue that the nature of the security hazard differs between the two settings — traditional programmatic security is far more complex to grasp than picking up on a casual post that links the COVID-19 vaccines with infertility — and that this makes designing misinformation warnings an entirely different challenge.…”
Section: Introduction
confidence: 99%
“…Researchers in a preliminary study found that some means of soft moderation on social media sites like Twitter are more promising than others, including warnings which fully obscure content like Twitter's "Read First" warning [45]. Levying soft moderation techniques like these may encourage users to read the full context of news content, as well as serve as a reminder of the dangers of out-of-context content in perpetuating the spread of alt-narratives online.…”
Section: Guarding Against Adversarial Language Modeling
confidence: 99%
“…Another option would be to automate even more of the process. While an adversary might extend a couple of modules that we built for the Out-of-Context Summarizer summaries to post on social media platforms without any human intervention, with ethical considerations, we could develop automatically-generated simulated tweets or parleys for experimentation to gauge user reactions in a lab setting, similar to user studies investigating the effects of soft moderation on Twitter [45].…”
Section: Future Enhancements
confidence: 99%