Although the problem of disinformation is on the rise across the globe, previous research has found that countries differ in the extent to which disinformation is widespread. In this study, we examine the willingness to disseminate disinformation across six countries (Belgium, France, Germany, Switzerland, the U.K., and the U.S.). We use the model by Humprecht, Esser, and van Aelst (2020) to study the degree to which various systemic-structural factors influence individual behavior and contribute to resilience to disinformation. Drawing on uniformly collected primary survey data, we use regression analyses to examine which factors may explain citizens' decisions not to propagate disinformation further. The results of our cross-national study show that resilience factors are country-specific and depend heavily on the respective political and information environments. While in some countries extreme ideology weakens resilience, in others low education can have such an effect. Cross-national resilience factors include heavy social media use, the use of alternative media, and populist party support. We discuss what kinds of tailored measures for combating online disinformation are needed to improve social resilience across different countries.
The increasing dissemination of online misinformation in recent years has raised the question of which individuals interact with this kind of information and what role attitudinal congruence plays in this context. To answer these questions, we conduct surveys in six countries (BE, CH, DE, FR, UK, and US) and investigate the drivers of the dissemination of misinformation on three non-country-specific topics (immigration, climate change, and COVID-19). Our results show that, besides issue attitudes and issue salience, political orientation, personality traits, and heavy social media use increase the willingness to disseminate misinformation online. We conclude that future research should not only consider individuals' beliefs but also focus on specific user groups that are particularly susceptible to misinformation and possibly caught in social media "fringe bubbles."
Disinformation can appear in various forms. First, different formats can be manipulated, such as texts, images, and videos. Second, the amount and degree of falseness can vary, from completely fabricated content to decontextualized information to satire that intentionally misleads recipients. The forms and formats of disinformation therefore vary and cannot be captured by the supposedly clear-cut categories of "true" and "false" alone.

Field of application/theoretical foundation: Studies on types of disinformation are conducted in various fields, e.g., political communication, journalism studies, and media effects research. Among other things, these studies identify the most common types of mis- or disinformation during certain events (Brennen, Simon, Howard, & Nielsen, 2020), analyze and categorize the behavior of different types of Twitter accounts (Linvill & Warren, 2020), and investigate the existence of several types of "junk news" in different national media landscapes (Bradshaw, Howard, Kollanyi, & Neudert, 2020; Neudert, Howard, & Kollanyi, 2019).

References/combination with other methods of data collection: Only relatively few studies combine methods. Some identify different types of disinformation via qualitative and quantitative content analyses (Bradshaw et al., 2020; Brennen et al., 2020; Linvill & Warren, 2020; Neudert et al., 2019). Others use surveys to analyze respondents' concerns about, and exposure to, different types of mis- and disinformation (Fletcher, 2018).

Example studies: Brennen et al. (2020); Bradshaw et al. (2020); Linvill and Warren (2020)

Information on example studies: Types of disinformation are defined by the presentation and contextualization of content, and sometimes additionally by details (e.g., professionalism) about the communicator. Studies either deductively identify different types of disinformation by applying the theoretical framework by Wardle (2019) (Brennen et al., 2020), or additionally identify and build categories inductively based on content analyses (Bradshaw et al., 2020; Linvill & Warren, 2020).

Table 1. Types of mis-/disinformation by Brennen et al. (2020)
- Satire or parody: —
- False connection: Headlines, visuals, or captions do not support the content.
- Misleading content: Misleading use of information to frame an issue or individual; facts/information are misrepresented or skewed.
- False context: Genuine content is shared with false contextual information, e.g., real images that have been taken out of context.
- Imposter content: Genuine sources, e.g., news outlets or government agencies, are impersonated.
- Fabricated content: Content is made up and 100% false; designed to deceive and do harm.
- Manipulated content: Genuine information or imagery is manipulated to deceive, e.g., deepfakes or other kinds of manipulation of audio and/or visuals.

Note. The categories are adapted from the theoretical framework by Wardle (2019). The coding instruction was: "To the best of your ability, what type of misinformation is it? (Select one that fits best.)" (Brennen et al., 2020, p. 12). The coders reached an intercoder reliability of Cohen's kappa = 0.82.
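The reliability figure above can be reproduced from raw codings. Below is a minimal Python sketch of Cohen's kappa for two coders assigning one category per item, the statistic Brennen et al. (2020) report; the function name and the example coding lists are hypothetical, not taken from the study.

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two coders' nominal codings (parallel lists)."""
    n = len(coder1)
    # Observed agreement: share of items both coders labeled identically
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Expected agreement under independence, from each coder's marginals
    c1, c2 = Counter(coder1), Counter(coder2)
    p_e = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of six items into Wardle-style categories
a = ["satire", "false context", "fabricated", "imposter", "misleading", "fabricated"]
b = ["satire", "false context", "fabricated", "imposter", "fabricated", "fabricated"]
print(round(cohens_kappa(a, b), 2))  # 0.78 (example data, not the study's 0.82)
```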
Table 2. Criteria for the "junk news" label by Bradshaw et al. (2020)

Professionalism (refers to the information about authors and the organization): "Sources do not employ the standards and best practices of professional journalism, including information about real authors, editors, and owners" (pp. 174-175). "Distinct from other forms of user-generated content and citizen journalism, junk news domains satisfy the professionalism criterion because they purposefully refrain from providing clear information about real authors, editors, publishers, and owners, and they do not publish corrections of debunked information" (p. 176).
Procedure:
- Systematically checked the about pages of domains: contact information, information about ownership and editors, and other information relating to professional standards
- Reviewed whether the sources appeared in third-party fact-checking reports
- Checked whether sources published corrections of fact-checked reporting
Examples: zerohedge.com, conservativefighters.org, deepstatenation.news

Counterfeit (refers to the layout and design of the domain itself): "(…) [S]ources mimic established news reporting by using certain fonts, having branding, and employing content strategies. (…) Junk news is stylistically disguised as professional news by the inclusion of references to news agencies and credible sources as well as headlines written in a news tone with date, time, and location stamps. In the most extreme cases, outlets will copy logos and counterfeit entire domains" (p. 176).
Procedure:
- Systematically reviewed organizational information about the owner and headquarters by checking sources like Wikipedia, the WHOIS database, and third-party fact-checkers (like Politico or MediaBiasFactCheck)
- Consulted country-specific expert knowledge of the media landscape in the US to identify counterfeiting websites
Examples: politicoinfo.com, NBC.com.co

Style (refers to the content of the domain as a whole): "(…) [S]tyle is concerned with the literary devices and language used throughout news reporting. (…) Designed to systematically manipulate users for political purposes, junk news sources deploy propaganda techniques to persuade users at an emotional, rather than cognitive, level and employ techniques that include using emotionally driven language with emotive expressions and symbolism, ad hominem attacks, misleading headlines, exaggeration, excessive capitalization, unsafe generalizations, logical fallacies, moving images and lots of pictures or mobilizing memes, and innuendo (Bernays, 1928; Jowett & O'Donnell, 2012; Taylor, 2003). (…) Stylistically, problematic sources will employ propaganda and clickbait techniques to varying degrees. As a result, determining style can be highly complex and context dependent" (p. 177).
Procedure:
- Examined at least five stories on the front page of each news source in depth during the US presidential campaign in 2016 and the State of the Union address in 2018
- Checked the headlines of the stories and the content of the articles for literary and visual propaganda devices
- Considered a source stylistically problematic if three of the five stories systematically exhibited elements of propaganda
Examples: 100percentfedup.com, barenakedislam.com, theconservativetribune.com, dangerandplay.com

Credibility (refers to the content of the domain as a whole): "(…) [S]ources rely on false information or conspiracy theories and do not post corrections" (p. 175). "[They] typically report on unsubstantiated claims and rely on conspiratorial and dubious sources. (…) Junk news sources that satisfy the credibility criterion frequently fail to vet their sources, do not consult multiple sources, and do not fact-check" (p. 178).
Procedure:
- Examined at least five front-page stories and reviewed the sources that were cited
- Reviewed pages to see if they included known conspiracy theories on issues such as climate change, vaccination, and "Pizzagate"
- Checked third-party fact-checkers for evidence of debunked stories and conspiracy theories
Examples: infowars.com, endingthefed.com, thegatewaypundit.com, newspunch.com

Bias (refers to the content of the domain as a whole): "(…) [H]yper-partisan media websites and blogs (…) are highly biased, ideologically skewed, and publish opinion pieces as news. Basing their stories on the same events, these sources manage to convey strikingly different impressions of what actually transpired. It is such systematic differences in the mapping from facts to news reports that we call bias. (…) Bias exists on both sides of the political spectrum. Like determining style, determining bias can be highly complex and context dependent" (pp. 177-178).
Procedure:
- Checked third-party sources that systematically evaluate media bias
- If the domain was not evaluated by a third party, examined the ideological leaning of the sources used to support stories appearing on the domain
- Evaluated the labeling of politicians (are there differences between the left and the right?)
- Identified bias created through the omission of unfavorable facts or through writing that is falsely presented as being objective
Examples on the right: breitbart.com, dailycaller.com, infowars.com, truthfeed.com
Examples on the left: occupydemocrats.com, addictinginfo.com, bipartisanreport.com

Note. The coders reached an intercoder reliability of Krippendorff's alpha = 0.89. The "junk news" label is assigned when a source fulfills at least three of the five criteria; it refers to sources that deliberately publish misleading, deceptive, or incorrect information packaged as real news.
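Operationally, the label is a simple threshold over the five binary criteria. The following minimal sketch illustrates the 3-of-5 rule from Bradshaw et al. (2020); the function name and the example coding are hypothetical.

```python
# The five criteria as coded by Bradshaw et al. (2020)
CRITERIA = ("professionalism", "counterfeit", "style", "credibility", "bias")

def is_junk_news(coding):
    """Label a source 'junk news' if it satisfies at least three
    of the five criteria (Bradshaw et al., 2020)."""
    return sum(bool(coding.get(c, False)) for c in CRITERIA) >= 3

# Hypothetical coding of one domain
coding = {"professionalism": True, "counterfeit": False, "style": True,
          "credibility": True, "bias": False}
print(is_junk_news(coding))  # True: three of five criteria satisfied
```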
Table 3. Identified types of IRA-associated Twitter accounts by Linvill and Warren (2020)

Right troll: "Twitter-handles broadcast nativist and right-leaning populist messages. These handles' themes were distinct from mainstream Republicanism. (…) They rarely broadcast traditionally important Republican themes, such as taxes, abortion, and regulation, but often sent divisive messages about mainstream and moderate Republicans. (…) The overwhelming majority of handles, however, had limited identifying information, with profile pictures typically of attractive, young women" (p. 5). Hashtags frequently used by these accounts: #MAGA (i.e., "Make America Great Again"), #tcot (i.e., "Top Conservative on Twitter"), #AmericaFirst, and #IslamKills

Left troll: "These handles sent socially liberal messages, with an overwhelming focus on cultural identity. (…) They discussed gender and sexual identity (e.g., #LGBTQ) and religious identity (e.g., #MuslimBan), but primarily focused on racial identity. Just as the Right Troll handles attacked mainstream Republican politicians, Left Troll handles attacked mainstream Democratic politicians, particularly Hillary Clinton. (…) It is worth noting that this account type also included a substantial portion of messages which had no clear political motivation" (p. 6). Hashtags frequently used by these accounts: #BlackLivesMatter, #PoliceBrutality, and #BlackSkinIsNotACrime

Newsfeed: "These handles overwhelmingly presented themselves as U.S. local news aggregators and had descriptive names (…). These accounts linked to legitimate regional news sources and tweeted about issues of local interest (…). A small number of these handles (…) tweeted about global issues, often with a pro-Russia perspective" (p. 6). Hashtags frequently used by these accounts: #news, #sports, and #local

Hashtag gamer: "These handles are dedicated almost entirely to playing hashtag games, a popular word game played on Twitter. Users add a hashtag to a tweet (e.g., #ThingsILearnedFromCartoons) and then answer the implied question. These handles also posted tweets that seemed organizational regarding these games (…). Like some tweets from Left Trolls, it is possible such tweets were employed as a form of camouflage, as a means of accruing followers, or both. Other tweets, however, often using the same hashtag as mundane tweets, were socially divisive (…)" (p. 7). Hashtags frequently used by these accounts: #ToDoListBeforeChristmas, #ThingsYouCantIgnore, #MustBeBanned, and #2016In4Words

Fearmonger: "These accounts spread disinformation regarding fabricated crisis events, both in the U.S. and abroad. Such events included non-existent outbreaks of Ebola in Atlanta and Salmonella in New York, an explosion at the Columbian Chemicals plant in Louisiana, a phosphorus leak in Idaho, as well as nuclear plant accidents and war crimes perpetrated in Ukraine. (…) These accounts typically tweeted a great deal of innocent, often frivolous content (i.e., song lyrics or lines of poetry) which were potentially automated. With this content these accounts often added popular hashtags such as #love (…) and #rap (…). These accounts changed behavior sporadically to tweet disinformation, and that output was produced using a different Twitter client than the one used to produce the frivolous content. (…) The Fearmonger category was the only category where we observed some inconsistency in account activity. A small number of handles tweeted briefly in a manner consistent with the Right Troll category but switched to tweeting as a Fearmonger or vice versa" (p. 7). Hashtags frequently used by these accounts: #Fukushima2015 and #ColumbianChemicals

Note. The categories were identified by qualitatively analyzing the content produced and were then refined and explored in more detail via a quantitative analysis. The coders reached an intercoder reliability of Krippendorff's alpha = 0.92.
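Unlike Cohen's kappa, the Krippendorff's alpha reported by Bradshaw et al. (2020) and Linvill and Warren (2020) accommodates more than two coders and units with unequal numbers of codings. Below is a minimal sketch for nominal data; the function name and the example data are hypothetical, not the studies' material.

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal codings.
    `units` is a list of per-unit label lists (one label per coder)."""
    o = Counter()      # coincidence counts o[(c, k)]
    n_cat = Counter()  # marginal frequency of each category
    n = 0              # total number of pairable values
    for unit in units:
        m = len(unit)
        if m < 2:
            continue   # units coded by fewer than two coders are unpairable
        for i, c in enumerate(unit):
            for j, k in enumerate(unit):
                if i != j:
                    o[(c, k)] += 1 / (m - 1)
        n_cat.update(unit)
        n += m
    # Observed disagreement: off-diagonal coincidences
    d_o = sum(v for (c, k), v in o.items() if c != k) / n
    # Expected disagreement from the category marginals
    d_e = sum(n_cat[c] * n_cat[k]
              for c in n_cat for k in n_cat if c != k) / (n * (n - 1))
    return 1 - d_o / d_e

# Hypothetical: three coders assigning troll types to four accounts
units = [["right", "right", "right"], ["left", "left", "left"],
         ["newsfeed", "newsfeed", "fearmonger"], ["hashtag", "hashtag", "hashtag"]]
print(round(krippendorff_alpha_nominal(units), 2))  # 0.80 on this example
```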
References

Bradshaw, S., Howard, P. N., Kollanyi, B., & Neudert, L.-M. (2020). Sourcing and automation of political news and information over social media in the United States, 2016-2018. Political Communication, 37(2), 173-193.

Brennen, J. S., Simon, F. M., Howard, P. N., & Nielsen, R. K. (2020). Types, sources, and claims of COVID-19 misinformation. Reuters Institute. Retrieved from http://www.primaonline.it/wp-content/uploads/2020/04/COVID-19_reuters.pdf

Fletcher, R. (2018). Misinformation and disinformation unpacked. Reuters Institute. Retrieved from http://www.digitalnewsreport.org/survey/2018/misinformation-and-disinformation-unpacked/

Linvill, D. L., & Warren, P. L. (2020). Troll factories: Manufacturing specialized disinformation on Twitter. Political Communication, 1-21.

Neudert, L.-M., Howard, P., & Kollanyi, B. (2019). Sourcing and automation of political news and information during three European elections. Social Media + Society, 5(3). https://doi.org/10.1177/2056305119863147

Wardle, C. (2019). First Draft's essential guide to understanding information disorder. UK: First Draft News. Retrieved from https://firstdraftnews.org/wp-content/uploads/2019/10/Information_Disorder_Digital_AW.pdf?x76701
Social media have become an integral part of election campaigns. They offer political actors the opportunity to address the electorate directly with their own messages. Through viral diffusion and high resonance, content can also reach user groups outside the actors' own social media networks, increasing the likelihood that political actors reach a potentially new electorate on social media. Social network platforms such as Facebook quantify resonance through user reactions: the number of likes, shares, and comments (Facebook resonance) a post receives. This study addresses the question of which characteristics (format, timing, and content) posts must have in order to trigger particularly many user reactions and thus generate the highest possible Facebook resonance. A quantitative content analysis of 733 Facebook posts by the seven largest parties represented in the Swiss parliament, covering the three months before the 2015 election, shows that above all the use of news factors and party-owned issues helps to increase Facebook resonance.
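As a rough illustration of the study's outcome measure, Facebook resonance is the sum of the reactions a post receives. The sketch below is hypothetical (field names and data are invented, not the study's code) and shows the metric plus a simple comparison by news-factor use.

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    comments: int
    has_news_factors: bool  # whether the post uses news factors (coded manually)

def resonance(post):
    """Facebook resonance: likes + shares + comments of a post."""
    return post.likes + post.shares + post.comments

# Hypothetical posts; the study coded 733 real party posts
posts = [Post(120, 15, 33, True), Post(40, 2, 5, False), Post(210, 48, 60, True)]
for uses_news_factors in (True, False):
    group = [resonance(p) for p in posts if p.has_news_factors == uses_news_factors]
    print(uses_news_factors, sum(group) / len(group))  # mean resonance per group
```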
Throughout the current global health crisis, false and misleading content has proliferated on social media. Previous research indicates that social media users primarily share information that contains attention-grabbing elements. Because sensationalist elements are prevalent in disinformation, this study examines the role of sensationalism in generating support for disinformation. We conducted survey experiments in six countries (N = 7,009), presenting versions of a false claim that differed in their degree of sensationalism. We varied three contextual conditions for disinformation support: whether respondents grew up in a tabloid-oriented national news culture, whether they indicated individual usage preferences for tabloid and alternative media, and how they rated their situational uncertainty during the pandemic. Our results show a weak influence of tabloidized cultures, but people who frequently use tabloid or alternative media are more likely to agree with disinformation. Users who are uncertain about what is true and what is false are also more likely to agree with disinformation, especially when it is presented sensationally. The average user, however, is more likely to agree with disinformation that is presented neutrally. This finding is concerning, as disinformation presented in a sober manner is much harder to detect for those who want to fight the "infodemic."