This article presents a cross-platform analysis of the QAnon conspiracy theory, which was popularized online from 2017 onward. It theorizes its diffusion as a process of normiefication: a term drawing from Web vernacular that indicates how ideas and objects travel from fringe online subcultures to large audiences on mainstream platforms and news outlets. It finds that QAnon had a clear incubation period on 4chan/pol/, after which it quickly migrated to larger platforms, notably YouTube and Reddit. News media started covering the online phenomenon only when it moved offline, which in turn briefly amplified engagement on the other platforms. Through these data-driven insights, we aim to demonstrate how this cross-platform approach can be replicated and thus help make sense of the complexity of contemporary media ecologies and their role in the diffusion of conspiracy theories as well as other forms of mis- and disinformation.
Social media research software has come to play increasingly important roles in processes of knowledge production. While epistemological, logistical, legal, and ethical concerns put the spotlight on the software tools researchers are relying on, little attention is paid to the role of the ‘toolmaker’ beyond a vague idea of the ‘power’ wielded by those who design, develop, and maintain these technical artifacts. This paper seeks to address this role, both conceptually and with attention to practical concerns, as a form of hybrid and relational authorship. We thereby shift the focus from tool to tool-making, from artifact to practice, in an attempt to produce a different kind of ‘unblackboxing’ of tools than the somewhat overused tropes of open source code or open data. Our contribution proceeds in three steps. We first address tools and tool-making from a theoretical perspective, suggesting that their epistemological orientation reaches more deeply into the networks of research practice than words like ‘bias’ admit and proposing to consider the specific kind of hybrid authorship that emerges in this context. Calling on our own experiences as toolmakers, we then reflect on a cluster of issues where this authorial function becomes particularly visible. Here, we examine how motivations and commitments orient what a piece of software does and how it does it and discuss tool-making from the perspectives of co-development, maintenance and care, and ethics by design. We conclude by arguing that the most pressing concerns for tool-making lie in institutional arrangements that are crucial for the life of research software.
In this paper we develop an empirical, big data approach to analyze how alt-right vernacular concepts (such as kek and beta) were used on the notorious anonymous and ephemeral imageboard 4chan/pol/ and the fan wiki Encyclopedia Dramatica. While 4chan/pol/ is broadly regarded as an influential source of many of the Web’s most successful memes, such as Pepe the Frog, Encyclopedia Dramatica functions as a kind of satirical Wikipedia for this meme subculture, written in a high-concept and highly offensive vernacular style. While the sites’ affordances make them distinct, they are connected by a subcultural style and politics that has recently become increasingly connected with violent right-wing activism, forming a loose subcultural language community. Contrary to “memetic” theories of cultural evolution in media studies, our analysis draws on theoretical frameworks from poststructuralist and pragmatist philosophies of language and deploys empirical techniques from corpus linguistics to consider the role of online platforms in shaping these vernacular modes of expression. This approach helps us to identify instances of vernacular innovation within these corpora from 2012-2020, a period during which the white supremacist “alt-right” movement arose online. Through these analyses we contribute both to ongoing interdisciplinary attempts to bridge the gap between cultural-theoretical and computational-linguistic approaches to studying online subcultures, and to the empirical study of the vernacular roots of the “toxic memes” that appear to be an increasingly common feature of social media.
Over the second half of the 2010s, the /pol/ (‘politically incorrect’) board on the 4chan imageboard has emerged as a space within which various extreme political ideologies are discussed and cultivated, occasionally informing off-site acts of political extremism. While previous research has often studied this space as a unified whole, or an ‘amorphous blob’, it is relevant to demarcate the different publics within 4chan’s /pol/ board more specifically. This paper focuses on ‘generals’: recurring threads with a specific thematic focus, identified by a particular vernacular phrase or tag. By identifying them it is possible to partition the board’s archive into multiple distinct datasets comprising discussions about a particular topic, such as Donald Trump, the Syria war, or British politics. We provide a dataset containing 58,841 opening posts and 13,697,738 replies to those, divided across 329 thematically distinct general thread collections. In this paper we outline our data collection and query protocol, the structure of the data and its rationale, as well as a number of suggested research uses for this new data.
Introduction

The study of online conspiracy theory communities presents unique methodological challenges. Online conspiracy theorists often adhere to an individualistic knowledge culture of “doing one’s own research” (Fenster 158). This results in a decentralised landscape of theories, narratives, and communities that challenges conventional top-down approaches to analysis. Moreover, conspiracy theories tend to be discussed on the fringes of the online ecosystem, in chat groups and small subcultural Web forums, away from mainstream social media platforms such as Facebook and Twitter (Frenkel; see also De Zeeuw et al.). In this context, the messaging app Telegram has developed into a particularly prominent space (Rogers, “Deplatforming”; Urman and Katz). On the one hand, this platform is not quite part of the same mainstream as Facebook or Twitter, owing in part to its emphasis on security, “social privacy”, and lack of central moderation (Rogers, “Deplatforming”). But it is also not quite an “alternative social medium” (Gehl), as it does not position itself in opposition to mainstream platforms per se, nor does its business model, centred on investor funding and advertisements, present a break from the “dominant political economy” (ibid.). This ambiguous position might account for Telegram’s wide adoption, as well as its status as a relatively safe haven for communities deplatformed elsewhere – including a lively ecosystem of conspiracy theory communities (La Morgia et al.). Because Telegram communities are distributed over a wide range of channels and chat groups, they cannot always be investigated using existing analytical approaches for social media research. Confronting this challenge, we propose and discuss a method for studying Telegram communities that repurposes the “methods of the medium” (Rogers, Digital Methods). 
Specifically, our method appropriates Telegram’s feature of forwarding messages from one group to another to discover interlinked distributed communities, collect data from these communities for close reading, and map their information sharing practices. In this article, we will first present this approach and illustrate the types of analyses the collected data might afford in relation to a brief case study on Dutch-speaking conspiracy theories. In this short illustration, we map the convergence of right-wing and conspiratorial communities, both structurally and discursively. As Vieten discusses, “digital pandemic populism during lockdown might have pushed further the mobilisation of the far right, also on the streets”. In the Dutch context there has been a demonstrated connection between the two. Because of this connection, we were drawn to the question of what these entanglements might look like in a relatively unmoderated Telegram environment. We then proceed to discuss some strengths and limitations, identify avenues for future research, and conclude with some ethical, methodological, and epistemological reflections.

Overview of Method

Our method first combines expert knowledge and the affordances of the Telegram app’s ‘search’ function to retrieve a set of channels mentioning specific politicians or political parties, as well as other marked terms that might point towards far-right or conspiratorial content. This includes wakker (awakened), variations of batavier and geus (nationalist demonyms), names of known far-right politicians (such as The Netherlands’ Thierry Baudet and Flanders’ Dries Van Langenhove) and conspiracy theory activists, and volk (a term meaning roughly “our people”). 
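This query-based retrieval step can be approximated in code as matching a channel's public metadata against the marked terms. The sketch below is purely illustrative: the term list is abridged from the article, and the matching function stands in for Telegram's in-app search, whose ranking and matching behaviour are opaque.

```python
import re

# Seed query terms from the article; the list is abridged and illustrative.
MARKED_TERMS = ["wakker", "batavier", "geus", "volk", "baudet", "van langenhove"]

def matches_seed_terms(channel_title: str, channel_description: str = "") -> bool:
    """Rough stand-in for the manual search step: does a channel's public
    metadata mention any of the marked terms? Telegram's own search is
    opaque, so this only approximates that affordance."""
    text = f"{channel_title} {channel_description}".lower()
    return any(re.search(rf"\b{re.escape(term)}\b", text) for term in MARKED_TERMS)
```

In practice, channels surfaced this way would still be vetted manually before entering the expert seed list, as described below.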
As this approach precludes discovery of related groups that do not match the queries exactly, this initial curated list is then supplemented with channels advertised elsewhere, such as those featured on the Websites of far-right politicians and organisations, as well as channels covered in mainstream news media. This yields an initial expert list of channels, in our sample case of Dutch-speaking right-wing and conspiracist actors comprising 50 items. One might stop here and collect data for this manually curated list of groups, as in Nikkhah et al.’s study of Telegram use among the Iranian diaspora in the United States, or Davey and Weinberg’s analysis of discussions of the US military in extreme right-wing Telegram channels. But this would exclude any groups not known to the researchers, and groups are not always easy to discover on Telegram through search alone. Because of this, in a subsequent step, we expand the initial set of relevant Telegram channels by crawling posts in these channels that were forwarded from other channels, which constitute links between channels. We use a custom crawler based on the open source library Selenium, which allows one to control a browser programmatically. The browser is made to scroll through the Web-based view of the selected channels (e.g. https://t.me/s/durov). In principle, all messages ever posted in a channel are available in this view. We then follow the links embedded in forwarded messages and store the names of the channels they originate from. Overall, this method presumes that if a channel forwards a message from another channel, there will be some overlap in topic of discussion between the two, making the newly discovered channel similarly relevant to the analysis. This results in a network-like representation of connections between channels. In the context of our case study, this process expands our initial seed list to over 215 relevant public channels, after discarding groups that are not germane to the case study, i.e. 
those that were not related to far-right or conspiracy theory-related communities. To verify this, channels were inductively coded by a team of four researchers after capture. We then repeat the data collection for this new list of channels, retrieving forwarded messages from over 370,000 total messages spanning the period 2017-2021. This dataset then serves as a starting point for structural analysis of the wider context of the community, aspects of which will be illustrated in the next section.

Illustration

Emphasising the value of a “quali-quanti” (Venturini and Latour) approach, we offer a tentative analysis of the decentralised Dutch-speaking conspiracist narratives and communities on Telegram, and in a broader sense observe what such a distributed community may look like on the platform. This then suggests the various affordances which a dataset collected with this method can offer.

Fig. 1: Network visualisation of collected channels (depth: 1), including the channels they forward from (4,354 nodes). Nodes are sized and coloured by degree (number of connections to other channels).

A first observation that can be made concerns the topology of the network of channels we found (see fig. 1). A network analysis is a suitable distant reading approach for this kind of data, because it “maps and measures formal and informal relationships … viz., who knows whom, and who shares what information and knowledge with whom” (Serrat 40). It is a type of analysis that can reveal the relevance and positioning of actors and narratives within the data. In our network visualisation, we use the ForceAtlas2 algorithm (Jacomy et al.) to position the nodes. This algorithm makes more connected nodes “gravitate” towards each other; roughly speaking, the more central a node is, the more connected it is to the rest of the network (fig. 1). 
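The crawling step described above can be sketched in outline. The snippet below extracts the origin channels of forwarded messages from the HTML of a channel's Web-based view (the page a Selenium-driven browser would scroll through). The CSS class name used to mark forwarded-message headers is an assumption about Telegram's page markup at the time of writing, not something documented by Telegram, and may change.

```python
import re

# The t.me/s/<channel> web view marks forwarded messages with a header link
# back to the origin channel. The class name below is an assumption based on
# inspecting the page markup; adjust it if Telegram changes its templates.
FORWARD_RE = re.compile(
    r'<a[^>]*tgme_widget_message_forwarded_from_name[^>]*'
    r'href="https://t\.me/([A-Za-z0-9_]+)'
)

def extract_forwarded_channels(page_html: str) -> set[str]:
    """Return the set of channel names a page forwards messages from."""
    return {name.lower() for name in FORWARD_RE.findall(page_html)}
```

Running this over the scrolled-out page source of each seed channel yields the (forwarder, origin) pairs from which the channel network is built.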
Highlighting the channels representing political parties shows, for example, that while the Dutch far-right party FvD (FVDNL) is quite central (connected), this is not the case for the Flemish far-right politician Dries Van Langenhove (kiesdries). This suggests that, compared to a similar Flemish politician, the Dutch FvD is a more prominent part of the general conspiracist discussion – which is then perhaps more overtly politicised in the Dutch context. We can additionally discern channels that we might label “content aggregators”, which forward large numbers of messages from other channels but post comparatively little original content, occupying a relatively central place in the wider network. These content aggregators play an important structural role in the network, as other channels might forward messages from these collections on a “pick and choose” basis. More abstractly, they also serve to confirm thematic similarity between the channels whose messages they forward, with the owners of the aggregator channels acting as curators who collect interesting content about a certain topic of interest. Furthermore, our data reveal that the network is highly dynamic. As forwarded messages are timestamped, we can plot the graph at different moments in time. When comparing changes over a year, we can observe significant growth in the number of channels that connect to the network, particularly between 2020 and 2021 (see fig. 2). This growth, and the associated diversity of the network, can be attributed to Telegram’s role as a haven for actors that were deplatformed from other social media (or present themselves as targets of deplatforming and censorship); a “platform of last resort”. Previous research has for instance indicated that a number of alt-right fringe actors moved to Telegram after being deplatformed from Facebook or Twitter (Rogers, “Deplatforming”). 
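The structural measures used in this illustration (degree centrality, and the "content aggregator" role described above) can be computed directly from the edge list of forwarded messages. A minimal sketch, under the assumption that each forwarded message has been reduced to a (forwarder, origin) pair; the aggregator threshold is illustrative and not taken from the article:

```python
def degree_stats(edges):
    """edges: iterable of (forwarder, origin) pairs, one per forwarded message.
    Returns, per channel, the set of distinct channels it forwards from and
    the set of distinct channels that forward from it (out-/in-neighbours)."""
    forwards_from, forwarded_by = {}, {}
    for forwarder, origin in edges:
        forwards_from.setdefault(forwarder, set()).add(origin)
        forwarded_by.setdefault(origin, set()).add(forwarder)
    return forwards_from, forwarded_by

def likely_aggregators(edges, min_sources=3):
    """Flag channels that forward from many others but are themselves rarely
    forwarded: a simple heuristic for the 'content aggregator' role."""
    forwards_from, forwarded_by = degree_stats(edges)
    return {c for c, sources in forwards_from.items()
            if len(sources) >= min_sources and not forwarded_by.get(c)}
```

Because each edge carries a timestamp in the collected data, the same edge list can be filtered by date to produce the time-sliced networks discussed next.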
It can be hypothesised that events such as Donald Trump’s removal from Twitter around the time of the January 2021 Capitol riots might similarly have inspired other actors to move to Telegram in response to the platform policies of mainstream social media.

Fig. 2: The channel network based on messages sent before June 2020 (334 nodes). Same layout and parameters as in Figure 1 (not to scale). Nodes also appearing in Figure 1 are highlighted.

The structure of the network (in fig. 1) can also be used to discern ‘sub-communities’, which forward messages mutually but have relatively few links to the broader network. These can then be analysed qualitatively. This may then reveal that Flemish groups that oppose COVID-19 policies are less connected with the far right, whereas such groups that can be identified as Dutch seem to merge more easily with far-right channels. As discussed, this is also suggested by the central network position of FVDNL, the channel of the Dutch far-right political party Forum voor Democratie (FvD). This suggests that, structurally, Dutch far-right parties affiliate themselves more explicitly with conspiracy-related channels than Flemish parties do. Actual textual analyses of the channels’ posts and images, however, could offer a more nuanced picture, whereby structurally unconnected channels might still share common harmful narratives, spanning anti-progressive discourse, anti-mainstream sentiments, anti-government discourse, and evocations of prominent conspiracy theories such as QAnon and The Great Reset. These structural analyses then present a number of possibilities for further content analysis, where one might for example “zoom in” on Dutch far-right groups in particular, and qualitatively study images posted therein to identify salient narratives and positions.

Discussion

Methodological Gains

Variations of the proposed approach have been used in other work (e.g. Hashemi and Chahooki; La Morgia et al.). 
Most prominently, the Pushshift Telegram Dataset (Baumgartner et al.) comprises a large dataset of channel metadata, author metadata, and messages. This dataset was collected by discovering new channels from an initial seed list of approximately 300 channels using forwarded messages, and then collecting messages from these channels. While there is great archival value in the resulting datasets, our approach differs from these earlier approaches in a number of instructive ways. Like other work, we appropriate an affordance of Telegram – forwarded messages – for our own research purposes, but we purposely limit the extent to which we “follow” these forwarded messages. Though one could keep following links indefinitely, not every link is structural to the distributed community of users that is of interest here. Though more extensive crawling might reveal ever more channels and associated data, these are also increasingly unlikely to be related to the initial topic of interest, and are in any case further removed from the users of the initial seed groups. For this reason, we use a relatively shallow crawl depth and only retain links up to two “hops” away from the initial seed channels. This trades a higher number of crawled channels for a higher likelihood that the captured channels are indeed relevant to the case study. The suitable crawl depth will differ from case to case. In our case, it was established empirically through pilot crawls, which were stopped once collected groups appeared to no longer be strongly connected to the initial seed groups by topic. Datasets of this type are often also difficult to reproduce or qualify. For example, for the datasets compiled by La Morgia et al. as well as Baumgartner et al., the original seed list is not provided. Because of this, it is impossible to see where the network of found groups originates and how it might be biased one way or another. 
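The depth-limited snowball described here amounts to a breadth-first traversal with a hop cap. In this sketch, `get_forward_sources` is a hypothetical callable standing in for the crawl of a single channel (e.g. the Selenium-based scroll-and-parse step); it is not part of any published pipeline.

```python
from collections import deque

def snowball(seeds, get_forward_sources, max_depth=2):
    """Breadth-first expansion of a seed list of channels, following
    forwarded-message links up to `max_depth` hops, as described above.
    `get_forward_sources(channel)` must return the channels a given
    channel forwards messages from."""
    seen = {channel: 0 for channel in seeds}  # channel -> hop distance
    queue = deque(seeds)
    while queue:
        channel = queue.popleft()
        depth = seen[channel]
        if depth >= max_depth:
            continue  # do not expand beyond the hop cap
        for source in get_forward_sources(channel):
            if source not in seen:
                seen[source] = depth + 1
                queue.append(source)
    return seen
```

Channels discovered at the maximum depth are recorded but not expanded further, which is what keeps the crawl anchored to the seed list's topic.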
We suggest that, where possible, this seed list be documented and shared. In our case, this would be particularly important as the seeds represent an intentional and explicit bias; that is, towards Dutch-speaking conspiracy-themed and far-right groups. If the starting point of the crawl is documented, one could potentially re-collect the data later from the same starting points and compare results to those found initially, allowing for longitudinal analyses of the topology of these communities.

Ethics

The method described here does not deal with personally identifiable information (PII) per se; one can map the channel network on a structural level without collecting user data or analysing specific messages, when purely tracing the origin of forwarded messages. It should be noted, however, that in the process of collecting these structural data, one can potentially go further. For example, it is possible to scrape the full content of messages. When also including chat groups, user details including (user-provided) full names are also available. Their inclusion in (public) datasets should be subject to closer scrutiny than that of public channels, as the former may represent conversations had under the assumption that they were more or less off the record, while the latter are explicitly intended for information broadcast. Even if many of these chat groups are technically public, we should consider that "even if users are aware of being observed by others, they do not consider the possibility that their actions and interactions may be documented and analysed in detail at a later occasion" (Sveningsson Elm 77). In many cases, a (structural) analysis of channels only strikes a good balance between collecting representative data and respecting the privacy of those who produced the collected data.

Avenues for Future Research

The method may be expanded in a number of ways. 
One could, as discussed, increase the number of crawl iterations, which would expand the network at the potential cost of case-specificity. A larger seed list could also increase the amount of data, though the effect of this can often be limited, as relatively few channels forward messages from other channels. Links between channels could be collected not only from forwarded messages, as we do here, but also via other repurposed Telegram features such as channel invitation links or simple hyperlinks to other channels found in message content. The latter would require more fine-grained parsing of the message texts, through natural language processing for example, as a hyperlink can suggest a wider range of connections than an intentionally forwarded message. Additionally, and as previously mentioned, one could include not only public channels but also public chat groups, which are often linked to these channels and offer a space for people to discuss the content posted in them. While this can be an attractive way of acquiring extra data, we forego this in our example. As discussed, there are ethical trade-offs to consider when deciding to work with data from groups; but it can be argued that Telegram channels represent an explicit “broadcast” style of communication (Shehabat et al.). Because the channel owner(s) decide what is worthy of sharing, one can reasonably assume that if one “follows the medium” here, all content retrieved from a channel will be somewhat relevant to the channel's purported theme. Conversely, discourse in chat groups might be expected to meander in various directions and can easily include many (forwarded) messages only tangentially related to the case study of interest.

Conclusion

In this article we have sought to present one methodological approach to studying communities on Telegram. 
Rather than presenting a thorough case study or a definitive analysis of the Telegram-based community we discuss, our goal was to demonstrate the method's benefits but also its potential shortcomings, avenues of further development, and what types of analysis data collected with it might afford. A cursory analysis of the fringe community we studied here shows how with such data one can map a given community or set of communities on a structural level, which may then be used to demarcate areas of interest for further content analysis. The observations presented in this article are far from a complete picture of the data collected, but can serve as suggestions for analytical avenues one might venture down in a more substantive analysis. Beyond these observations, our repurposing of “the methods of the medium” (Rogers, Digital Methods) through forwarded messages allows us to contribute an empirically informed reflection on the possibilities and limitations of studying conspiracist information sharing practices on Telegram. Our method for instance highlights tensions between public and private knowledge, whereby we only consider information from public channels, and for technical and ethical reasons omit Telegram’s closed-off, private chat groups from our analysis. Our method of sourcing channels through forwarded messages does not preclude the existence of isolated channels or clusters of channels that, for a lack of forwarded messages from channels that were already identified, elude the scope of such snowballing efforts. Along the same lines, one could imagine that a deeper, more far-reaching crawl would reveal some strange bedfellows for the initial seed that were not part of our a priori understanding and hypotheses concerning the communities of interest. All of these considerations represent choices that may be taken differently depending on the case study at hand. 
Statement on Data and Ethics

The analysis on offer in this article is limited to names of public channels on Telegram, and we purposely refrain from citing channel names or analysing specific messages so they cannot be traced back to single persons. Our analysis does not comprise live subjects or PII, and thus did not require ERB clearance from our respective institutions. The anonymised dataset described above is available upon request via Zenodo at https://doi.org/10.5281/zenodo.6344795.

Acknowledgements

We would like to thank Nathalie van Raemdonck (Vrije Universiteit Brussel) and Jasmin Seijbel (Erasmus Universiteit Rotterdam) for their contributions to the empirical work underlying this article.

References

Baumgartner, Jason, et al. “The Pushshift Telegram Dataset.” Proceedings of the International AAAI Conference on Web and Social Media 14 (2020): 840–847.

Blondel, Vincent D., et al. “Fast Unfolding of Communities in Large Networks.” Journal of Statistical Mechanics: Theory and Experiment 2008.10 (2008): P10008.

Davey, Jacob, and Dana Weinberg. Influence: Discussions of the US Military in Extreme Right-Wing Telegram Channels. ISD Global, 2021.

De Zeeuw, Daniel, et al. “Tracing Normiefication: A Cross-Platform Analysis of the QAnon Conspiracy Theory.” First Monday 25.1 (2020).

Fenster, Mark. Conspiracy Theories. Minneapolis: U of Minnesota P, 2008.

Frenkel, Sheera. “Facebook Amps Up Its Crackdown on QAnon.” The New York Times 6 Oct. 2020, sec. Technology. <https://www.nytimes.com/2020/10/06/technology/facebook-qanon-crackdown.html>.

Gehl, Robert W. “The Case for Alternative Social Media.” Social Media + Society 1.2 (2015).

Hashemi, Ali, and Mohammad Ali Zare Chahooki. “Telegram Group Quality Measurement by User Behavior Analysis.” Social Network Analysis and Mining 9.1 (2019): 33.

Jacomy, Mathieu, et al. “ForceAtlas2, a Continuous Graph Layout Algorithm for Handy Network Visualization Designed for the Gephi Software.” PLOS ONE 9.6 (2014): e98679.
La Morgia, Massimo, et al. “Uncovering the Dark Side of Telegram: Fakes, Clones, Scams, and Conspiracy Movements.” arXiv:2111.13530 [cs] (2021). <http://arxiv.org/abs/2111.13530>.

Nikkhah, Sarah, et al. “Coming to America: Iranians’ Use of Telegram for Immigration Information Seeking.” Aslib Journal of Information Management 72.4 (2020): 561–585.

Rogers, Richard. “Deplatforming: Following Extreme Internet Celebrities to Telegram and Alternative Social Media.” European Journal of Communication 35.3 (2020): 213–229.

———. Digital Methods. Cambridge: MIT P, 2013.

Serrat, Olivier. “Social Network Analysis.” In Knowledge Solutions: Tools, Methods, and Approaches to Drive Organizational Performance. Ed. Olivier Serrat. Singapore: Springer, 2017. 39–43. <https://doi.org/10.1007/978-981-10-0983-9_9>.

Shehabat, Ahmad, Teodor Mitew, and Yahia Alzoubi. “Encrypted Jihad: Investigating the Role of Telegram App in Lone Wolf Attacks in the West.” Journal of Strategic Security 10.3 (2017): 27–53.

Sveningsson Elm, Malin. “How Do Various Notions of Privacy Influence Decisions in Qualitative Internet Research?” In Internet Inquiry: Conversations about Method. Eds. Annette Markham and Nancy Baym. SAGE, 2009. 69–97.

Urman, Aleksandra, and Stefan Katz. “What They Do in the Shadows: Examining the Far-Right Networks on Telegram.” Information, Communication & Society (2020).

Venturini, Tommaso, and Bruno Latour. “The Social Fabric: Digital Footprints and Quali-quantitative Methods.” In Proceedings of Futur en Seine 2009: The Digital Future of the City. Festival for Digital Life and Creativity, 2010. 87–101.

Vieten, Ulrike M. “The ‘New Normal’ and ‘Pandemic Populism’: The COVID-19 Crisis and Anti-Hygienic Mobilisation of the Far-Right.” Social Sciences 9.9 [165] (2020).
In the field of disinformation research, the study of antagonistic networks and discourse on the messaging platform Telegram has developed into an active area of investigation. To this end, recent literature has specifically set out to map the scale, scope, and narrative trends marking Telegram communities with ties to localised, European contexts. The present paper contributes to this line of inquiry by offering an empirically informed exploration of far-right and conspiracist Telegram channels associated with Flanders and the Netherlands. Building on previous observations concerning the propagation of disinformation on social media, the paper proposes a typology of the antagonistic discourse and narratives that circulate within these public channels. It thereby seeks to reconcile the comprehensive perspectives afforded by ‘big data’ approaches with the analysis of Telegram in an event- and culture-specific context. Covering the period March 2017–July 2021, this paper specifically considers an inductively collected dataset of 215 public Telegram channels and 371,951 messages pertaining to the relevant contexts, and bridges gaps between quantitative and qualitative methods by combining visual network analysis with discourse analysis. This combined approach reveals an expanding, highly diverse and dynamic network of Telegram channels, marked by overlapping antagonistic narratives, including traces of international conspiracy theories such as ‘The Great Reset’ and QAnon. These observations contribute to our understanding of how an emerging ‘alt-tech’ platform harbours and interconnects antagonistic actors and narratives in a specific linguistic and political context.