Abstract: This paper studies the efficacy of Reddit's quarantine, increasingly implemented on the platform as a means of restricting and reducing misogynistic and other hateful material. Using the case studies of r/TheRedPill and r/Braincels, the paper argues that the quarantine successfully cordoned off affected subreddits and their associated hateful material from the rest of the platform. It did not, however, reduce the levels of hateful material within the affected spaces. Instead, many users reacted by leaving Reddit for l…
“…It is also important to note that we find there to be minimal filtering of extreme content on both Reddit and Gab. However, we do find evidence of both extreme and fringe content on all three platforms, supported by research which posits the far-right as existing on the sites (O'Callaghan et al., 2015; Berger, 2018b; Lewis, 2018; Conway, Scrivens, and Macnair, 2019; Nouri, Lorenzo-Dus, and Watkin, 2019; Copland, 2020; Gaudette et al., 2020). For Reddit and Gab, the lack of algorithmic promotion suggests that there are other factors, possibly related to the platforms' other affordances or their user bases, that drive extreme content.…”
Section: Policy Discussion (supporting)
confidence: 76%
“…Reddit (Conway, 2016; Copland, 2020; Gaudette et al., 2020), and Gab (Berger, 2018b; Conway, Scrivens, and Macnair, 2019; Nouri, Lorenzo-Dus, and Watkin, 2019).…”
Policymakers have recently expressed concerns over recommendation algorithms and their role in forming "filter bubbles". This is a particularly pressing concern in the context of extremist content online, as these algorithms may promote extremist content at the expense of more moderate voices. In this article, we make two contributions to this debate. Firstly, we provide a novel empirical analysis of three platforms' recommendation systems when interacting with far-right content. We find that one platform, YouTube, does amplify extreme and fringe content, while two, Reddit and Gab, do not. Secondly, we contextualise these findings within the regulatory debate. There are currently few policy instruments for dealing with algorithmic amplification, and those that do exist largely focus on transparency. We argue that policymakers have yet to fully understand the problems inherent in "de-amplifying" legal, borderline content, and we suggest that a co-regulatory approach may offer a route towards tackling many of these challenges.
“…The subreddit /r/TheRedPill was quarantined on 28 September 2018 following an escalation of its violent discourse. For Reddit, the quarantining of this community reveals pressure from the public, from legislative bodies and from advertisers to reduce the hatred fostered by the platform's free-speech rhetoric (Copland, 2020). For Simon Copland… [Footnote: The metaphor of unplugging is a reference to the film The Matrix.]”
Section: Dossiê (unclassified)
“…Broadly, the manosphere is divided between factions of conservative, or 'alpha', masculinities and nerd, or 'beta', masculinities (Ging, 2017), which strikes us as an imperfect mirror of the relationship between the alt-right and the alt-light, respectively. Moreover, this internal dynamic of masculinities seems well attuned to the discussions of Raewyn Connell (2013). There are also other groups that appear to constitute hybrid forms of the preceding masculinities, such as Men Going Their Own Way (MGTOW), which gave its name to a Reddit community of the same name quarantined in 2019 (Copland, 2020). There are also groupings theorising a new masculinity that would 'transcend' biology and that would have no place in the current hierarchy.…”
In this article, we focus on a multi-platform set of online communities that calls itself the "manosphere" and that relies on strategies of inverting oppressions related to gender, race, sexuality and class to construct its masculinities. Seeking to understand the transnational dynamics of the manosphere and the contextual crossroads of the Brazilian "machosfera", we discuss its articulations with the New Right, its overlap with the alt-right and its participation in the online culture wars, as well as the entanglements between practices of homosociality and anonymity-based platforms such as Reddit. We also introduce the "philosophy" of the Red Pill and its relation to masculinist (sub)cultural practices of harassment. In the final section of the paper, we point to observational keys that, in future research, may help us unravel the Brazilian machosfera, such as the fluidity of masculinities and Brazil's contextual position in the transnational platform landscape. Keywords: manosphere – machosfera – online culture wars – anonymized platforms – masculinities
“…Next we have Simon Copland's paper 'Reddit quarantined: can changing platform affordances reduce hateful material online?' (Copland, 2020), which offers an analytical reflection on the following question: how can a digital platform known as a bastion of free speech, one of the last giants to resist homogeneity (at the inherent price of having to "stomach" the occasional troll, in the words of Erik Martin, former Reddit CEO), respond to the increasing pressure to regulate abusive language and online behaviour? Reddit was imagined as a place for open and honest conversations; these days, however, the 'trolls' seem to be winning.…”
The internet is the digital reincarnation of a Greek agora or a Roman forum. It works as a "place" for public and private life. As such, it requires reliable, trustworthy rules to govern the daily routine of its visitors and users. The governance of the internet has undergone a significant (if not tectonic) change since its standardisation, a shift clearly reflected in the changing concept of trust as well. Historically, trust reflected the concerns of internet users regarding the intrusion of governments into the neutral functioning of this "place". Today, concerns regarding trust are equally present at the macro and micro levels, and trust in platforms and in the content made available through the internet is at the centre of current disputes. This editorial provides a selective introduction to the macro- and micro-level aspects of trust in the system and trust in the content, including content moderation, copyright law, fake news, game-making, hateful materials, leaking, social media and VPNs.