The proliferation of harmful and offensive content is a problem that many online platforms face today. One of the most common approaches to moderating offensive content is to identify and remove it after it has been posted, increasingly with the assistance of machine learning algorithms. More recently, platforms have begun employing moderation approaches that intervene before offensive content is posted. In this paper, we conduct an online randomized controlled experiment on Twitter to evaluate a new intervention that encourages participants to reconsider their offensive content and, ultimately, seeks to reduce the amount of offensive content on the platform. The intervention prompts users who are about to post harmful content with an opportunity to pause and reconsider their Tweet. We find that users in our treatment group, who received this prompt, posted 6% fewer offensive Tweets than non-prompted users in our control group. This decrease in the creation of offensive content is attributable not only to the deletion and revision of prompted Tweets: we also observed a decrease both in the number of offensive Tweets that prompted users created in the future and in the number of offensive replies to prompted Tweets. We conclude that interventions allowing users to reconsider their comments can be an effective mechanism for reducing offensive content online.
Digital interventions for prosocial behavior, that is, modifications to the design of a platform's architecture or rules that foster prosocial interactions online, are increasingly researched by social scientists. However, academic insights remain largely underused by practitioners, reducing their impact on real-world applications. In the present paper, we propose a conceptual framework for digital interventions. We classify them into three categories, proactive, interactive, and reactive, based on the timing of the intervention relative to the behavior being moderated. For each category, we present digital, scalable, automated, and scientifically tested interventions as directly applicable examples. The present framework may make existing scientific findings more accessible and, ultimately, more relevant to practitioners and the digital community, and can provide researchers with starting points for further scientific exploration.
This study tests whether the architecture of a social media platform can encourage conversations among users to be more civil. It was conducted in collaboration with Nextdoor, a networking platform for neighbors within a defined geographic area. The study involved (1) a prompt encouraging users to move popular posts from the neighborhood-wide feed to new groups dedicated to the topic and (2) an experiment that randomized the announcement of community guidelines to members joining those newly formed groups. We examined the impact of each intervention on the level of civility, the moral values reflected in user comments, and users' submitted reports of inappropriate content. In a large quantitative analysis of comments posted to Nextdoor, the results indicate that platform architecture can shape the civility of conversations. Comments within groups were more civil and less frequently reported to Nextdoor moderators than comments on neighborhood-wide posts. In addition, comments in groups where new members were shown guidelines were less likely to be reported to moderators and were expressed in a more morally virtuous tone than comments in groups where new members were not presented with guidelines. This research demonstrates the importance of considering the design, structure, and affordances of the online environment when online platforms seek to promote civility and other prosocial behaviors.
Online platforms are increasingly being held to account for the content that their users post. Content regulation has long been a secondary concern for platforms, but as they have more recently focused on content governance, they have typically drawn their regulatory model from offline legal frameworks built around sanctioning and punishing rule violators. This study takes an alternative approach, also drawn from legal scholarship, based on motivating voluntary rule-following by emphasizing the fairness of platform rules and the justice of the processes used to communicate content moderation decisions. Using a survey (n=10,487) sent to rule violators on Twitter, paired with an analysis of participants' platform behaviors, this study examines the relationship between people's judgments of the procedural justice of an enforcement action and their likelihood of reoffending in the future. We find that those who felt more fairly treated during their enforcement were less likely to recidivate (beta = -.05, p < .001). This, along with the study's other findings, indicates an opportunity for platforms to put a stronger focus on people's experience with enforcement systems as a potential pathway for reducing recidivism.