Abstract: Bots are estimated to account for well over half of all web traffic, yet they remain an understudied topic in HCI. In this paper we present the findings of an analysis of 2284 submissions across three discussion groups dedicated to the request, creation, and discussion of bots on Reddit. We set out to examine the qualities and functionalities of bots and the practical and social challenges surrounding their creation and use. Our findings highlight the prevalence of misunderstandings around the capabilities of b…
“…Reddit moderators are volunteer Reddit users who take on the responsibility of maintaining their communities by participating in a variety of tasks. These tasks include (1) coordinating with one another to determine policies and policy changes that guide moderation decisions, (2) checking submissions, threads, and content flagged by users for rule violations, (3) replying to user inquiries and complaints, (4) recruiting new moderators, (5) inviting high-profile individuals to conduct AMA (Ask Me Anything) sessions [80], (6) creating bots [70] or editing Automod rules (described below in this section) to help automate moderation tasks, and (7) improving the design of the subreddit using CSS tools. Moderators usually prefer to focus primarily on a few of these…”
Section: Study Context: Reddit Moderation
What one may say on the internet is increasingly controlled by a mix of automated programs and decisions made by paid and volunteer human moderators. On the popular social media site Reddit, moderators heavily rely on a configurable, automated program called "Automoderator" (or "Automod"). How do moderators use Automod? What advantages and challenges does the use of Automod present? We participated as Reddit moderators for over a year, and conducted interviews with 16 moderators to understand the use of Automod in the context of the sociotechnical system of Reddit. Our findings suggest a need for audit tools to help tune the performance of automated mechanisms, a repository for sharing tools, and improving the division of labor between human and machine decision making. We offer insights that are relevant to multiple stakeholders: creators of platforms, designers of automated regulation systems, scholars of platform governance, and content moderators.
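The Automod rules described above are written by moderators as YAML configuration snippets that Reddit's AutoModerator evaluates against new content. As a minimal sketch (the matched phrases and the removal reason below are invented for illustration, not taken from any study subreddit), a rule that removes comments containing blacklisted phrases might look like:

```yaml
# Hypothetical AutoModerator rule: remove comments that contain
# certain spam phrases and record a reason for the mod log.
# The phrases and reason text here are illustrative only.
type: comment
body (includes): ["buy followers", "free karma"]
action: remove
action_reason: "Matched spam phrase [{{match}}]"
```

Rules like this run automatically on every new submission or comment, which is why the paper emphasizes audit tools for tuning their precision.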
“…Each subreddit is regulated by a volunteer group of users called moderators who participate in a variety of tasks that include creating subreddit rules, removing content that violates rules, and responding to user inquiries and complaints. Moderators also use automated tools or bots that assist them in enacting a variety of moderation tasks [27,39]. Part of moderators' work includes configuring automated tools for moderation, and verifying whether these tools are operating as expected [27].…”
Section: Study Context: Reddit Moderation
When posts are removed on a social media platform, users may or may not receive an explanation. What kinds of explanations are provided? Do those explanations matter? Using a sample of 32 million Reddit posts, we characterize the removal explanations that are provided to Redditors, and link them to measures of subsequent user behaviors, including future post submissions and future post removals. Adopting a topic modeling approach, we show that removal explanations often provide information that educates users about the social norms of the community, thereby (theoretically) preparing them to become productive members. We build regression models that show evidence of removal explanations playing a role in future user activity. Most importantly, we show that offering explanations for content moderation reduces the odds of future post removals. Additionally, explanations provided by human moderators did not have a significant advantage over explanations provided by bots for reducing future post removals. We propose design solutions that can promote the efficient use of explanation mechanisms, reflecting on how automated moderation tools can contribute to this space. Overall, our findings suggest that removal explanations may be under-utilized in moderation practices, and it is potentially worthwhile for community managers to invest time and resources into providing them.
“…In line with previous adoptions of this approach for analysis [37,66,81], two authors independently and inductively coded all comments at the comment level for both semantic and latent meaning. These authors then iteratively refined this list of codes together into themes connecting across the comments, then shared these themes with the other authors for finalization.…”
AI image captioning challenges encourage broad participation in designing algorithms that automatically create captions for a variety of images and users. To create large datasets necessary for these challenges, researchers typically employ a shared crowdsourcing task design for image captioning. This paper discusses findings from our thematic analysis of 1,064 comments left by Amazon Mechanical Turk workers using this task design to create captions for images taken by people who are blind. Workers discussed difficulties in understanding how to complete this task, provided suggestions of how to improve the task, gave explanations or clarifications about their work, and described why they found this particular task rewarding or interesting. Our analysis provides insights both into this particular genre of task as well as broader considerations for how to employ crowdsourcing to generate large datasets for developing AI algorithms.