Our daily digital lives are full of algorithmically selected content such as social media feeds, recommendations, and personalized search results. These algorithms have great power to shape users' experiences, yet users are often unaware of their presence. Whether it is useful to give users insight into these algorithms' existence or functionality, and how such insight might affect their experience, are open questions. To address them, we conducted a user study with 40 Facebook users to examine their perceptions of the Facebook News Feed curation algorithm. Surprisingly, more than half of the participants (62.5%) were not aware of the News Feed curation algorithm's existence at all. Initial reactions among these previously unaware participants were surprise and anger. We developed a system, FeedVis, to reveal to users the difference between the algorithmically curated and an unadulterated News Feed, and used it to study how users perceive this difference. Participants were most upset when close friends and family were not shown in their feeds. We also found that participants often attributed missing stories to their friends' decisions to exclude them rather than to the News Feed algorithm. By the end of the study, however, participants were mostly satisfied with the content on their feeds. Following up with participants two to six months after the study, we found that for most, satisfaction levels remained similar before and after becoming aware of the algorithm's presence; however, algorithmic awareness led to more active engagement with Facebook and bolstered overall feelings of control on the site.
Algorithms exert great power in curating online information, yet they are often opaque in their operation and even in their existence. Since opaque algorithms sometimes make biased or deceptive decisions, many have called for increased transparency. However, little is known about how users perceive and interact with potentially biased and deceptive opaque algorithms. What factors are associated with these perceptions, and how does adding transparency to algorithmic systems change user attitudes? To address these questions, we conducted two studies: 1) an analysis of 242 users' online discussions about the Yelp review filtering algorithm, and 2) an interview study with 15 Yelp users in which we disclosed the algorithm's existence via a tool. We found that users question or defend this algorithm and its opacity depending on their engagement with, and personal gain from, the algorithm. We also found that adding transparency to the algorithm changed users' attitudes toward it: users reported their intention either to write for the algorithm in future reviews or to leave the platform.
Awareness of bias in algorithms is growing among scholars and users of algorithmic systems. But what can we observe about how users discover and behave around such biases? Using a cross-platform audit technique that analyzed online ratings of 803 hotels across three hotel rating platforms, we found that one site's algorithmic rating system biased ratings significantly higher than the others (by up to 37%), particularly for low-to-medium quality hotels. Analyzing the reviews of 162 users who independently discovered this bias, we sought to understand whether, how, and in what ways users perceive and manage it. Users changed the typical ways they used reviews on a hotel rating platform to instead discuss the rating system itself and raise other users' awareness of the rating bias. This awareness-raising included practices such as efforts to reverse-engineer the rating algorithm, efforts to correct the bias, and demonstrations of broken trust. We conclude with a discussion of how such behavior patterns might inform design approaches that anticipate unexpected bias and provide reliable means for meaningful bias discovery and response.
Content moderation systems for social media have had numerous issues of bias, in terms of race, gender, and ability, among many others. One proposal for addressing such issues in automated decision making is designing for contestability, whereby users can shape and influence how decisions are made. In this study, we conduct a series of participatory design workshops with participants from communities that have experienced problems with social media content moderation in the past. Together with participants, we explore the idea of designing for contestability in content moderation and find that users' designs suggest three fruitful, practical avenues: adding representation, improving communication, and designing with compassion. We conclude with design recommendations drawn from participants' proposals and reflect on the challenges that remain.