Amazon Mechanical Turk (AMT) is an online labor market that defines itself as “a marketplace for work that requires human intelligence.” Early advocates and developers of crowdsourcing platforms argued that crowdsourcing tasks are designed so that people of any skill level can perform this labor online. However, as crowdsourcing work has grown in popularity, the crowdsourcing literature has identified a peculiar issue: worker output quality is not responsive to changes in price. That is, contrary to what economic theory would predict, paying crowdworkers higher wages does not lead to higher-quality work. This has led some to conclude that platforms like AMT attract poor-quality workers. This article examines market dynamics that might, unwittingly, contribute to the market inefficiencies that generate poor work quality. We argue that the cultural logics and socioeconomic values embedded in AMT's platform design grant greater market power to requesters (those posting tasks) than to the individuals doing tasks for pay (crowdworkers). We attribute this uneven distribution of market power to labor market frictions, primarily characterized by uncompetitive wage posting and incomplete information. Finally, we make recommendations for how to tackle these frictions when designing an online labor market.
As machine learning and data science applications grow ever more prevalent, there is an increased focus on data sharing and open data initiatives, particularly in the context of the African continent. Many argue that data sharing can support research and policy design to alleviate poverty, inequality, and their derivative effects in Africa. Despite the fact that the datasets in question are often extracted from African communities, conversations around the challenges of accessing and sharing African data are too often driven by non-African stakeholders. These perspectives frequently employ deficit narratives, often focusing on a lack of education, training, and technological resources on the continent as the leading causes of friction in the data ecosystem. We argue that these narratives obfuscate and distort the full complexity of the African data sharing landscape. In particular, we use storytelling via fictional personas built from a series of interviews with African data experts to complicate dominant narratives and to provide counternarratives. Coupling these personas with research on data practices within the continent, we identify recurring barriers to data sharing as well as inequities in the distribution of data sharing benefits. In particular, we discuss issues arising from power imbalances rooted in the legacies of colonialism, ethno-centrism, and slavery; disinvestment in building trust; lack of acknowledgement of historical and present-day extractive practices; and Western-centric policies that are ill-suited to the African context. After outlining these problems, we discuss avenues for addressing them when sharing data generated on the continent.
CCS CONCEPTS: • Computing methodologies → Artificial intelligence; • Social and professional topics → Government technology policy.
The Negro Motorist Green Book was a tool used by the Black community to navigate systemic racism throughout the U.S. and around the world. Whether providing its users with safer roads to take or businesses that were welcoming to Black patrons, The Negro Motorist Green Book fostered pride and created a physical network of safe spaces within the Black community. Building a bridge between this artifact, which served Black people for thirty years, and the current moment, we explore Black Twitter as an online space where the Black community navigates identity, activism, racism, and more. Through interviews with people who engage with Black Twitter, we surface the benefits (such as community building, empowerment, and activism) and challenges (such as dealing with racism, appropriation, and outsiders) on the platform, juxtaposing the Green Book as a historical artifact with Black Twitter as its contemporary counterpart. Equipped with these insights, we make suggestions including audience segmentation, privacy controls, and incorporating historically disenfranchised perspectives into the technological design process. These proposals have implications for the design of technologies that would serve Black communities by amplifying Black voices and bolstering work toward justice.
CCS Concepts: • Human-centered computing → Empirical studies in collaborative and social computing; Computer supported cooperative work.
Algorithmic systems help manage the governance of digital platforms featuring user-generated content, including how money is distributed to creators from the profits a platform earns from advertising on that content. However, creators producing content about disadvantaged populations have reported that these systems are biased, associating their content with prohibited or unsafe material and leading to what creators believed were error-prone decisions to demonetize their videos. Motivated by these reports, we present the results of 20 interviews with YouTube creators and a content analysis of videos, tweets, and news coverage of demonetization cases to understand YouTubers' perceptions of demonetization affecting videos featuring disadvantaged or vulnerable populations, creators' responses to demonetization, and the kinds of tools and infrastructure support they desired. We found that creators were concerned about YouTube's algorithmic system stereotyping content featuring vulnerable demographics in harmful ways, for example by labeling it "unsafe" for children or families; creators believed these demonetization errors led to a range of economic, social, and personal harms. To provide more context for these findings, we analyzed and report on the technique a few creators used to audit YouTube's algorithms to learn what could cause the demonetization of videos featuring LGBTQ people, culture, and/or social issues. In response to the varying beliefs about the causes and harms of demonetization errors, we found our interviewees wanted more reliable information and statistics about demonetization cases and errors, more control over their content and advertising, and better economic security.