Abstract: Artificial Intelligence-as-a-Service (AIaaS) empowers individuals and organisations to access AI on-demand, in either tailored or ‘off-the-shelf’ forms. However, institutional separation between development, training, and deployment can lead to critical opacities, such as obscuring the level of human effort necessary to produce and train AI services. Information about how, where, and for whom AI services have been produced is a valuable secret, which vendors strategically disclose to clients depending on commer…
“…Sociologists of work, occupations, and organizations have examined how new algorithmic systems reconfigure human labor, in ways that are often detrimental to workers (Bailey et al., 2020; Griesbach et al., 2019; Kellogg et al., 2020; Newlands, 2021; Shestakofsky, 2017). Utopian and dystopian predictions of robots taking and transforming human jobs have been the subject of discourse analysis (see James & Whelan, 2021; Ossewaarde & Gulenc, 2020; Vicsek, 2020), but sociological scholarship has been skeptical or critical of these claims, and more attentive to questions of power relations (Boyd & Holton, 2018).…”
Section: Social Inequality and Technology: The View From Sociology
Artificial intelligence (AI) and algorithmic systems have been criticized for perpetuating bias, unjust discrimination, and contributing to inequality. Artificial intelligence researchers have remained largely oblivious to existing scholarship on social inequality, but a growing number of sociologists are now addressing the social transformations brought about by AI. Where bias is typically presented as an undesirable characteristic that can be removed from AI systems, engaging with social inequality scholarship leads us to consider how these technologies reproduce existing hierarchies and the positive visions we can work towards. I argue that sociologists can help assert agency over new technologies through three kinds of actions: (1) critique and the politics of refusal; (2) fighting inequality through technology; and (3) governance of algorithms. As we become increasingly dependent on AI and automated systems, the dangers of further entrenching or amplifying social inequalities have been well documented, particularly with the growing adoption of these systems by government agencies. However, public policy also presents some opportunities to restructure social dynamics in a positive direction, as long as we can articulate what we are trying to achieve, and are aware of the risks and…
“…AI verification involves the evaluation of algorithmic outputs. Finally, AI impersonation, often seen in the corporate and AI-as-a-service sector [59], refers to the non-disclosed "'human-in-the-loop' principle that makes workers hardly distinguishable from algorithms" [81].…”
Section: Different Tasks, Different Instructions
Machine learning (ML) depends on data to train and verify models. Very often, organizations outsource processes related to data work (i.e., generating and annotating data and evaluating outputs) through business process outsourcing (BPO) companies and crowdsourcing platforms. This paper investigates outsourced ML data work in Latin America by studying three platforms in Venezuela and a BPO in Argentina. We lean on the Foucauldian notion of dispositif to define the data-production dispositif as an ensemble of discourses, actions, and objects strategically disposed to (re)produce power/knowledge relations in data and labor. Our dispositif analysis comprises the examination of 210 data work instruction documents, 55 interviews with data workers, managers, and requesters, and participant observation. Our findings show that discourses encoded in instructions reproduce and normalize the worldviews of requesters. Precarious working conditions and economic dependency alienate workers, making them obedient to instructions. Furthermore, discourses and social contexts materialize in artifacts, such as interfaces and performance metrics, limiting workers' agency and normalizing specific ways of interpreting data. We conclude by stressing the importance of counteracting the data-production dispositif by fighting alienation and precarization, and empowering data workers to become assets in the quest for high-quality data.
“…But in general, the presence of humans remains backstage, so as not to diminish the appeal of automation on which technology companies base their marketing. According to Newlands (2021), the curtain on these human contributions is only lifted when vendors must co-opt unpaid labour from users, for example to train localised chatbot services, and need to elicit their cooperation.…”
Section: Who Learns In Machine Learning? The Role Of Humans
Today's artificial intelligence, largely based on data-intensive machine learning algorithms, relies heavily on the digital labour of invisibilized and precarized humans-in-the-loop who perform multiple functions of data preparation, verification of results, and even impersonation when algorithms fail. Using original quantitative and qualitative data, the present article shows that these workers are highly educated, engage significant (sometimes advanced) skills in their activity, and earnestly learn alongside machines. However, the loop is one in which human workers are at a disadvantage as they experience systematic misrecognition of the value of their competencies and of their contributions to technology, the economy, and ultimately society. This situation hinders negotiations with companies, shifts power away from workers, and challenges the traditional balancing role of the salary institution.