Responsibility for opinions expressed in signed articles rests solely with their authors, and publication does not constitute an endorsement by the ILO. This article is also available in French, in Revue internationale du Travail 161 (3), and Spanish, in Revista Internacional del Trabajo 141 (3).
Critical research into the gig economy frequently relies on platform interfaces (platforms' mobile applications or websites) as intermediaries for contacting and recruiting participants. Yet these methods carry significant ethical implications that are rarely considered. In this article, we examine the organisational features of platform interfaces as research instruments and explore the ways in which, through their intensive knowledge of their users, platforms present additional challenges to researchers' ability to (a) conduct independent research, for example by influencing the participant recruitment process, and (b) establish and maintain respondent anonymity and researcher transparency. Our analysis is based on an international study of platform workers that investigates working conditions and fairness in the gig economy, covering both geographically tethered gig work and cloudwork. We argue that the ethical boundaries of doing research through platform interfaces are shaped not only by researchers but also by the platforms whose interfaces researchers use. Establishing and protecting the anonymity of research participants provides an acute example: platforms have the potential to scrutinise researchers' activities on their interfaces and to capture information shared between researchers and participants. The question of anonymity also arises in the reverse direction, when platforms share workers' personal information at a level of detail not requested by researchers. Building on this argument, we propose a set of suggestions for promoting ethical research in the study of gig economy platforms.
Calls for “ethical Artificial Intelligence” are legion, with a recent proliferation of government and industry guidelines attempting to establish ethical rules and boundaries for this new technology. With few exceptions, they interpret Artificial Intelligence (AI) ethics narrowly, within a liberal political framework of privacy concerns, transparency, governance and non-discrimination. One of the main hurdles to establishing “ethical AI” remains how to operationalize high-level principles so that they translate into technology design, development and use in the labor process. Organizations can end up interpreting ethics in an ad hoc way with no oversight, treating ethics as simply another technological problem with technological solutions, while regulation has remained largely detached from the issues AI presents for workers. There is a distinct lack of supra-national standards for fair, decent or just AI in contexts where people depend on and work in tandem with it. Topics such as discrimination and bias in job allocation, surveillance and control in the labor process, and the quantification of work have received significant attention, yet questions about AI's effects on job quality and working conditions have not. This has left workers exposed to the potential risks and harms of AI. In this paper, we provide a critique of the relevant academic literature and policies on AI ethics. We then identify a set of principles that could facilitate fairer working conditions with AI. As part of a broader research initiative with the Global Partnership on Artificial Intelligence, we propose a set of accountability mechanisms to ensure that AI systems foster fairer working conditions. These processes aim to reshape the social impact of technology from the point of inception and to set a research agenda for the future. As such, the paper's key contribution is to show how abstract ethical principles can be bridged to operationalizable processes in the vast field of AI and new technology at work.