This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society’s most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench “bias,” are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI’s long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
This article examines universities’ adoption of WellTrack—a self-tracking mobile phone application modeled on cognitive behavioral therapy techniques—as a solution to reported increases in the prevalence and severity of student mental health conditions. Drawing from a feminist materialist perspective that understands discourse and the material world as co-produced, this article argues that the app’s design, marketing, and reception are deeply intertwined with the political rationality of neoliberalism, which centers self-responsibility and the market economy. The article first situates the development of the WellTrack app within larger institutional and political shifts towards digital education governance. It then contextualizes the app’s use within the history of college mental health services in the United States, revealing how longstanding socio-medical discourses concerning student wellness individualize structural and systemic factors of ill health. Through close reading and immanent critique of the marketing and media reception of the WellTrack app’s introduction into universities, the article provides an account of how universities are socializing students to relinquish data to private firms in exchange for health services. Students are encouraged to engage in constant self-examination and to strive towards a vision of student wellness that precludes an analysis of the structural conditions and intersecting oppressions contributing to poor student mental health. These conditions are social and pervasive, requiring institutional analysis and critique of the university itself.
This article examines university researchers’ capture of student images on US college campuses for training facial recognition technology, and situates this project within universities’ broader historical alignment with militarism and racial injustice. It argues that feminist STS ethics provides a framework for not only challenging the ways that university research inquiry actively contributes to oppressive power structures, but also for reimagining university research ethics for a greater engagement with questions of justice. The article identifies the limitations of dominant institutional ethics and privacy rights discourses for centering justice considerations, and instead outlines an intersectional feminist approach to university research ethics that reimagines the relationship between research processes, power, and social impacts.
Critical studies on logistics and supply chain management often focus on the transformations in the organization of labor that result from an emphasis on the circulation of commodities. However, target marketing and practices of leisure-time surveillance are not generally framed as part of this shift in capital’s emphasis on circulation. If part of logistical management is about the displacement of labor to the underdeveloped world, it is equally about monitoring circulation and demand in the overdeveloped world. This paper argues that situating target marketing as a technology of logistical management emphasizes the importance of information not only in intensifying and maximizing the productivity of supply chains and reducing labor costs but also in increasing the likelihood of a return on capitalist investment through the management of market choices. The paper begins by framing target marketing as part of the historical trajectory of the revolution in control described by James R. Beniger. I then demonstrate how target marketing provides valuable point-of-sale and point-of-interaction insights, which platform providers can wield to control prices and allocate advertisements as well as to manage distribution and arbitrage the labor market. Rather than conceptualizing the production of user data as a form of labor in the context of target marketing, I argue that the labor theory of value is untenable for understanding the conditions of leisure-time surveillance and data aggregation. The category of labor is useful in that it highlights the exploitation of user data, but it tends to collapse distinctions between the workday and leisure time in ways that mystify the differences in how capitalism exercises control over subjects. I then provide a close reading of an exposé of an Amazon-affiliated fulfillment center in order to examine precisely how the information produced during leisure-time surveillance intensifies the exploitation of fulfillment center labor.
Target marketing is part of a larger apparatus that aggregates data for the purposes of assigning risk, differentiating prices, and managing supply chains and labor costs. It equally reinforces biases and discriminatory practices prevalent in financial institutions in order to maximize profit through the aggregation of data produced by users during seemingly innocuous acts of consumption and online attention.