A key goal of the fair-ML community is to develop machine-learning based systems that, once introduced into a social context, can achieve social and legal outcomes such as fairness, justice, and due process. Bedrock concepts in computer science, such as abstraction and modular design, are used to define notions of fairness and discrimination, to produce fairness-aware learning algorithms, and to intervene at different stages of a decision-making pipeline to produce "fair" outcomes. In this paper, however, we contend that these concepts render technical interventions ineffective, inaccurate, and sometimes dangerously misguided when they enter the societal context that surrounds decision-making systems. We outline this mismatch with five "traps" that fair-ML work can fall into even as it attempts to be more context-aware than traditional data science. We draw on studies of sociotechnical systems in Science and Technology Studies to explain why such traps occur and how to avoid them. Finally, we suggest ways in which technical designers can mitigate the traps by refocusing design on process rather than solutions, and by drawing abstraction boundaries that include social actors rather than purely technical ones.

CCS CONCEPTS: • Applied computing → Law, social and behavioral sciences; • Computing methodologies → Machine learning;
Based on more than two years of ethnographic immersion with the Mars Exploration Rover mission, this paper examines the representational work and associated embodied practices through which the science and engineering team makes decisions about how and where to move its robots. Building on prior work in Science and Technology Studies on the importance of embodiment to visualization, the paper posits that such practices also contribute to the production and maintenance of social order within the organizational context of the laboratory. It thus places visualization technologies and techniques in the context of the social organization of scientific work, contributing to our understanding of representation in scientific practice.