Increasingly, the decision-makers in private and public institutions are predictive algorithmic systems rather than humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.
Policymakers and researchers consistently call for greater human accountability for AI technologies. We should be clear about two distinct features of accountability.

Across the AI ethics and global policy landscape, there is consensus that there should be human accountability for AI technologies [1]. These machines are used for high-stakes decision-making in complex domains, for example in healthcare, criminal justice and transport, where they can cause or occasion serious harm. Some use deep machine learning models, which can make their outputs difficult to understand or contest. At the same time, when the datasets on which these models are trained reflect bias against specific demographic groups, the bias becomes encoded and causes disparate impacts [2–4]. Meanwhile, an increasing number of machines that embody AI, and specifically machine learning, such as highly automated vehicles, can execute decision-making functions and take actions independently of direct, real-time human control, in unpredictable conditions that call for adaptive performance. This development can make human agency seem obscure. Considering these problems, a heterogeneous group of researchers and organizations has called for stronger, more explicit regulation and guidelines to ensure accountability for AI and autonomous systems [1,5–7]. But what do we mean by 'accountability', and do we all mean the same thing? Accountability comes in different forms and varieties across rich and overlapping strands of academic literature in the humanities, law and social sciences. Scholars in the AI ethics field have recently proposed systematic conceptualizations of accountability to address this complexity [8–11].
Several researchers in the field [8,10] take explicit inspiration from Bovens's influential analysis of accountability as a social relation, in which he describes accountability as "a relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences" [12]. A welcome development within the AI ethics landscape would be greater conceptual clarity on the distinction between the 'explaining' and 'facing the consequences' features of accountability, as well as the relation between them. This matters ethically, legally and politically, because these two core features of accountability, giving an explanation and facing the consequences, can come apart and pull in different directions. We highlight them because, as the quotation illustrates, they represent a central bifurcation of the concept of accountability [12,13]. In addition, their relation is particularly complex when it comes to AI technologies.
The democratic boundary problem raises the question of who has democratic participation rights in a given polity and why. One possible solution to this problem is the all-affected principle (AAP), according to which a polity ought to enfranchise all persons whose interests are affected by the polity’s decisions in a morally significant way. While AAP offers a plausible principle of democratic enfranchisement, its supporters have so far not paid sufficient attention to economic participation rights. I argue that if one commits oneself to AAP, one must also commit oneself to the view that political participation rights are not necessarily the only, and not necessarily the best, way to protect morally weighty interests. I also argue that economic participation rights raise important worries about democratic accountability, which is why their exercise must be constrained by a number of moral duties.