Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques have recently been proposed to achieve algorithmic fairness. Here we reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules. We consider racial disparities because they have been at the center of many recent debates in criminal justice, but the same logic applies across a range of possible attributes, including gender.
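To make the constrained-versus-unconstrained contrast concrete, here is a minimal sketch, assuming beta-distributed risk scores for two groups and a statistical-parity constraint (equal detention rates); the distributions, the cutoff of 0.5, and the choice of constraint are illustrative assumptions, not the paper's data or exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative risk-score distributions for two groups.
risk_a = rng.beta(2, 5, 100_000)  # group A: lower risk on average
risk_b = rng.beta(3, 4, 100_000)  # group B: higher risk on average

# Unconstrained optimum: one uniform threshold for everyone.
# Detain whenever estimated risk exceeds a single cost-based cutoff c.
c = 0.5
detain_a = risk_a >= c
detain_b = risk_b >= c
print(f"uniform threshold: detention rates "
      f"A={detain_a.mean():.3f}, B={detain_b.mean():.3f}")

# A parity constraint (here: equal detention rates) forces
# group-specific thresholds -- quantiles matched to a common rate.
target_rate = 0.5 * (detain_a.mean() + detain_b.mean())
thresh_a = np.quantile(risk_a, 1 - target_rate)
thresh_b = np.quantile(risk_b, 1 - target_rate)
print(f"group-specific thresholds: A={thresh_a:.3f}, B={thresh_b:.3f}")
```

Because the two groups' risk distributions differ, the constrained thresholds diverge: equally risky individuals in different groups are then treated differently, which is the tension the abstract describes.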
The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last several years, three formal definitions of fairness have gained prominence: (1) anti-classification, meaning that protected attributes, like race, gender, and their proxies, are not explicitly used to make decisions; (2) classification parity, meaning that common measures of predictive performance (e.g., false positive and false negative rates) are equal across groups defined by the protected attributes; and (3) calibration, meaning that conditional on risk estimates, outcomes are independent of protected attributes. Here we show that all three of these fairness definitions suffer from significant statistical limitations. Requiring anti-classification or classification parity can, perversely, harm the very groups they were designed to protect; and calibration, though generally desirable, provides little guarantee that decisions are equitable. In contrast to these formal fairness criteria, we argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce. Such a strategy, while not universally applicable, often aligns well with policy objectives; notably, this strategy will typically violate both anti-classification and classification parity. In practice, it requires significant effort to construct suitable risk estimates. One must carefully define and measure the targets of prediction to avoid retrenching biases in the data. But, importantly, one cannot generally address these difficulties by requiring that algorithms satisfy popular mathematical formalizations of fairness. By highlighting these challenges in the foundation of fair machine learning, we hope to help researchers and practitioners productively advance the area.
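The three criteria can be checked directly on data. The following is a minimal sketch on synthetic data, assuming outcomes are drawn from perfectly calibrated scores and the two groups have different risk distributions; it illustrates the abstract's point that calibration can hold while classification parity fails.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)            # protected attribute (0 or 1)
# Assumed: true risk differs by group; scores equal true risk.
risk = rng.beta(2 + group, 5 - group)
outcome = rng.random(n) < risk           # realized outcome, drawn from risk
decision = risk >= 0.5                   # single-threshold decision rule

# Classification parity: compare false positive rates across groups.
for g in (0, 1):
    neg = (group == g) & ~outcome        # true negatives in group g
    print(f"group {g}: FPR = {decision[neg].mean():.3f}")

# Calibration: within score bins, outcome rates match across groups.
bins = np.digitize(risk, np.linspace(0, 1, 11))
for b in range(3, 8):
    for g in (0, 1):
        mask = (bins == b) & (group == g)
        if mask.any():
            print(f"bin {b}, group {g}: "
                  f"outcome rate = {outcome[mask].mean():.3f}")
```

By construction the scores are calibrated, and the per-bin outcome rates confirm it; yet the false positive rates differ across groups simply because the groups' risk distributions differ.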
This paper describes a robot system for the automatic pruning of grape vines. A mobile platform straddles the row of vines, and it images them with trinocular stereo cameras as it moves. A computer vision system builds a three-dimensional (3D) model of the vines, an artificial intelligence (AI) system decides which canes to prune, and a six degree-of-freedom robot arm makes the required cuts. The system is demonstrated cutting vines in the vineyard. The main contributions of this paper are the computer vision system that builds 3D vine models, and the test of the complete integrated system. The vine models capture the structure of the plants so that the AI system can decide where to prune, and they are accurate enough that the robot arm can reach the required cuts. Vine models are reconstructed by matching features between images, triangulating feature matches to give a 3D model, then optimizing the model and the robot's trajectory jointly (incremental bundle adjustment). Trajectories are estimated online at 0.25 m/s, and they have errors below 1% when modeling a 96 m row of 59 vines. Pruning each vine requires the robot arm to cut an average of 8.4 canes. A collision-free trajectory for the arm is planned in 1.5 s per vine with a rapidly exploring random tree motion planner. The total time to prune one vine is 2 min in field trials, which is similar to human pruners, and it could be greatly reduced with a faster arm. Trials also show that the long chain of interdependent components limits reliability. A commercially feasible pruning robot should stop and prune each vine in turn.
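For readers unfamiliar with the motion planner named above, here is a toy 2D rapidly exploring random tree (RRT) sketch; the paper plans for a 6-DOF arm with real collision geometry, whereas this assumed point-robot version with circular obstacles only illustrates the basic algorithm.

```python
import math, random

random.seed(0)
OBSTACLES = [(5.0, 5.0, 1.5)]        # assumed circular obstacles (x, y, r)
START, GOAL = (1.0, 1.0), (9.0, 9.0)
STEP, GOAL_TOL = 0.5, 0.5

def collision_free(p):
    # Toy check: tests only the sampled point, not the connecting edge.
    return all(math.dist(p, (ox, oy)) > r for ox, oy, r in OBSTACLES)

def rrt(max_iters=5000):
    nodes, parent = [START], {0: None}
    for _ in range(max_iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Extend the tree a fixed step toward the sample.
        new = (near[0] + STEP * (sample[0] - near[0]) / d,
               near[1] + STEP * (sample[1] - near[1]) / d)
        if not collision_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, GOAL) < GOAL_TOL:   # reached the goal region
            path, k = [], len(nodes) - 1
            while k is not None:              # backtrack to the start
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

print(rrt())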
Outcome tests are a popular method for detecting bias in lending, hiring, and policing decisions. These tests operate by comparing the success rate of decisions across groups. For example, if loans made to minority applicants are observed to be repaid more often than loans made to whites, it suggests that only exceptionally qualified minorities are granted loans, indicating discrimination. Outcome tests, however, are known to suffer from the problem of infra-marginality: even absent discrimination, the repayment rates for minority and white loan recipients might differ if the two groups have different risk distributions. Thus, at least in theory, outcome tests can fail to accurately detect discrimination. We develop a new statistical test of discrimination, the threshold test, that mitigates the problem of infra-marginality by jointly estimating decision thresholds and risk distributions. Applying our test to a dataset of 4.5 million police stops in North Carolina, we find that the problem of infra-marginality is more than a theoretical possibility, and can cause the outcome test to yield misleading results in practice.
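Infra-marginality is easy to reproduce in simulation. Below is a minimal sketch, assuming beta-distributed repayment probabilities (the distributions and the 0.6 threshold are illustrative assumptions, not the paper's estimates): both groups face the identical lending threshold, so there is no discrimination by construction, yet repayment rates among approved applicants still differ, so the outcome test misfires.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed repayment-probability distributions for two applicant groups.
p_repay_white = rng.beta(5, 2, 100_000)
p_repay_minority = rng.beta(3, 2, 100_000)

THRESHOLD = 0.6   # identical threshold: no discrimination by construction

for name, p in [("white", p_repay_white), ("minority", p_repay_minority)]:
    approved = p[p >= THRESHOLD]
    repaid = rng.random(approved.size) < approved  # simulated repayment
    print(f"{name}: approval rate = {(p >= THRESHOLD).mean():.3f}, "
          f"repayment rate among approved = {repaid.mean():.3f}")
```

The gap in repayment rates arises purely from the groups' different risk distributions above the common threshold, which is exactly the failure mode the threshold test is designed to correct for.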
Vine pruning is an important part of vineyard management, and pruning is the most expensive vineyard task that has not yet been automated. Every year, most new canes must be removed from the vine, and the choice of canes to retain impacts vine yield. To automate the process of vine pruning, a vine pruning robot must decide which canes to remove and which to keep, based on a 3D topological model of the structure of the vine. In this paper we present an Artificial Intelligence (AI) system for making these decisions, developed and evaluated using simulated vines. A viticulture expert evaluated our approach by comparing it to pruning decisions made by a pruner with a skill level typical of human pruners. Our system successfully pruned 30% of vines better than the human and 89% at least as well. These results demonstrate that the vine pruning problem is solvable using current computing technologies, and that automating the pruning process has the potential to improve vine quality and yield.