Recent studies have examined racial disparities in stop-and-frisk, a widely employed but controversial policing tactic. The statistical evidence, however, has been limited and contradictory. We investigate by analyzing three million stops in New York City over five years, focusing on cases where officers suspected the stopped individual of criminal possession of a weapon (CPW). For each CPW stop, we estimate the ex ante probability that the detained suspect has a weapon. We find that in more than 40% of cases, the likelihood of finding a weapon (typically a knife) was less than 1%, raising concerns that the legal requirement of "reasonable suspicion" was often not met. We further find that blacks and Hispanics were disproportionately stopped in these low hit-rate contexts, a phenomenon that we trace to two factors: (1) lower thresholds for stopping individuals, regardless of race, in high-crime, predominantly minority areas, particularly public housing; and (2) lower thresholds for stopping minorities relative to similarly situated whites. Finally, we demonstrate that by conducting only the 6% of stops that are statistically most likely to result in weapons seizure, one can both recover the majority of weapons and mitigate racial disparities in who is stopped. We show that this statistically informed stopping strategy can be approximated by simple, easily implemented heuristics with little loss in efficiency.

Introduction. Over the last 10 years, New York City residents have been stopped and briefly detained by the police millions of times in an effort to get weapons, drugs and other contraband off the streets. Proponents of this stop-question-frisk policy (hereafter called "stop-and-frisk") argue that by strictly enforcing weapon and drug possession laws, one indirectly reduces more serious crime, such as murder and armed robbery, in line with the "broken windows" theory of policing [Wilson and Kelling (1982)].
Though it is difficult to rigorously assess this claim, wide adoption of stop-and-frisk by the New York City Police Department (NYPD) in the early 1990s did coincide with a period of substantial decline in crime in the city. Opponents of stop-and-frisk, however, argue that regardless of whether the policy is effective, it violates two constitutional protections. First, they claim individuals are stopped without legal basis, in violation of the Fourth Amendment. Indeed, in nearly 90% of cases, stopped suspects are
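The ranking strategy described in the abstract above can be illustrated with a small sketch: fit a probability model of weapon recovery, then conduct only the stops with the highest estimated hit probability. The data, features, and model below are entirely synthetic and illustrative; they are not the paper's data or its actual specification, and the model is fit and evaluated on the same sample purely for brevity.

```python
# Hypothetical sketch (not the paper's model or data): rank stops by an
# estimated probability of weapon recovery and keep only a small budget of
# the highest-scoring stops.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stop-level features (stand-ins for recorded stop circumstances).
n = 20_000
X = rng.normal(size=(n, 3))

# True log-odds chosen so that weapons are rare overall (a low base rate).
logits = -3.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1]
weapon = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

model = LogisticRegression().fit(X, weapon)
p_hat = model.predict_proba(X)[:, 1]

# Conduct only the 6% of stops with the highest estimated hit probability,
# mirroring the budget quoted in the abstract.
budget = int(0.06 * n)
top = np.argsort(-p_hat)[:budget]

base_rate = weapon.mean()                 # hit rate if all stops are made
top_rate = weapon[top].mean()             # hit rate within the top 6%
recovered = weapon[top].sum() / weapon.sum()  # share of all weapons found
print(f"base hit rate {base_rate:.3f}, top-6% hit rate {top_rate:.3f}, "
      f"share of weapons recovered {recovered:.2f}")
```

On this synthetic sample, the highest-scoring stops have a far higher hit rate than the overall population, which is the mechanism behind recovering most weapons with a small fraction of the stops.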
From doctors diagnosing patients to judges setting bail, experts often base their decisions on experience and intuition rather than on statistical models. While understandable, relying on intuition over models has often been found to result in inferior outcomes. Here we present a new method, select-regress-and-round, for constructing simple rules that perform well for complex decisions. These rules take the form of a weighted checklist, can be applied mentally, and nonetheless rival the performance of modern machine learning algorithms. Our method for creating these rules is itself simple, and can be carried out by practitioners with basic statistics knowledge. We demonstrate this technique with a detailed case study of judicial decisions to release or detain defendants while they await trial. In this application, as in many policy settings, the effects of proposed decision rules cannot be directly observed from historical data: if a rule recommends releasing a defendant that the judge in reality detained, we do not observe what would have happened under the proposed action. We address this key counterfactual estimation problem by drawing on tools from causal inference. We find that simple rules significantly outperform judges and are on par with decisions derived from random forests trained on all available features. Generalizing to 22 varied decision-making domains, we find this basic result replicates. We conclude with an analytical framework that helps explain why these simple decision rules perform as well as they do.
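The three steps named by the method can be sketched in a few lines. This is an illustrative toy on synthetic data, not the authors' code: it uses a linear outcome for simplicity (the pretrial application involves binary outcomes), selects features with the lasso, refits by ordinary least squares, and rounds the rescaled coefficients to small integers so the rule can be applied as a mental checklist.

```python
# A minimal sketch of a select-regress-and-round-style procedure
# (synthetic data; feature names and tuning values are assumptions).
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n, p = 5_000, 10
X = rng.normal(size=(n, p))
# Outcome driven by only three of the ten features, plus noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=n)

# Select: the lasso zeroes out the irrelevant features.
lasso = Lasso(alpha=0.1).fit(X, y)
keep = np.flatnonzero(lasso.coef_)

# Regress: ordinary least squares on the selected features only.
ols = LinearRegression().fit(X[:, keep], y)

# Round: rescale so the largest coefficient is 3, then round to integers.
w = ols.coef_
weights = np.round(3 * w / np.abs(w).max()).astype(int)
print(dict(zip(keep.tolist(), weights.tolist())))
# The resulting rule is a weighted checklist:
# score = sum of weight_j * feature_j over the kept features.
```

The final weights are small integers, so a practitioner can score a case by mental arithmetic while staying close to the full regression's ranking.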
Summary. Judges, doctors and managers are among those decision makers who must often choose a course of action under limited time, with limited knowledge and without the aid of a computer. Because data-driven methods typically outperform unaided judgements, resource-constrained practitioners can benefit from simple, statistically derived rules that can be applied mentally. In this work, we formalize long-standing observations about the efficacy of improper linear models to construct accurate yet easily applied rules. To test the performance of this approach, we conduct a large-scale evaluation in 22 domains and focus in detail on one: judicial decisions to release or detain defendants while they await trial. In these domains, we find that simple rules rival the accuracy of complex prediction models that base decisions on considerably more information. Further, comparing with unaided judicial decisions, we find that simple rules substantially outperform the human experts. To conclude, we present an analytical framework that sheds light on why simple rules perform as well as they do.