Over the last few years, legal scholars, policy-makers, activists and others have generated a vast and rapidly expanding literature concerning the ethical ramifications of using artificial intelligence, machine learning, big data and predictive software in criminal justice contexts. These concerns can be clustered under the headings of fairness, accountability and transparency. First, can we trust technology to be fair, especially given that the data on which the technology is based are biased in various ways? Second, whom can we blame if the technology goes wrong, as it inevitably will on occasion? Finally, does it matter if we do not know how an algorithm works or, relatedly, cannot understand how it reached its decision? I argue that, while these are serious concerns, they are not irresolvable. More importantly, the very same concerns of fairness, accountability and transparency apply, with even greater urgency, to existing modes of decision-making in criminal justice. The question is therefore comparative: can algorithmic modes of decision-making improve upon the status quo in criminal justice? There is unlikely to be a categorical answer to this question, although there are some reasons for cautious optimism.

1 Silvestri and Crowther-Dowey (2008) note that '[t]he overriding consensus within criminology remains that while women do commit a broad range of offences, they commit less crime than men and are less dangerous and violent than their male counterparts' (p. 25).