Modern, algorithmic risk tools are trained in the sense that they inductively seek associations between predictors (e.g., prior record) and an outcome of interest (e.g., a rearrest). There is no model.3 The associations are used to construct a measure of risk, which can be represented as a numerical score, a probability of an outcome, or a particular outcome class. Because criminal justice decisions are typically categorical (e.g., release on parole or not), machinery is often built into the algorithm to translate a numerical score or probability into an appropriate categorical outcome. If not, some less formal means is required to generate an outcome class to serve as the forecast. A probability of rearrest, for instance, is not actionable until it is high enough to warrant assignment to the outcome class of, say, "recidivist."

Once training is completed, the risk algorithm can be used to forecast outcomes for new cases whose outcomes are unknown. Data for such forecasting must be collected and properly managed, just as for the training data. The process can be demanding because the forecasting data must contain the same predictors as the training data, realized in the same fashion. Instructive forecasts might well be obtained, for example, when the training data and the forecasting data are properly realized from the same jurisdiction, in roughly the same time period, and for the same criminal justice setting (e.g., parole decisions). And just as for the training data, there will usually be important data quality concerns.

The forecasting data are provided to the trained algorithm, which in turn produces a forecast for each case. Ideally, these are forecasted outcome classes such as a post-release arrest for a violent crime, a post-release arrest for a nonviolent crime, or no post-release arrest whatsoever. Often, the reliability of those forecasts can also be determined (Berk, 2018b).
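The translation step described above, from an estimated probability to an actionable outcome class, can be sketched in a few lines. This is an illustrative example only, not any particular deployed tool: the 0.5 threshold, the class labels, and the hypothetical probabilities are all assumptions; in practice the threshold would be set to reflect the relative costs of forecasting errors.

```python
# Illustrative sketch: translating an algorithm's estimated rearrest
# probabilities into categorical forecasts. The threshold of 0.5 is an
# assumption for illustration; a deployed tool would choose it to
# reflect the relative costs of false positives and false negatives.

def to_outcome_class(prob_rearrest, threshold=0.5):
    """Map an estimated rearrest probability to an outcome class."""
    return "recidivist" if prob_rearrest >= threshold else "non-recidivist"

# Forecasts for three hypothetical parole cases.
probs = [0.82, 0.35, 0.51]
forecasts = [to_outcome_class(p) for p in probs]
# forecasts == ["recidivist", "non-recidivist", "recidivist"]
```

Raising or lowering the threshold changes which cases land in the "recidivist" class, which is why the choice is a policy decision rather than a purely statistical one.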
With the forecasts and (preferably) their reliabilities in hand, decisions can be made and actions can be taken. The recent risk literature makes clear that algorithmic, machine learning methods can demonstrably outperform subjective or model-based methods (Berk, 2020; Berk & Bleich, 2014; Berk, Sorenson, & Barnes, 2016). It has long been known that even simple statistical methods produce better accuracy than subjective approaches (Dawes, Faust, & Meehl, 1989; Meehl, 1954), and as a mathematical matter, machine learning algorithms can adaptively find far more complex relationships in the data, should they exist, than conventional models such as logistic regression can (Hastie, Tibshirani, & Friedman, 2009). When the relationships between predictors and an outcome are not complex, algorithmic methods properly applied will perform no worse than conventional models.