Previous studies have focused on the biases and feedback loops that arise in predictive policing algorithms. These studies show how systemically and institutionally biased data leads to such feedback loops when predictive policing algorithms are deployed in practice. We take a step back and show that the choice of algorithm itself embeds a specific criminological theory, and that the choice of model alone, even without biased data, can create biased feedback loops. By synthesizing "historical" data, in which we control the relationships between crime, location, and time, we show that current predictive policing algorithms create biased feedback loops even on completely random data. We then review the process of creating and deploying these predictive systems, and highlight where good practices, such as fitting a model to data, "go bad" within the context of larger system development and deployment. Using best practices from previous work on assessing and mitigating the impact of new technologies, we identify where the design of these algorithms breaks down. We also find that multidisciplinary analysis of such systems is vital for uncovering these issues, and we argue that any study of equitable AI should involve a systematic and holistic analysis of these systems' design rationales.

CCS Concepts: • Applied computing → Law, social and behavioral sciences; • Software and its engineering → Designing software; • Theory of computation → Models of computation.
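
To make the feedback-loop claim concrete, the following is a minimal illustrative sketch (our own, not the paper's actual simulation; all parameters and names are hypothetical) of how a top-k hotspot predictor, trained only on crimes recorded where patrols were sent, concentrates records in a handful of locations even when the true crime rate is uniform:

import numpy as np

# Illustrative feedback-loop simulation: ground-truth crime is uniform
# across locations, but patrols are dispatched to the locations with the
# most *recorded* incidents, and only patrolled locations record crime.
rng = np.random.default_rng(0)
n_locations = 20
n_patrols = 5                    # patrols dispatched per day
true_rate = 0.5                  # identical true crime rate everywhere
recorded = np.ones(n_locations)  # uniform prior counts

for day in range(1000):
    # "Predict" hotspots from historical records: take the top-k counts,
    # with tiny random noise so ties under the uniform prior break randomly.
    noise = rng.random(n_locations) * 1e-6
    hotspots = np.argsort(recorded + noise)[-n_patrols:]
    # Crime occurs uniformly at random at every location...
    crimes = rng.random(n_locations) < true_rate
    # ...but only patrolled locations generate new records,
    # which feed back into the next day's "prediction".
    recorded[hotspots] += crimes[hotspots]

share = recorded.max() / recorded.sum()
print(f"Top location holds {share:.0%} of all records; "
      f"a uniform allocation would give each location 5%.")

Because recorded counts both drive and are driven by patrol placement, the loop locks onto whichever locations happened to accumulate early records, producing apparent "hotspots" from data that is, by construction, unbiased and random.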