Predicting criminal recidivism using neural networks
Published: 2000
DOI: 10.1016/s0038-0121(00)00003-3

Cited by 31 publications (20 citation statements)
References 27 publications
“…These stated sample and measurement-related strengths made it possible to avoid pitfalls to which many of the previous studies have fallen prey and, in turn, provide a stronger test of the incremental improvement of prediction methods. Specifically, we were not faced with the need to collapse categories of offenses or make statistical adjustments to further increase the base rate of offending as done in previous comparisons (Brodzinski et al. 1994; Caulkins et al. 1996; Palocsay et al. 2000; Silver and Chow-Martin 2002; Silver et al. 2000). We were also able to validate each of our models while more adequately controlling for shrinkage, which has often been problematic (Banks et al. 2004; Brodzinski et al. 1994; Monahan et al. 2000, 2005, 2006; Silver and Chow-Martin 2002; Silver et al. 2000; Steadman et al. 2000).…”
Section: Discussion (citation type: mentioning, confidence: 99%)
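The "shrinkage" this excerpt mentions (a model looking better on the sample it was fit to than on new cases) can be illustrated with a deliberately overfit toy classifier. Everything below is synthetic and invented for illustration; it is not the cited study's data or method.

```python
import random

# Toy illustration (synthetic data, invented names) of shrinkage: a model
# tuned to maximize fit on its own sample tends to look better there than
# on a held-out validation sample.
random.seed(42)

def make_case():
    # One synthetic record: a noisy risk score in [0, 1) and a recidivism
    # flag whose probability rises with the score.
    score = random.random()
    recidivated = 1 if random.random() < 0.3 + 0.4 * score else 0
    return score, recidivated

data = [make_case() for _ in range(400)]
train, valid = data[:200], data[200:]

def accuracy(cutoff, sample):
    # Fraction of cases where "score >= cutoff" matches the observed flag.
    return sum((s >= cutoff) == bool(r) for s, r in sample) / len(sample)

# "Fit" by exhaustively picking the cutoff that maximizes training
# accuracy -- an easy way to chase noise in the training sample.
best_cutoff = max((i / 100 for i in range(101)),
                  key=lambda c: accuracy(c, train))

train_acc = accuracy(best_cutoff, train)
valid_acc = accuracy(best_cutoff, valid)
print(f"cutoff {best_cutoff:.2f}: train {train_acc:.3f}, validation {valid_acc:.3f}")
```

Split-sample validation like this (or full cross-validation) is what makes shrinkage visible; the studies the quote contrasts differed mainly in how carefully they controlled for it.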
“…Two broad types of techniques have been of particular interest: decision trees (i.e., classification tree analysis, as well as random forests) and artificial neural networks (Berk et al. 2009; Brodzinski et al. 1994; Caulkins et al. 1996; Gardner et al. 1996; Grann and Langstrom 2007; Monahan et al. 2000; Palocsay et al. 2000; Silver et al. 2000; Stalans et al. 2004; Steadman et al. 2000; Thomas et al. 2005; Tollenaar and van der Heijden 2013). Although additional methods exist, our current interest is focused on examining the techniques that have, at the time of writing, garnered the most attention: decision trees/random forests and neural networks, and comparing them to traditional logistic regression.…”
Section: Machine Learning and Prediction (citation type: mentioning, confidence: 99%)
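A toy-scale sketch of the comparison this excerpt describes: a one-split tree ("stump") versus logistic regression on a synthetic binary outcome. The feature name (`prior_count`) and all parameters are invented for illustration; the cited studies used multilayer neural networks and full random forests on real offender data.

```python
import math
import random

# Synthetic data: one invented feature and a binary outcome whose true
# probability follows a logistic curve in that feature.
random.seed(0)
prior_count = [random.uniform(0, 10) for _ in range(300)]
outcome = [1 if random.random() < 1 / (1 + math.exp(-(x - 5))) else 0
           for x in prior_count]

# Logistic regression fitted by plain batch gradient descent.
w, b = 0.0, 0.0
for _ in range(3000):
    gw = gb = 0.0
    for x, t in zip(prior_count, outcome):
        p = 1 / (1 + math.exp(-(w * x + b)))
        gw += (p - t) * x
        gb += (p - t)
    w -= 0.02 * gw / len(prior_count)
    b -= 0.02 * gb / len(prior_count)

# Decision "stump" (a one-split tree): choose the split maximizing accuracy.
def stump_accuracy(cut):
    return sum((x >= cut) == bool(t)
               for x, t in zip(prior_count, outcome)) / len(prior_count)

cut = max((i / 10 for i in range(101)), key=stump_accuracy)

logit_acc = sum((1 / (1 + math.exp(-(w * x + b))) >= 0.5) == bool(t)
                for x, t in zip(prior_count, outcome)) / len(prior_count)
print(f"logistic accuracy {logit_acc:.3f}, stump accuracy {stump_accuracy(cut):.3f}")
```

The structure mirrors the literature's comparison, recursive partitioning versus a regression-based model, at the smallest possible scale; real studies compare out-of-sample performance, not training fit as done here.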
“…Moreover, it usually lacks real data for verification, thus making it difficult to apply the findings to actual policy (Visher & Weisburd, 1998). The second approach deals with the problem of predicting crime volume at a specific time and place using various statistical models (Brown & Oxford, 2001; Gorr, Olligschlaeger, & Thompson, 2003; Greenberg, 2001; Harries, 2003; Osgood, 2000; Palocsay, Wang, & Brookshire, 2000; Ratcliffe, 2005). […] utilized the Naïve exponential model to forecast crime one month ahead in Pittsburgh, US.…”
Section: Literature Review (citation type: mentioning, confidence: 99%)
“…Neural classifiers have been developed for a variety of applications, such as the prediction of bank and thrift failures [40,41], criminal recidivism [36,42], survival in intensive care units [43] and credit card fraud detection [44]. While the objective of these networks is usually defined as 'classification', their usefulness may be greatly expanded if the interpretation of the outputs as probabilities can be validated.…”
Section: Discussion (citation type: mentioning, confidence: 99%)
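Validating classifier outputs as probabilities, the point this last excerpt raises, is commonly checked with a Brier score and a binned calibration table. Below is a minimal pure-Python sketch on made-up predictions and outcomes; none of these numbers come from the cited networks.

```python
# Invented predicted probabilities and observed binary outcomes.
probs    = [0.1, 0.2, 0.15, 0.8, 0.7, 0.9, 0.35, 0.6, 0.25, 0.85]
outcomes = [0,   0,   0,    1,   1,   1,   0,    1,   0,    1]

# Brier score: mean squared error between predicted probability and outcome.
# 0 is perfect; always guessing 0.5 scores 0.25.
brier = sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)
print(f"Brier score: {brier:.3f}")

# Calibration: within each probability bin, the observed event rate should
# roughly match the mean predicted probability.
bins = {}
for p, y in zip(probs, outcomes):
    bins.setdefault(int(p * 5) / 5, []).append((p, y))  # 5 bins of width 0.2
for lo in sorted(bins):
    pairs = bins[lo]
    mean_p = sum(p for p, _ in pairs) / len(pairs)
    rate = sum(y for _, y in pairs) / len(pairs)
    print(f"bin [{lo:.1f},{lo + 0.2:.1f}): mean predicted {mean_p:.2f}, "
          f"observed rate {rate:.2f}")
```

If the network's outputs pass checks like these on a held-out sample, they can be read as recidivism probabilities rather than bare class labels, which is the expanded usefulness the quote describes.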