2017
DOI: 10.1111/1745-9133.12270
Effects of Automating Recidivism Risk Assessment on Reliability, Predictive Validity, and Return on Investment (ROI)

Abstract: Research Summary The relationship between reliability and validity is an important but often overlooked topic of research on risk assessment tools in the criminal justice system. By using data from the Minnesota Screening Tool Assessing Recidivism Risk (MnSTARR), a risk assessment instrument the Minnesota Department of Corrections (MnDOC) developed and began using in 2013, we evaluated the impact of inter‐rater reliability (IRR) on predictive performance (validity) among offenders released in 2014. After compa…
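The abstract's central measurement, inter‐rater reliability, is typically reported as an intraclass correlation. As a minimal sketch in Python (not the authors' code; the ratings below are simulated and the rater setup is hypothetical), a one‐way random‐effects ICC can be computed as follows:

```python
# A minimal sketch, not the authors' code: one-way random-effects
# intraclass correlation, ICC(1,1), on simulated scores from two
# hypothetical raters assessing the same 200 cases.
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """ICC(1,1) for a (n_subjects, k_raters) array of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    # between-subjects and within-subjects mean squares from a one-way ANOVA
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
true_risk = rng.normal(size=200)                                     # latent risk (simulated)
ratings = true_risk[:, None] + rng.normal(scale=0.3, size=(200, 2))  # two noisy raters
print(f"ICC between raters: {icc_oneway(ratings):.2f}")              # high agreement, ~0.9
```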

Cited by 41 publications (108 citation statements); citing publications span 2017–2021. References 49 publications.
“…The advent of actuarial risk assessment instruments has given rise to a rich literature demonstrating the superiority of these tools at predicting recidivism over traditional approaches involving professional or clinical judgment (Andrews and Bonta; Andrews, Bonta, and Wormith; Bonta, Law, and Hanson). As discussed by Grant Duwe and Michael Rocque (this issue), however, much of this literature has focused on issues of validity, or the predictive power of these instruments (Gendreau, Goggin, and Smith; Gendreau, Little, and Goggin; Smith, Cullen, and Latessa). There have been fewer efforts to examine inter‐rater reliability, or the extent to which raters generate consistent scores across assessments (Desmarais and Singh; Rocque and Plummer‐Beale).…”
mentioning (confidence: 99%)
“…There have been fewer efforts to examine inter‐rater reliability, or the extent to which raters generate consistent scores across assessments (Desmarais and Singh; Rocque and Plummer‐Beale). Moreover, no studies to date have examined the relationship between reliability and validity, or the capacity of automated machine scoring, in which risk instruments are populated through electronic data extraction, to minimize reliability problems and enhance prediction (Duwe and Rocque). This policy essay will discuss the importance of the study by Duwe and Rocque to the literature on reliability and automated scoring, identify potential areas of future research, and highlight the policy implications inherent in assessing reliability and using automated scoring techniques.…”
mentioning (confidence: 99%)
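To make "automated machine scoring" concrete: each instrument item is computed deterministically from electronic records, so the same record always yields the same total and rater disagreement disappears by construction. A hedged illustration (the record fields, items, caps, and weights below are hypothetical, not the MnSTARR's actual content):

```python
# A hedged sketch of automated scoring: instrument items populated from
# electronic records rather than manual entry. All fields and weights
# here are hypothetical stand-ins, not the MnSTARR's actual items.
from dataclasses import dataclass

@dataclass
class OffenderRecord:  # stand-in for a row pulled from a corrections database
    prior_felonies: int
    age_at_release: int
    discipline_convictions: int

def score_automated(rec: OffenderRecord) -> int:
    """Deterministic item scoring: the same record always yields the same
    total, which is why automation eliminates inter-rater disagreement."""
    score = 0
    score += min(rec.prior_felonies, 5)            # capped criminal-history item
    score += 2 if rec.age_at_release < 25 else 0   # age-at-release item
    score += min(rec.discipline_convictions, 3)    # institutional-conduct item
    return score

print(score_automated(OffenderRecord(prior_felonies=3, age_at_release=22,
                                     discipline_convictions=1)))  # -> 6
```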
“…Because the methods of scoring and the instruments themselves vary widely, research is needed on the approaches that lead to the most reliable and valid outcomes. In response to this need, Grant Duwe and Michael Rocque (this issue) study the relationship between reliability and validity with data from the Minnesota Screening Tool Assessing Recidivism Risk (MnSTARR), a risk assessment instrument the Minnesota Department of Corrections (MnDOC) developed and began using in 2013. Using follow‐up data on offenders released in 2014 and manual MnSTARR assessments scored by MnDOC staff, Duwe and Rocque assess the impact of inter‐rater reliability (IRR) on predictive performance (validity).…”
mentioning (confidence: 99%)
“…Duwe and Rocque (this issue) find that the MnSTARR was scored with a high degree of consistency by MnDOC staff and that intraclass correlation (ICC) values were high, which they attribute to the instrument comprising mostly objective rather than subjective risk factors. Yet even with high IRR on the manually scored instruments, they report that (a) the automated assessments significantly outperformed those scored manually and, as might be expected, (b) the more inter‐rater disagreement increased, the more predictive performance decreased.…”
mentioning (confidence: 99%)
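Finding (b) has a simple mechanical reading: noise introduced by rater disagreement attenuates whatever signal a score carries about the outcome. A toy simulation (an assumed setup on simulated data, not the authors' analysis) shows the pattern:

```python
# A toy simulation, not the authors' analysis: as rater noise grows,
# the AUC of the resulting score against a binary outcome falls.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
true_risk = rng.normal(size=5000)                  # latent risk (simulated)
outcome = (true_risk + rng.normal(size=5000)) > 0  # binary recidivism outcome

for noise in (0.1, 0.5, 1.0, 2.0):                 # stand-in for rater disagreement
    score = true_risk + rng.normal(scale=noise, size=5000)  # one rater's manual score
    print(f"rater noise {noise:.1f} -> AUC {roc_auc_score(outcome, score):.3f}")
```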