“…For example, some psychological assessment instruments that are widely used in the legal arena have demonstrated field reliability that is considerably below what tends to be reported in controlled research studies and professional manuals (e.g., Edens, Cox, Smith, DeMatteo, & Sörman, 2015; Jeandarme et al., 2016; Miller et al., 2012; Murrie et al., 2009). Other research has raised concerns about predictive validity (e.g., Neal et al., 2015; van Heesch et al., 2016), such as a recent meta-analysis that found that scores on Hare's Psychopathy Checklist-Revised (PCL-R; Hare, 2003) were weaker predictors of sexual recidivism when assigned for real-world decision-making (d = .28) than when assigned for research purposes (d = .44; Hawes, Boccaccini, & Murrie, 2013).…”
mentioning
confidence: 99%
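The snippet above contrasts predictive-validity effect sizes (Cohen's d) for PCL-R scores assigned in the field versus in research. As a rough interpretive aid, a d value can be converted to an AUC (the probability that a randomly chosen recidivist outscores a randomly chosen non-recidivist) under an equal-variance normal-distributions assumption. This sketch is illustrative only; the conversion is not part of the cited meta-analysis.

```python
from math import erf

def d_to_auc(d: float) -> float:
    """Convert Cohen's d to AUC assuming two equal-variance normal
    distributions: AUC = Phi(d / sqrt(2)), where Phi is the standard
    normal CDF, i.e. 0.5 * (1 + erf(d / 2))."""
    return 0.5 * (1.0 + erf(d / 2.0))

# The two effect sizes quoted above, converted for illustration:
for d in (0.28, 0.44):
    print(f"d = {d:.2f} -> AUC ~ {d_to_auc(d):.3f}")
```

On this reading, the field value (d = .28) corresponds to an AUC of roughly .58, versus roughly .62 for the research value (d = .44), which makes the practical size of the gap easier to see.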
“…It has, however, only been in recent years that forensic assessment researchers have turned their focus to field studies, examining the psychometric properties of instrument scores and evaluator opinions from real-world cases. Findings from such studies suggest generally weaker performance in the field than in the lab, although the findings are sometimes mixed, and some measures and procedures appear to be more affected than others (e.g., Boccaccini, Murrie, Caperton, & Hawes, 2009; Gowensmith, Murrie, & Boccaccini, 2013; Miller, Kimonis, Otto, Kline, & Wasserman, 2012; Murrie et al., 2009; Neal, Miller, & Shealy, 2015; van Heesch, Jeandarme, Pouls, & Vervaeke, 2016; Vincent, Guy, Fusco, & Gershenson, 2012). For example, some psychological assessment instruments that are widely used in the legal arena have demonstrated field reliability that is considerably below what tends to be reported in controlled research studies and professional manuals (e.g., Edens, Cox, Smith, DeMatteo, & Sörman, 2015; Jeandarme et al., 2016; Miller et al., 2012; Murrie et al., 2009).…”
The last several decades have seen a major upswing in the development and use of psychological assessment instruments in forensic and correctional settings. At the same time, admissibility standards increasingly have stressed the importance of the reliability and validity of evidence in legal proceedings. Recent research has, however, raised serious concerns about (a) the reliability of forensic science evidence in general, (b) the replicability of psychological research findings in general and in field settings especially, and (c) the interrater reliability and predictive validity of forensic psychological assessment evidence in particular. In this introduction to the special issue on the field utility of forensic assessment instruments and procedures, we provide an overview of key issues bearing on field studies, focusing on why such research is critically important to improving the quality of the practice of forensic mental health assessments. We also identify various methodological issues and constraints relevant to conducting research outside of controlled settings. We conclude with recommendations for how future field research can improve upon the current state of the discipline in forensic mental health assessment.
“…Based on 47 studies with age information, the median age was 35 years (IQR 33–38). Studies were conducted in 12 countries—Austria,34 Australia,35–37 Belgium,38–42 Canada,43–51 Denmark,52–54 Finland,50,55 Germany,50,56 Japan,57 Netherlands,3,58–67 Sweden,50,68–74 the UK,75–80 and the USA81,82—all high-income economies (appendix p 4).83…”
Section: Results
mentioning
confidence: 99%
“…[Forest-plot excerpt] van Heesch et al (2016),42 Tengstrom (2001),71 Gray et al (2007),75 Grann et al (2000);68 RE model for subgroup (p=0·47; I²=0·00%)…”
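The forest-plot excerpt above reports a random-effects subgroup with I² = 0·00%, meaning the included studies show no more variability than sampling error alone would produce. For readers unfamiliar with the statistic, Higgins' I² is computed from Cochran's Q and the degrees of freedom (number of studies minus one). The sketch below uses made-up Q values for illustration; they are not taken from the cited analysis.

```python
def i_squared(q: float, df: int) -> float:
    """Higgins' I^2: percentage of total variability across studies
    attributable to heterogeneity rather than chance.
    I^2 = max(0, (Q - df) / Q) * 100; floored at 0 when Q <= df."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# Illustrative values: with 4 studies (df = 3), any Q at or below 3
# yields I^2 = 0%, as in the subgroup quoted above.
print(i_squared(2.4, 3))   # low heterogeneity -> 0.0
print(i_squared(12.0, 3))  # substantial heterogeneity -> 75.0
```

An I² of 0% does not prove the studies are homogeneous; with only four studies the Q test has little power, which is one reason meta-analysts report the p value (here p=0·47) alongside it.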