An important reason for choosing an intervention to treat a client's psychological problems is the expectation that the intervention will be effective in alleviating those problems. The authors investigated whether clinicians base their ratings of the effectiveness of interventions on models they construct to represent the factors causing and maintaining a client's problems. Forty clinical child psychologists drew causal models and rank-ordered interventions according to their expected effectiveness for two cases. The authors found that different clinicians constructed different causal models for the same client. They also found low to moderate agreement about the effectiveness of different interventions. Nevertheless, the authors could predict clinicians' ratings of effectiveness from their individual causal models.
Background: The LIRIK, an instrument for the assessment of child safety and risk, is designed to improve assessments by guiding professionals through a structured evaluation of relevant signs, risk factors, and protective factors. Objective: We aimed to assess the interrater agreement and the predictive validity of professionals' judgments made with the LIRIK in comparison to unstructured judgments.
Method: In study 1, professionals made safety and risk judgments for 12 vignettes with the LIRIK (group 1, n = 36) or without an instrument (group 2, n = 43). In study 2, we compared professionals' safety and risk judgments for 370 children made with the LIRIK (group 1, n = 278) or with no instrument (group 2, n = 92) against outcomes indicating actual unsafety recorded in case files 6 months later.
Results: In study 1, agreement about safety and risks was poor to moderate in both groups, and differences between groups were small and inconsistent. In study 2, the predictive validity of judgments was weak to moderate in both groups. In neither group did unsafe outcomes increase consistently when unsafety or risks were judged to be higher. Conclusions: Judgments made with the LIRIK were not more reliable or valid than unstructured professional judgments. These findings raise important questions about the value of risk assessment instruments and about how professional safety and risk judgments can be improved.
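For readers unfamiliar with how interrater agreement is quantified, the sketch below illustrates one common chance-corrected measure, Cohen's kappa, on hypothetical safety judgments for 12 vignettes. This is an illustration only, not the measure or data used in the studies above; the rater labels and values are invented.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items on which the two raters agree.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginal rates.
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (po - pe) / (1 - pe)

# Hypothetical judgments of 12 vignettes by two raters.
a = ["safe", "unsafe", "safe", "safe", "unsafe", "safe",
     "safe", "unsafe", "unsafe", "safe", "safe", "unsafe"]
b = ["safe", "unsafe", "unsafe", "safe", "unsafe", "safe",
     "unsafe", "unsafe", "safe", "safe", "safe", "safe"]
print(round(cohens_kappa(a, b), 2))  # → 0.31
```

A kappa around 0.3, as here, is conventionally read as "fair" agreement; values near 0 indicate agreement no better than chance, which is why chance correction matters when raters' base rates are skewed.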
When trying to determine the root cause of an observed effect, people may seek out information with which to test a candidate hypothesis. In two studies, we investigated how knowledge of causal structure influences this information-seeking process. Specifically, we asked whether people would choose to test for pieces of evidence that were far apart or close together in the learned causal structure of a disease category. In parallel with findings showing people's tendency to select diverse evidence in argument testing (López, 1995), our participants tested for evidence distantly located within the causal structure. Simultaneously, they rated the probability of occurrence of such diverse evidence as comparatively low. These findings suggest that rather than seeking out information most likely to confirm the hypothesis, people seek out evidence that they believe will most strongly support the hypothesis if present but that they also believe is relatively unlikely to be present (that is, might disconfirm the hypothesis).