We systematically evaluated the peer-reviewed Rorschach validity literature for the 65 main variables in the popular Comprehensive System (CS). Across 53 meta-analyses examining variables against externally assessed criteria (e.g., observer ratings, psychiatric diagnosis), the mean validity was r = .27 (k = 770), as compared to r = .08 (k = 386) across 42 meta-analyses examining variables against introspectively assessed criteria (e.g., self-report). Using Hemphill's (2003) data-driven guidelines for interpreting the magnitude of assessment effect sizes with only externally assessed criteria, we found 13 variables had excellent support (r ≥ .33, p < .001, fail-safe N [FSN] > 50), 17 had good support (r ≥ .21, p < .05, FSN ≥ 10), 10 had modest support (p < .05 and either r ≥ .21 with FSN < 10, or r = .15-.20 with FSN ≥ 10), 13 had little support (p < .05 and either r < .15 or FSN < 10) or no support (p > .05), and 12 had no construct-relevant validity studies. The variables with the strongest support were largely those that assess cognitive and perceptual processes (e.g., Perceptual-Thinking Index, Synthesized Response); those with the least support tended to be very rare (e.g., Color Projection) or among the more recently developed scales (e.g., Egocentricity Index, Isolation Index). Our findings are less positive, more nuanced, and more inclusive than those reported in the CS test manual. We discuss study limitations and the implications for research and clinical practice, including the importance of using different methods in order to improve our understanding of people.
Wood, Garb, Nezworski, Lilienfeld, and Duke (2015) found our systematic review and meta-analyses of 65 Rorschach variables to be accurate and unbiased, and hence removed their previous recommendation for a moratorium on the applied use of the Rorschach. However, Wood et al. (2015) hypothesized that publication bias would exist for 4 Rorschach variables. To test this hypothesis, they replicated our meta-analyses for these 4 variables and added unpublished dissertations to the pool of articles. In the process, they used procedures that contradicted their standards and recommendations for sound Rorschach research, which consistently led to significantly lower effect sizes. In reviewing their meta-analyses, we found numerous methodological errors, data errors, and omitted studies. In contrast to their strict requirements for interrater reliability in the Rorschach meta-analyses of other researchers, they did not report interrater reliability for any of their coding and classification decisions. In addition, many of their conclusions were based on a narrative review of individual studies and post hoc analyses rather than their meta-analytic findings. Finally, we challenge their sole use of dissertations to test publication bias because (a) they failed to reconcile their conclusion that publication bias was present with the analyses we conducted showing its absence, and (b) we found numerous problems with dissertation study quality. In short, one cannot rely on the findings or the conclusions reported in Wood et al.
This article documents and discusses the importance of using a formal systematic approach to validating psychological tests. To illustrate, results are presented from a systematic review of the validity findings cited in the Rorschach Comprehensive System (CS; Exner, 2003) test manual, originally conducted during the manuscript review process for Mihura, Meyer, Dumitrascu, and Bombel's (2013) CS meta-analyses. Our review documents (a) the degree to which the CS test manual reports validity findings for each test variable, (b) whether these findings are publicly accessible or unpublished studies coordinated by the test developer, and (c) the presence and nature of data discrepancies between the CS test manual and the cited source. Implications are discussed for the CS in particular, the Rorschach more generally, and psychological tests more broadly. Notably, a history of intensive scrutiny of the Rorschach has resulted in more stringent standards applied to it, even though its scales have more published and supportive construct validity meta-analyses than any other psychological test. Calls are made for (a) a mechanism to correct data errors in the scientific literature, (b) guidelines for test developers' key unpublished studies, and
Recently, psychologists have emphasized the response process (that is, the psychological operations and behaviors that lead to test scores) when designing psychological tests, interpreting their results, and refining their validity. To illustrate the centrality of the response process in construct validity and test interpretation, we provide a historical, conceptual, and empirical review of the main uses of the background white space of the Rorschach cards, called space reversal (SR) and space integration (SI) in the Rorschach Performance Assessment System. We show how SR and SI's unique response processes result in different interpretations, and that reviewing their literatures with these distinct interpretations in mind produces the expected patterns of convergent and discriminant validity. That is, SR was uniquely related to measures of oppositionality; SI was uniquely related to measures of cognitive complexity; and both SR and SI were related to measures of creativity. Our review further suggests that the Comprehensive System's use of a single space code for all uses of white space likely led to its lack of meta-analytic support as a measure of oppositionality (Mihura, Meyer, Dumitrascu, & Bombel, 2013). We close by discussing the use of the response process to improve test interpretation, develop better measures, and advance the design of research.
The virtue of humility and the construct of differentiation have shown protective influences against narcissism among religious leaders. Our cross-sectional study tested a moderated mediation model of these protective influences on a hypothesized negative narcissism-well-being association and a negative spiritual grandiosity-well-being association. Our sample consisted of clergy candidates (N = 75; M age = 35 years; 70% male; 90% White) receiving psychological assessment services as a part of their vocational training. Our results largely supported the proposed moderated mediation model, with evidence of protective influences for humility and differentiation. The results showed that greater humility lessened the negative influence of narcissism on well-being via differentiation, and that greater humility conditioned the direct association such that greater spiritual grandiosity predicted greater well-being. Implications of the findings center on the importance of assessing character strengths and intra- and interpersonal affect regulation capacity, or differentiation, among clergy candidates, and we highlight the need for continued research on client humility.