Research has demonstrated that implicit and explicit evaluations of the same object can diverge. Explanations of such dissociations frequently appeal to dual-process theories, according to which implicit evaluations reflect object-valence contingencies independent of their perceived validity, whereas explicit evaluations reflect the perceived validity of object-valence contingencies. Although there is evidence supporting these assumptions, it remains unclear whether dissociations can arise when object-valence contingencies are judged to be true or false during the learning of these contingencies. Challenging dual-process accounts that propose a simultaneous operation of two parallel learning mechanisms, results from three experiments showed that the perceived validity of evaluative information about social targets qualified both explicit and implicit evaluations when validity information was available immediately after the encoding of the valence information; however, delaying the presentation of validity information reduced its qualifying impact for implicit, but not explicit, evaluations.
Experimental paradigms designed to assess "implicit" representations are currently very popular in many areas of psychology. The present article addresses the validity of three widespread assumptions in research using these paradigms: that (a) implicit measures reflect unconscious or introspectively inaccessible representations; (b) the major difference between implicit measures and self-reports is that implicit measures are resistant or less susceptible to social desirability; and (c) implicit measures reflect highly stable, older representations that have their roots in long-term socialization experiences. Drawing on a review of the available evidence, we conclude that the validity of all three assumptions is equivocal and that theoretical interpretations should be adjusted accordingly. We discuss an alternative conceptualization that distinguishes between activation and validation processes.
In this methodological commentary, we use Bem's (2011) recent article reporting experimental evidence for psi as a case study for discussing important deficiencies in modal research practice in empirical psychology. We focus on (a) overemphasis on conceptual rather than close replication, (b) insufficient attention to verifying the soundness of measurement and experimental procedures, and (c) flawed implementation of null hypothesis significance testing. We argue that these deficiencies contribute to weak method-relevant beliefs that, in conjunction with overly strong theory-relevant beliefs, lead to a systemic and pernicious bias in the interpretation of data that favors a researcher's theory. Ultimately, this interpretation bias increases the risk of drawing incorrect conclusions about human psychology. Our analysis points to concrete recommendations for improving research practice in empirical psychology. We recommend (a) a stronger emphasis on close replication, (b) routinely verifying the integrity of measurement instruments and experimental procedures, and (c) using stronger, more diagnostic forms of null hypothesis testing.
Cognitive complexity was measured in terms of dimensionality and articulation. The consistency of these measures across different measurement conditions was examined by correlating measures obtained from two sets of grids differing in constructs, objects (role persons), and tasks (rating vs. grouping). The measures of dimensionality were the modified Bieri's matching score, Scott's D, and Ware's percentage of variance accounted for by the first principal component; the measures of articulation were Bieri's matching score, Scott's C, and the number of groups. The main findings were as follows. (1) Dimensionality varied considerably between the two conditions differing in grid elements, whereas articulation remained relatively coherent. (2) According to the results of the split-half method, alternation of objects in a grid contributed more to the fluctuation of dimensionality than did alternation of constructs.
There is currently an unprecedented level of doubt regarding the reliability of research findings in psychology. Many recommendations have been made to improve the current situation. In this article, we report results from PsychDisclosure.org, a novel open-science initiative that provides a platform for authors of recently published articles to disclose four methodological design specification details that are not required to be disclosed under current reporting standards but that are critical for accurate interpretation and evaluation of reported findings. Grassroots sentiment, as manifested in the positive and appreciative response to our initiative, indicates that psychologists want to see changes made at the systemic level regarding disclosure of such methodological details. Almost 50% of contacted researchers disclosed the requested design specifications for the four methodological categories (excluded subjects, nonreported conditions, nonreported measures, and sample size determination). Disclosed information provided by participating authors also revealed several instances of questionable editorial practices, which need to be thoroughly examined and redressed. On the basis of these results, we argue that the time has come for mandatory methods disclosure statements in all psychology journals, which would be an important step toward improving the reliability of findings in psychology.
Over the last decade, a new class of indirect measurement procedures has become increasingly popular in many areas of psychology. However, these implicit measures have also sparked controversies about the nature of the constructs they assess. One controversy has been stimulated by the question of whether some implicit measures (or implicit measures in general) assess extra-personal rather than personal associations. We argue that, despite empirical and methodological advances stimulated by this debate, researchers have not sufficiently addressed the conceptual question of how to define extra-personal in contrast to personal associations. Based on a review of possible definitions, we argue that some definitions render the controversy obsolete, whereas others imply fundamentally different empirical and methodological questions. As an alternative to defining personal and extra-personal associations in an objective sense, we suggest an empirical approach that investigates the meta-cognitive inferences that make a given association subjectively personal or extra-personal for the individual.
J. De Houwer, S. Teige-Mocigemba, A. Spruyt, and A. Moors's normative analysis of implicit measures provides an excellent clarification of several conceptual ambiguities surrounding the validation and use of implicit measures. The current comment discusses an important, yet unacknowledged, implication of J. De Houwer et al.'s analysis, namely, that investigations addressing the proposed implicitness criterion (i.e., does the relevant psychological attribute influence measurement outcomes in an automatic fashion?) will be susceptible to fundamental misinterpretations if they are conducted independently of the proposed what criterion (i.e., is the measurement outcome causally produced by the psychological attribute the measurement procedure was designed to assess?). As a solution, it is proposed that experimental validation studies should be combined with a correlational approach in order to determine whether a given manipulation influenced measurement scores via variations in the relevant psychological attribute or via secondary sources of systematic variance.