For more than 25 years, implicit measures have shaped research, theorizing, and intervention in psychological science. During this period, the development and deployment of implicit measures have been predicated on a number of theoretical, methodological, and applied assumptions. Yet these assumptions are rarely met and frequently violated. As a result, the merit of research using implicit measures has increasingly been cast into doubt. In this article, we argue that future implicit measures research could benefit from adherence to four guidelines based on a functional approach, wherein performance on implicit measures is described and analyzed as behavior emitted under specific conditions and captured in a specific measurement context. We unpack this approach and highlight recent work illustrating both its theoretical and practical value.
The Propositional Evaluation Paradigm (PEP) has recently shown promise as a relational implicit measure (i.e., an implicit measure that can specify how stimuli are related). Whereas the standard PEP measures response times, mousetracking is becoming increasingly popular for quantifying response competition and offers distinct advantages over response times alone. Across four preregistered experiments (N = 737), we combine the utility of the PEP method with the unique benefits of mousetracking by developing a mousetracking PEP (MT-PEP). The MT-PEP effectively captured group-level beliefs across domains (Experiments 1–4). It produced larger effects (Experiment 3), exhibited superior predictive validity (Experiment 3), and showed better split-half reliability (Experiments 3–4) than the standard PEP. Both PEPs appear to be intentionally controllable, particularly the MT-PEP (Experiments 3–4). Nevertheless, the MT-PEP shows strong potential for capturing relational information and may be considered implicit in the sense of capturing fast and unaware (but not unintentional) responding.
The Affect Misattribution Procedure (AMP) has attracted considerable attention and use in psychological science as a measure of evaluations, attitudes, and biases. The AMP’s appeal to researchers rests in large part on the promise that it taps into unintentional and unaware (i.e., implicit) psychological processes. However, past claims about the implicitness of AMP effects may be inaccurate due to a range of methodological, statistical, and conceptual issues. We re-examine a key premise underpinning the AMP’s use: that AMP effects are driven by the unaware influence of primes on responses. Across five preregistered experiments (N = 1021) plus meta-analyses, we demonstrate that AMP effects and their predictive validity are primarily driven by a subset of influence-aware trials (within individuals) and by high rates of influence-awareness (between individuals). Counterintuitively, an individual’s influence-awareness rate in one AMP predicts their performance in a previously completed AMP, even when the AMPs assess entirely different attitude domains. Taken together, our results suggest that AMP effects are not particularly implicit, are not mediated by misattribution, and do not represent an equally valid measure of attitudes across individuals. All materials and data are available at osf.io/gv7cm.
A growing body of evidence shows the importance of accommodating relational information within implicit measures of psychological constructs. Whereas relational variants of the Implicit Association Test (IAT) have been proposed in the past, we put forward the Truth Misattribution Procedure (TMP) as a relational variant of the Affect Misattribution Procedure (AMP) that aims to capture implicit beliefs. Across three experiments, we demonstrate that TMP effects are sensitive to the relational information contained within sentence primes, both in the context of causal stimulus relations with a known truth value (e.g., “smoking causes cancer” vs. “smoking prevents cancer”) and in the domain of gender stereotypes (e.g., “men are arrogant” vs. “men should be arrogant”). The potential benefits of the TMP are discussed.