Untrustworthy faces incur negative judgments across numerous domains. Existing work in this area has focused on situations in which the target's trustworthiness is relevant to the judgment (e.g., criminal verdicts and economic games). Yet in the present studies, we found that people also overgeneralized trustworthiness in criminal-sentencing decisions when trustworthiness should not be judicially relevant, and they did so even for the most extreme sentencing decision: condemning someone to death. In Study 1, we found that perceptions of untrustworthiness predicted death sentences (vs. life sentences) for convicted murderers in Florida (N = 742). Moreover, in Study 2, we found that the link between trustworthiness and the death sentence occurred even when participants viewed innocent people who had been exonerated after originally being sentenced to death. These results highlight the power of facial appearance to prejudice perceivers and affect life outcomes even to the point of execution, which suggests an alarming bias in the criminal-justice system.
Many Labs 3 is a crowdsourced project that systematically evaluated time-of-semester effects across many participant pools. See the Wiki for a table of contents of files and to download the manuscript.
The university participant pool is a key resource for behavioral research, and data quality is believed to vary over the course of the academic semester. This crowdsourced project examined time-of-semester variation in 10 known effects, 10 individual differences, and 3 data quality indicators over the course of the academic semester in 20 participant pools (N = 2,696) and with an online sample (N = 737). Weak time-of-semester effects were observed on data quality indicators, participant sex, and a few individual differences: conscientiousness, mood, and stress. However, there was little evidence for time of semester qualifying experimental or correlational effects. The generality of this evidence is unknown because only a subset of the tested effects demonstrated evidence for the original result in the whole sample. Mean characteristics of pool samples change slightly during the semester, but these data suggest that those changes are mostly irrelevant for detecting effects.
Keywords: social psychology; cognitive psychology; replication; participant pool; individual differences; sampling effects; situational effects
Many Labs 3: Evaluating participant pool quality across the academic semester via replication
University participant pools provide access to participants for a great deal of published behavioral research. The typical participant pool consists of undergraduates enrolled in introductory psychology courses that require students to complete some number of experiments over the course of the academic semester. Common variations include recruiting participants from other courses or making study participation an option for extra credit rather than a pedagogical requirement. Research-intensive universities often have a highly organized participant pool with a participant management system for signing up for studies and assigning credit.
Smaller or teaching-oriented institutions often have more informal participant pools that are organized ad hoc each semester or for an individual class. To avoid selection bias based on study content, most participant pools have procedures to avoid disclosing the content or purpose of individual studies during the sign-up process. However, students are usually free to choose the time during the semester that they sign up to complete the studies. This may introduce a selection bias in which data collection on different dates occurs with different kinds of participants, or in different situational circumstances (e.g., the carefree semester beginning versus the exam-stressed semester end). If participant characteristics differ across time during the academic semester, then the results of studies may be moderated by the time at which data collection occurs. Indeed, among behavioral researchers there are widespread intuitions, superstitions, and anecdotes about the "best" time to collect data in order to minimize error and maximize power. It is common, for example, to hear stories of an effect being obtained in the first part of the semester that then "d...
Concerns have been growing about the veracity of psychological research. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions, or attempt to replicate prior research, in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time-limited), efficient (in terms of re-using structures and principles for different projects), decentralized, diverse (in terms of participants and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside of the network). The PSA and other approaches to crowdsourced psychological science will advance our understanding of mental processes and behaviors by enabling rigorous research and systematically examining its generalizability.
Black men tend to be stereotyped as threatening and, as a result, may be disproportionately targeted by police even when unarmed. Here, we found evidence that biased perceptions of young Black men's physical size may play a role in this process. The results of 7 studies showed that people have a bias to perceive young Black men as bigger (taller, heavier, more muscular) and more physically threatening (stronger, more capable of harm) than young White men. Both bottom-up cues of racial prototypicality and top-down information about race supported these misperceptions. Furthermore, this racial bias persisted even in a target sample whose upper-body strength was controlled for (suggesting that racial differences in formidability judgments are a product of bias rather than accuracy). Biased formidability judgments in turn promoted participants' justifications of hypothetical use of force against Black suspects of crime. Thus, perceivers appear to integrate multiple pieces of information to ultimately conclude that young Black men are more physically threatening than young White men, believing that they must therefore be controlled using more aggressive measures.
Over the last ten years, Oosterhof and Todorov's valence-dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgments of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov's methodology across 11 world regions, 41 countries, and 11,570 participants. When we used Oosterhof and Todorov's original analysis strategy, the valence-dominance model generalized across regions. When we used an alternative methodology that allowed for correlated dimensions, we observed much less generalization. Collectively, these results suggest that the valence-dominance model generalizes well across regions when the dimensions are forced to be orthogonal, but that regional differences emerge when extraction methods that allow the dimensions to correlate, and rotation of the dimension-reduction solution, are used instead.
Across three studies, we test the hypothesis that the perceived “humanness” of a human face can have its roots, in part, in low-level, feature-integration processes typical of normal face perception—configural face processing. We provide novel evidence that perceptions of humanness/dehumanization can have perceptual roots. Relying on the well-established face inversion paradigm, we demonstrate that disruptions of configural face processing also disrupt the ability of human faces to activate concepts related to humanness (Experiment 1), disrupt categorization of human faces as human (but not animal faces as animals; Experiment 2), and reduce the levels of humanlike traits and characteristics ascribed to faces (Experiment 3). Taken together, the current findings provide a novel demonstration that dehumanized responses can arise from bottom-up perceptual cues, which suggests novel causes and consequences of dehumanizing responses.