People rapidly form impressions from facial appearance, and these impressions affect social decisions. We argue that data-driven, computational models are the best available tools for identifying the source of such impressions. Here we validate seven computational models of social judgments of faces: attractiveness, competence, dominance, extroversion, likability, threat, and trustworthiness. The models manipulate both face shape and reflectance (i.e., cues such as pigmentation and skin smoothness). We show that human judgments track the models' predictions (Experiment 1) and that the models differentiate between different judgments, though this differentiation is constrained by the similarity of the models (Experiment 2). We also make the validated stimuli available for academic research: seven databases, each containing 25 identities manipulated in the respective model to take on seven dimension values ranging from -3 SD to +3 SD (175 stimuli per database). Finally, we show how the computational models can be used to control for variance shared among the models. For example, even for highly correlated dimensions (e.g., dominance and threat), we can identify cues specific to each dimension and, consequently, generate faces that vary only on these cues.
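The stimulus-generation procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each face is a coefficient vector in a statistical face space, and that a social-judgment model is a direction in that space scaled so one unit equals 1 SD of judged intensity (all variable names are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical face space: each face is a vector of shape and
# reflectance coefficients around a mean (average) face.
n_coeffs = 50
mean_face = np.zeros(n_coeffs)

# A judgment model is a unit direction in face space; moving along it
# by k units shifts the perceived trait by k SD.
trust_direction = rng.normal(size=n_coeffs)
trust_direction /= np.linalg.norm(trust_direction)

# Generate the seven levels used in each database: -3 SD to +3 SD.
levels = np.arange(-3, 4)  # [-3, -2, -1, 0, 1, 2, 3]
faces = [mean_face + k * trust_direction for k in levels]

print(len(faces))  # 7 levels for one identity
```

Applying the same seven offsets to 25 identity vectors yields the 175 stimuli per database described above.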
Correctly perceiving emotions in others is a crucial part of social interactions. We constructed a set of dynamic stimuli to determine the relative contributions of the face and body to the accurate perception of basic emotions. We also manipulated the length of these dynamic stimuli in order to explore how much information is needed to identify emotions. The findings suggest that even a short exposure time of 250 milliseconds provided enough information to identify an emotion above chance level. Furthermore, we found that recognition patterns from the face alone and the body alone differed as a function of emotion. These findings highlight the role of the body in emotion perception and suggest an advantage for angry bodies: in contrast to all other emotions, recognition of anger from the body alone was comparable to recognition from the face, which may be advantageous for perceiving imminent threat from a distance.
We investigate both similarities and differences between dominance and strength judgments using a data-driven approach. First, we created statistical face shape models of judgments of both dominance and physical strength. The resulting faces representing dominance and strength were highly similar, and participants were at chance in discriminating faces generated by the two models. Second, although the models are highly correlated, it is possible to create a model that captures their differences. This model generates faces that vary from dominant-yet-physically weak to nondominant-yet-physically strong. Participants were able to identify the difference in strength between the physically strong-yet-nondominant faces and the physically weak-yet-dominant faces. However, this was not the case for identifying dominance. These results suggest that representations of social dominance and physical strength are highly similar, and that strength is used as a cue for dominance more than dominance is used as a cue for strength.
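The difference model described above can be sketched as an orthogonal projection in face space. This is an illustrative reconstruction under assumed names, not the authors' code: it treats the dominance and strength models as correlated unit directions and builds a direction that changes strength-related cues while leaving the dominance projection fixed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_coeffs = 50

# Hypothetical unit directions for two highly correlated models.
dominance = rng.normal(size=n_coeffs)
dominance /= np.linalg.norm(dominance)
strength = 0.95 * dominance + 0.05 * rng.normal(size=n_coeffs)
strength /= np.linalg.norm(strength)

# Difference model: the component of strength orthogonal to dominance.
diff = strength - (strength @ dominance) * dominance
diff /= np.linalg.norm(diff)

# Moving along `diff` alters strength-specific cues but leaves the
# face's dominance projection unchanged (zero up to float rounding).
face_offset = 2 * diff
assert abs(face_offset @ dominance) < 1e-9
assert face_offset @ strength > 0  # strength projection does change
```

Faces generated along `diff` would thus vary from physically weak-yet-dominant to physically strong-yet-nondominant, as in the experiment above.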
People often make approachability decisions based on perceived facial trustworthiness. However, it remains unclear how people learn trustworthiness from a population of faces and whether this learning influences their approachability decisions. Here we investigated the neural underpinning of approach behavior and tested two important hypotheses: whether the amygdala adapts to different trustworthiness ranges and whether the amygdala is modulated by task instructions and evaluative goals. We showed that participants adapted to the stimulus range of perceived trustworthiness when making approach decisions and that these decisions were further modulated by the social context. The right amygdala showed both linear and quadratic responses to trustworthiness level, as observed in prior studies. Notably, the amygdala's response to trustworthiness was not modulated by stimulus range or social context, suggesting that the observed adaptation was behavioral rather than neural. Together, our data reveal a robust behavioral adaptation to different trustworthiness ranges as well as a neural substrate underlying approach behavior based on perceived facial trustworthiness.