People evaluate a stranger’s trustworthiness from their facial features in a fraction of a second, despite common advice “not to judge a book by its cover.” Evaluations of trustworthiness have critical and widespread social impact, predicting financial lending, mate selection, and even criminal justice outcomes. Consequently, understanding how people perceive trustworthiness from faces has been a major focus of scientific inquiry, and detailed models explain how consensus impressions of trustworthiness are driven by facial attributes. However, facial impression models do not consider variation between observers. Here, we develop a sensitive test of trustworthiness evaluation and use it to document substantial, stable individual differences in trustworthiness impressions. Via a twin study, we show that these individual differences are largely shaped by variation in personal experience, rather than genes or shared environments. Finally, using multivariate twin modeling, we show that variation in trustworthiness evaluation is specific, dissociating from other key facial evaluations of dominance and attractiveness. Our finding that variation in facial trustworthiness evaluation is driven mostly by personal experience represents a rare example of a core social perceptual capacity being predominantly shaped by a person’s unique environment. Notably, it stands in sharp contrast to variation in facial recognition ability, which is driven mostly by genes. Our study provides insights into the development of the social brain, offers a different perspective on disagreement in trust in wider society, and motivates new research into the origins and potential malleability of face evaluation, a critical aspect of human social cognition.
There are large, reliable individual differences in the recognition of facial expressions of emotion across the general population. The sources of this variation are not yet known. We investigated the contribution of a key face perception mechanism, adaptive coding, which calibrates perception to optimize discrimination within the current perceptual "diet." We expected that a facial expression system that readily recalibrates might boost sensitivity to variation among facial expressions, thereby enhancing recognition ability. We measured adaptive coding strength with an established facial expression aftereffect task and measured facial expression recognition ability with 3 tasks optimized for the assessment of individual differences. As expected, expression recognition ability was positively associated with the strength of facial expression aftereffects. We also asked whether individual variation in affective factors might contribute to expression recognition ability, given that clinical levels of such traits have previously been linked to ability. Expression recognition ability was negatively associated with self-reported anxiety but not with depression, mood, or degree of autism-like or empathetic traits. Finally, we showed that the perceptual factor of adaptive coding contributes to variation in expression recognition ability independently of affective factors.
Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average face. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that children's coding of facial expressions, like adults', is norm-based.
Facial expression is theorized to be visually represented in a multidimensional expression space, relative to a norm. This norm-based coding is typically argued to be implemented by a two-pool opponent coding system. However, the evidence supporting the opponent coding of expression cannot rule out the presence of a third channel tuned to the center of each coded dimension. Here we used a paradigm not previously applied to facial expression to determine whether a central-channel model is necessary to explain expression coding. Participants identified expressions taken from a fear/antifear trajectory, first at baseline and then in two adaptation conditions. In one condition, participants adapted to the expression at the center of the trajectory. In the other condition, participants adapted to alternating images from the two ends of the trajectory. The range of expressions that participants perceived as lying at the center of the trajectory narrowed in both conditions, a pattern that is not predicted by the central-channel model but can be explained by the opponent-coding model. Adaptation to the center of the trajectory also increased identification of both fear and antifear, which may indicate a functional benefit for adaptive coding of facial expression.
Appearance-based trustworthiness inferences may reflect the misinterpretation of emotional expression cues. Children and adults typically perceive faces that look happy to be relatively trustworthy and those that look angry to be relatively untrustworthy. Given reports of atypical expression perception in children with Autism Spectrum Disorder (ASD), the current study aimed to determine whether the modulation of trustworthiness judgments by emotional expression cues in children with ASD is also atypical. Cognitively-able children with and without ASD, aged 6–12 years, rated the trustworthiness of faces showing happy, angry and neutral expressions. Trust judgments in children with ASD were significantly modulated by overt happy and angry expressions, like those of typically-developing children. Furthermore, subtle emotion cues in neutral faces also influenced trust ratings of the children in both groups. These findings support a powerful influence of emotion cues on perceived trustworthiness, which even extends to children with social cognitive impairments.
We used aftereffects to investigate the coding mechanisms underlying our perception of facial expression. Recent evidence for dimensions that are common to the coding of both expression and identity suggests that the same coding system could be used for both attributes. Identity is adaptively opponent coded by pairs of neural populations tuned to opposite extremes of relevant dimensions. Therefore, we hypothesized that expression would also be opponent coded. An important line of support for opponent coding is that aftereffects increase with adaptor extremity (distance from an average test face) over the full natural range of possible faces. Previous studies have reported that expression aftereffects increase with adaptor extremity. Critically, however, they did not establish the extent of the natural range and so have not ruled out a decrease within that range that could indicate narrowband, multichannel coding. Here we show that expression aftereffects, like identity aftereffects, increase linearly over the full natural range of possible faces and remain high even for impossibly distorted adaptors. These results suggest that facial expression, like face identity, is opponent coded.
Adaptation to facial expressions produces aftereffects that bias perception of subsequent expressions away from the adaptor. Studying the temporal dynamics of an aftereffect can help us to understand the neural processes that underlie perception, and how they change with experience. Little is known about the temporal dynamics of the expression aftereffect. We conducted two experiments to measure the timecourse of this aftereffect. In Experiment 1 we examined how the size of the aftereffect varies with changes in the duration of the adaptor and test stimuli. We found that the expression aftereffect follows the classic timecourse pattern of logarithmic build-up and exponential decay that has been demonstrated for many lower-level aftereffects, as well as for facial identity and figural face aftereffects. This classic timecourse pattern suggests that the adaptive calibration mechanisms of facial expression are similar to those of lower-level visual stimuli, and is consistent with a perceptual locus for the adaptation aftereffect. We also found that aftereffects could be generated by as little as 1 s of adaptation, and in some conditions lasted for as long as 3200 ms. We extended this last finding in Experiment 2, exploring the longevity of the expression aftereffect by adding a stimulus-free gap of varying duration between adaptation and test. We found that significant expression aftereffects were still present 32 s after adaptation. The persistence of the expression aftereffect suggests that it may have a considerable impact on day-to-day expression perception.