The social brain hypothesis proposes that the large neocortex of Hominoids evolved to cope with the increasing demands of complex group living and the greater number of interindividual relationships it entails. Group living requires that individuals communicate effectively about environmental and internal events. Recent data have highlighted the complexity of chimpanzee communication, including graded facial expressions and referential vocalizations. Among Hominoids, elaborate facial communication is accompanied by specializations in the brain areas controlling facial movement. Finally, the evolution of empathy, or emotional awareness, may have a neural basis in specialized neocortical cells: spindle cells, which have been associated with self-conscious emotions, and mirror neurons, which have recently been shown to activate in response to communicative facial gestures.
Categorical perception (CP) refers to the phenomenon whereby similar stimuli look more alike when classified as members of the same category and more different when classified into different categories. Many studies demonstrate that adult humans show CP for human emotional faces. It is widely debated whether the effect can be accounted for solely by perceptual differences (structural differences among emotional faces) or whether additional perceiver-based conceptual knowledge is required. In this review, I discuss the phenomenon of CP and key studies showing CP for emotional faces. I then discuss a new model of emotion that highlights how perceptual and conceptual knowledge interact to explain how people see discrete emotions in others' faces. In doing so, I discuss how language (the emotion words included in the paradigm) contributes to CP.
Do English speakers think of anger as "red" and sadness as "blue"? Some theories of emotion suggest that colors, like other biologically derived signals, should be reliably paired with an emotion and should differentiate across emotions. We assessed the consistency and specificity of color-emotion pairings among English-speaking adults. In Study 1, participants (n = 73) completed an online survey in which they could select up to three colors from 23 colored swatches (varying in hue, saturation, and lightness) for each of ten emotion words. In Study 2, different participants (n = 52) completed a similar online survey, except that we added additional emotions and colors (which better sampled color space). Participants in both studies indicated the strength of the relationship between a selected color and the emotion. In Study 1, four of the ten emotions showed consistency and about one-third of the colors showed specificity, yet agreement among raters was low to moderate even in these cases. When we resampled our data, however, none of these effects were likely to replicate with statistical confidence. In Study 2, only two of 20 emotions showed consistency, and three colors showed specificity. As in the first study, no color-emotion pairings were both specific and consistent. In addition, in Study 2 we found that agreement was predicted by saturation and lightness, and to a lesser extent hue, rather than by the perceived color itself. These results suggest that the color-emotion pairings reported in previous studies are best thought of as experiment-specific. The results are discussed with respect to constructionist theories of emotion.
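The resampling logic described above can be sketched in miniature. In this hypothetical example (the data, function names, and the 50% agreement threshold are illustrative assumptions, not the authors' actual analysis), raters' color choices for one emotion word are bootstrapped to estimate how often the observed modal-color agreement would replicate:

```python
import random
from collections import Counter

def most_common_share(choices):
    """Fraction of raters who picked the modal color for an emotion."""
    counts = Counter(choices)
    return counts.most_common(1)[0][1] / len(choices)

def bootstrap_consistency(choices, threshold=0.5, n_boot=2000, seed=0):
    """Resample raters with replacement; return the fraction of bootstrap
    samples in which modal-color agreement meets the threshold."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_boot):
        sample = [rng.choice(choices) for _ in choices]
        if most_common_share(sample) >= threshold:
            hits += 1
    return hits / n_boot

# Hypothetical ratings: many raters pair "anger" with red, but agreement is modest
anger_choices = ["red"] * 30 + ["black"] * 20 + ["orange"] * 23

observed = most_common_share(anger_choices)      # modal agreement in the observed sample
replication = bootstrap_consistency(anger_choices)  # how often that level of agreement recurs
```

A modal share of roughly 41% looks like a pairing in the raw data, yet the bootstrap shows it rarely clears a 50% agreement criterion, which is the sense in which such effects may fail to "replicate with statistical confidence."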
Categorical perception (CP) occurs when items in a series of continuously varying stimuli are perceived as belonging to discrete categories. As a result, perceivers are more accurate at discriminating between stimuli from different categories than between stimuli within the same category (Harnad, 1987; Goldstone, 1994). The current experiments investigated whether the structural information in the face is sufficient for CP to occur, or whether a perceiver's conceptual knowledge, acquired through expertise or verbal labeling, also contributes. In two experiments, people who differed in their conceptual knowledge (in the form of expertise, Experiment 1, or verbal label learning, Experiment 2) categorized chimpanzee facial expressions. Expertise alone did not result in enhanced CP. Only when perceivers were first trained to associate faces with a label were they more likely to show CP. Overall, the results suggest that the structural information in the face alone is often insufficient for CP; CP is enhanced by verbal labeling.
It has long been hypothesized that there is a reliable, specific mapping between certain emotional states and the facial movements that express those states. This hypothesis is often tested by asking untrained participants to pose the facial movements they believe they use to express emotions during generic scenarios. Here, we test this hypothesis using, as stimuli, photographs of facial configurations posed by professional actors in response to contextually rich scenarios. The scenarios portrayed in the photographs were rated by a convenience sample of participants for the extent to which they evoked an instance of 13 emotion categories, and the actors' facial poses were coded for their specific movements. Both unsupervised and supervised machine learning analyses indicate that in these photographs, the actors portrayed emotional states with variable facial configurations; instances of only three emotion categories (fear, happiness, and surprise) were portrayed with moderate reliability and specificity. The photographs were separately rated by another sample of participants for the extent to which they portrayed an instance of the 13 emotion categories, both when presented alone and when presented with their associated scenarios, revealing that participants' emotion inferences also vary in a context-sensitive manner. Together, these findings suggest that facial movements and perceptions of emotion vary by situation and transcend stereotypes of emotional expressions. Future research may build on these findings by incorporating dynamic stimuli rather than photographs and by studying a broader range of cultural contexts.