Unlike frozen snapshots of facial expressions that we often see in photographs, natural facial expressions are dynamic events that unfold in a particular fashion over time. But how important are the temporal properties of expressions for our ability to reliably extract information about a person's emotional state? We addressed this question experimentally by gauging human performance in recognizing facial expressions with varying temporal properties relative to that of a statistically optimal ("ideal") observer. We found that people recognized emotions just as efficiently when viewing them as naturally evolving dynamic events, temporally reversed events, temporally randomized events, or single images frozen in time. Our results suggest that the dynamic properties of human facial movements may play a surprisingly small role in people's ability to infer the emotional states of others from their facial expressions.
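The efficiency comparison described above is conventionally computed as the ratio of the ideal observer's threshold signal energy to the human observer's threshold energy. The abstract does not give its exact formula, so the sketch below uses this standard definition as an assumption; the function name is ours.

```python
def absolute_efficiency(ideal_threshold_energy: float,
                        human_threshold_energy: float) -> float:
    """Efficiency of a human observer relative to an ideal observer.

    Both arguments are threshold contrast energies (the signal energy
    needed to reach a criterion level of recognition accuracy). Because
    the ideal observer needs less energy than any real observer, the
    ratio falls between 0 and 1.
    """
    return ideal_threshold_energy / human_threshold_energy

# Example: if the ideal observer reaches threshold at energy 1.0 and a
# human needs energy 4.0, the human uses 25% of the available information.
print(absolute_efficiency(1.0, 4.0))  # → 0.25
```

Comparing efficiencies across conditions (dynamic, reversed, randomized, static) rather than raw accuracies factors out differences in the information each stimulus physically contains.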
Why do faces become easier to recognize with repeated exposure? Previous research has suggested that familiarity may induce a qualitative shift in visual processing from an independent analysis of individual facial features to an analysis that includes information about the relationships amongst features (Farah, Wilson, Drain, & Tanaka, 1998; Maurer, Le Grand, & Mondloch, 2002). We tested this idea by using a ‘summation-at-threshold’ technique (Gold, Mundy, & Tjan, 2012; Nandy & Tjan, 2008), in which an observer's ability to recognize each individual facial feature shown independently is used to predict their ability to recognize all of the features shown in combination. We found that, although people are better overall at recognizing familiar than unfamiliar faces, their ability to integrate information across features is similar for unfamiliar and highly familiar faces and is well predicted by their ability to recognize each of the facial features shown in isolation. These results are consistent with the idea that familiarity has a quantitative effect on the efficiency with which information is extracted from individual features, rather than a qualitative effect on the process by which features are combined.
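The summation-at-threshold prediction can be sketched as an integration index. Assuming sensitivity is defined as the reciprocal of the contrast-energy threshold (the abstract does not spell out the formula, so this definition and the function names are our assumptions), an observer who combines independent feature information optimally should show a whole-face sensitivity equal to the sum of the part sensitivities, yielding an index near 1.

```python
def integration_index(part_thresholds: list[float],
                      whole_threshold: float) -> float:
    """Compare whole-stimulus performance to the parts-based prediction.

    Sensitivity is taken as 1 / contrast-energy threshold. An observer
    who integrates the independent features optimally has a whole-face
    sensitivity equal to the summed part sensitivities (index = 1);
    values below 1 indicate sub-optimal integration.
    """
    predicted_sensitivity = sum(1.0 / e for e in part_thresholds)
    observed_sensitivity = 1.0 / whole_threshold
    return observed_sensitivity / predicted_sensitivity

# Example: two features each measured at threshold energy 2.0 predict a
# whole-face threshold of 1.0; observing exactly that gives an index of 1.
print(integration_index([2.0, 2.0], 1.0))  # → 1.0
```

On this formulation, the abstract's finding amounts to familiar and unfamiliar faces producing similar integration indices, with familiarity instead lowering the individual feature thresholds themselves.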
The impact of context on perception has been well documented for over a century. In some cases, the introduction of context to a set of target features may produce a unified percept, leading to a quicker and more accurate classification: a configural superiority effect (Pomerantz, Sager, & Stoever, 1977). Although this effect has been well characterized in terms of the stimulus features that produce it, the specific impact context has on the spatial strategies adopted by observers when making perceptual judgments remains unclear. Here, we sought to address this question by using the methods of response classification and ideal observer analysis. In our main experiment, we used a stimulus set known to produce the configural superiority effect and found that although observers were faster in the presence of context, they were actually less efficient at extracting stimulus information. This surprising result was attributable to the use of a spatial strategy in which observers relied on redundant, noninformative features in the presence of context. A control experiment ruled out the possibility that the mere presence of added context led to these strategic shifts. Our results support previous notions about the nature of the perceptual shifts that are induced by the configural superiority effect. However, they also show that configural processing is more nuanced than originally thought: Although observers may be faster at making judgments when context induces the percept of a configural whole, there appears to be a hidden cost in terms of the efficiency with which information is used.