Perceptual systems face competing requirements: improving the signal-to-noise ratio of noisy images, by integration; and maximising sensitivity to change, by differentiation. Both processes occur in human vision, under different circumstances: the first has been termed priming, or serial dependence, and leads to positive sequential effects; the second, adaptation or habituation, leads to negative sequential effects. We reasoned that for stable attributes, such as the identity and gender of faces, the system should integrate, while for changeable attributes like facial expression it should also engage contrast mechanisms to maximise sensitivity to change. Subjects viewed a sequence of images varying simultaneously in gender and expression, and scored each as male or female, and happy or sad. We found strong and consistent positive serial dependence for gender, and negative serial dependence for expression, showing that both processes can operate at the same time, on the same stimuli, depending on the attribute being judged. The results point to highly sophisticated mechanisms for optimising the use of past information, either by integration or differentiation, depending on the permanence of the attribute.
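As an illustrative sketch only (not the authors' analysis pipeline), a lag-1 dependence of this kind can be quantified by comparing the probability of each binary response conditioned on the class of the preceding stimulus. The data below are simulated and the function name is hypothetical:

```python
import numpy as np

def lag1_dependence(prev_stim, responses):
    """Difference in P(response = 1) conditioned on the class of the
    preceding stimulus: positive values indicate assimilation toward
    the previous stimulus (priming / serial dependence), negative
    values indicate repulsion (adaptation)."""
    prev_stim = np.asarray(prev_stim, dtype=bool)
    responses = np.asarray(responses, dtype=bool)
    return responses[prev_stim].mean() - responses[~prev_stim].mean()

rng = np.random.default_rng(0)
n = 500

# Simulated gender judgements biased TOWARD the previous stimulus.
prev_gender = rng.integers(0, 2, n)                    # 0 = male, 1 = female
gender_resp = np.where(rng.random(n) < 0.7, prev_gender, 1 - prev_gender)
print(lag1_dependence(prev_gender, gender_resp))       # > 0: positive dependence

# Simulated expression judgements biased AWAY from the previous stimulus.
prev_expr = rng.integers(0, 2, n)                      # 0 = sad, 1 = happy
expr_resp = np.where(rng.random(n) < 0.7, 1 - prev_expr, prev_expr)
print(lag1_dependence(prev_expr, expr_resp))           # < 0: negative dependence
```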
The human brain is specialized for face processing, yet we sometimes perceive illusory faces in objects. It is unknown whether these natural errors of face detection originate from a rapid process based on visual features or from a slower, cognitive re-interpretation. Here we use a multifaceted approach to understand both the spatial distribution and temporal dynamics of illusory face representation in the brain, combining functional magnetic resonance imaging and magnetoencephalography data with model-based analysis. We find that the representation of illusory faces is confined to occipital-temporal face-selective visual cortex. The temporal dynamics reveal a striking evolution in how illusory faces are represented relative to human faces and matched objects. Illusory faces are initially represented more similarly to real faces than matched objects are, but within ~250 ms the representation transforms and they become equivalent to ordinary objects. This is consistent with the initial recruitment of a broadly tuned face detection mechanism that privileges sensitivity over selectivity.
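A minimal sketch of one way such a time course could be summarized, assuming trial-averaged MEG sensor patterns per category; this toy correlation index is an assumption for illustration, not the model-based analysis used in the study:

```python
import numpy as np

def face_likeness_timecourse(illusory, faces, objects):
    """At each time point, correlate the mean sensor pattern evoked by
    illusory-face images with the mean pattern for real faces and for
    matched objects; the difference indexes how face-like the illusory
    representation is over time.  Inputs: (n_trials, n_sensors, n_times)."""
    n_times = illusory.shape[-1]
    index = np.empty(n_times)
    for t in range(n_times):
        i = illusory[..., t].mean(axis=0)
        f = faces[..., t].mean(axis=0)
        o = objects[..., t].mean(axis=0)
        index[t] = np.corrcoef(i, f)[0, 1] - np.corrcoef(i, o)[0, 1]
    return index  # positive early, near zero once the representation
                  # becomes object-like (~250 ms on the account above)
```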
Millions of people use online dating sites each day, scanning through streams of face images in search of an attractive mate. Face images, like most visual stimuli, undergo processes whereby the current percept is altered by exposure to previous visual input. Recent studies using rapid sequences of faces have found that perception of face identity is biased towards recently seen faces, promoting identity-invariance over time, and this has been extended to perceived face attractiveness. In this paper we adapt the rapid sequence task to ask a question about mate selection pertinent in the digital age. We designed a binary task mimicking the selection interface popular in online dating websites: observers made a binary decision (attractive or unattractive) about each face in a sequence of unfamiliar faces. Our findings show that these binary attractiveness decisions are not independent: we are more likely to rate a face as attractive when the preceding face was attractive than when it was unattractive.
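One simple, hypothetical way to test whether such binary ratings are sequentially dependent is a lag-1 contingency analysis on consecutive responses; the odds-ratio helper below is an illustrative assumption, not the statistics reported in the paper:

```python
import numpy as np

def sequential_odds_ratio(ratings):
    """Odds of rating trial t 'attractive' (1) given trial t-1 was rated
    attractive, relative to when trial t-1 was rated unattractive (0).
    OR > 1 indicates assimilative sequential dependence.  A Haldane
    correction (+0.5 per cell) guards against empty cells."""
    r = np.asarray(ratings, dtype=int)
    prev, curr = r[:-1], r[1:]
    n11 = np.sum((prev == 1) & (curr == 1)) + 0.5
    n10 = np.sum((prev == 1) & (curr == 0)) + 0.5
    n01 = np.sum((prev == 0) & (curr == 1)) + 0.5
    n00 = np.sum((prev == 0) & (curr == 0)) + 0.5
    return (n11 * n00) / (n10 * n01)

# Example: a "sticky" (assimilative) rating sequence yields OR > 1.
print(sequential_odds_ratio([1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0]))
```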
Face perception in humans and non-human primates is rapid and accurate [1–4]. In the human brain, a network of visual processing regions is specialized for faces [5–7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face processing system [8–10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5–7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction, or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face detection mechanism that we share with other species.
A large body of research supports the hypothesis that the human visual system does not process a face as a collection of separable facial features but as an integrated perceptual whole. One common assumption is that we quickly build holistic representations to extract useful second-order information provided by the variation between the faces of different individuals. An alternative account suggests holistic processing is a fast, early grouping process that first serves to distinguish faces from other competing objects. From this perspective, holistic processing is a quick initial response to the first-order information present in every face. To test this hypothesis we developed a novel paradigm for measuring the face inversion effect, a standard marker of holistic face processing, based on the minimum exposure time required to discriminate between two stimuli. These new data demonstrate that holistic processing operates on whole upright faces, regardless of whether subjects are required to extract first- or second-order information. In light of this, we argue that holistic processing is a general mechanism that may occur at an earlier stage of face perception than individual discrimination to support the rapid detection of face stimuli in everyday visual scenes.
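As a rough sketch of how a minimum-exposure threshold might be estimated (the paper's exact procedure is not given here; the staircase parameters and simulated observer below are assumptions), an adaptive 2-down/1-up staircase on stimulus duration converges near the ~71%-correct threshold:

```python
import numpy as np

def min_exposure_threshold(respond, start_ms=200.0, step=0.8, n_trials=60):
    """Toy 2-down/1-up staircase on exposure duration: shorten the
    stimulus after two consecutive correct responses, lengthen it after
    an error.  Converges near the ~71%-correct duration threshold.
    `respond(duration_ms)` returns True if the discrimination was correct."""
    dur, streak, history = start_ms, 0, []
    for _ in range(n_trials):
        history.append(dur)
        if respond(dur):
            streak += 1
            if streak == 2:          # two correct in a row -> harder (shorter)
                dur *= step
                streak = 0
        else:                        # error -> easier (longer)
            dur /= step
            streak = 0
    return float(np.mean(history[-20:]))  # threshold estimate from late trials

# Simulated observer whose accuracy rises with exposure duration.
rng = np.random.default_rng(1)
observer = lambda ms: rng.random() < 1 - 0.5 * np.exp(-ms / 40.0)
print(min_exposure_threshold(observer))
```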
In free-viewing experiments, primates orient preferentially toward faces and face-like stimuli. To investigate the neural basis of this behavior, we measured the spontaneous viewing preferences of monkeys with selective bilateral amygdala lesions. When faces and nonface objects were presented simultaneously, monkeys with amygdala lesions showed no viewing preference for either conspecific faces or illusory facial features in everyday objects. Rather than directing their eye movements toward socially relevant features in natural images, monkeys were biased, after amygdala loss, toward features with increased low-level salience. We conclude that the amygdala plays a role in our earliest specialized response to faces, a behavior thought to be a precursor for efficient social communication and essential for the development of face-selective cortex.
It is widely believed that face processing in the primate brain occurs in a network of category-selective cortical regions. Combined functional MRI (fMRI) and single-cell recording studies in macaques have identified high concentrations of neurons that respond more to faces than to objects within face-selective patches. However, cells with a preference for faces over objects are also found scattered throughout inferior temporal (IT) cortex, raising the question of whether face-selective cells inside and outside the face patches differ functionally. Here, we compare the properties of face-selective cells inside and outside face-selective patches in IT cortex by means of an image manipulation that reliably disrupts face-processing behavior: inversion. We recorded IT neurons from two fMRI-defined face patches (ML and AL) and a region outside the face patches (herein labeled OUT) during upright and inverted face stimulation. Overall, turning faces upside down reduced the firing rate of face-selective cells. However, there were differences among the recording regions. First, the reduced neuronal response to inverted faces was independent of stimulus position relative to fixation in the face patches (ML and AL) only. Additionally, the effect of inversion for face-selective cells in ML, but not those in AL or OUT, was unaffected by whether the neurons were initially searched for using upright or inverted stimuli. Collectively, these results show that face-selective cells differ in their functional characteristics depending on their anatomico-functional location, suggesting that upright faces are preferentially coded by face-selective cells inside, but not outside, the fMRI-defined face-selective regions of posterior IT cortex.
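A common way to summarize such an effect per neuron is a normalized face-inversion index; this is a sketch under the assumption of trial-averaged evoked firing rates, not necessarily the exact measure used in the study:

```python
import numpy as np

def inversion_index(upright_hz, inverted_hz):
    """Per-neuron inversion index: (upright - inverted) / (upright + inverted),
    computed on mean evoked firing rates (spikes/s).  Positive values mean
    the cell fires more to upright faces; 0 means no inversion effect."""
    up = np.asarray(upright_hz, dtype=float)
    inv = np.asarray(inverted_hz, dtype=float)
    return (up - inv) / (up + inv)

# Example: three hypothetical cells, e.g. recorded in ML, AL, and OUT.
print(inversion_index([30.0, 22.0, 15.0], [18.0, 16.0, 14.0]))
```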
Almost all previous studies of face recognition have found that matching the same face depicted from different viewpoints incurs both reaction-time and accuracy costs. This has been interpreted as evidence that the underlying neural representations of faces are viewpoint-specific, but such a conclusion depends on the experimental data accurately reflecting real-world viewpoint generalisation. An equally plausible explanation for poor viewpoint generalisation in experimental settings is that important information normally used to generalise across views in the real world is unavailable in the experiment. In particular, stereoscopic information about the three-dimensional structure of the face is systematically misleading in nearly all previous investigations of face recognition: a face depicted on a computer monitor carries explicit stereoscopic information that the face is flat. The current experiment demonstrates that viewpoint costs are reduced when the face is depicted with stereoscopic three-dimensionality (compared with a synoptically presented face), raising the possibility that the viewpoint costs found in face recognition experiments reflect the information that is typically missing from the experimental stimuli more than the underlying neural representation of facial identity.