Recently, interest in the neural correlates of self-recognition has grown, with most studies concentrating on self-face recognition. However, there is a lack of convergence as to the precise neuroanatomical locations underlying self-face recognition, and recognition of familiar persons from their bodies has been relatively neglected. In the present study, we measured cerebral activity while participants indicated the real appearance of themselves and of a gender-matched close colleague among intact and altered pictures of faces and bodies. The right frontal cortex and the insula emerged as the main regions specifically implicated in visual self-recognition compared with visual processing of other highly familiar persons. Moreover, the right anterior insula, along with the right anterior cingulate, seemed to play a role in the integration of information about oneself independently of the stimulus domain. Processing of self-related pictures was also compared with scrambled versions of these pictures. Results showed that different areas of the occipito-temporal cortex were recruited to varying degrees depending on whether a face or a body was perceived, as several recent studies have already reported. The implications of the present findings for a general framework of person identification are discussed.
This paper presents a review of studies aimed at determining which brain regions are recruited during visual self-recognition, with a particular focus on self-face recognition. A complex bilateral network involving frontal, parietal and occipital areas appears to be associated with self-face recognition, with a particularly strong implication of the right hemisphere. Results indicate that it remains difficult to determine which specific cognitive operation is reflected by each recruited brain area, in part because of the variability of the control stimuli and experimental tasks used across studies. A synthesis of the interpretations provided by previous studies is presented, and the relevance of using self-recognition as an indicator of self-awareness is discussed. We argue that a major aim of future research in the field should be to identify more clearly the cognitive operations induced by the perception of the self-face, and to search for dissociations between neural correlates and cognitive components.
Previous studies investigating the ability of high-priority stimuli to grab attention have reached contradictory outcomes. The present study used eye tracking to examine the effect of the presence of the self-face among other faces in a visual search task in which face identity was task-irrelevant. We assessed whether the self-face (1) received prioritized selection, (2) made it difficult to disengage attention, and (3) had a differential effect depending on its status as target or distractor. We included another highly familiar face to test whether any effects were self-face specific or could be explained by high familiarity. We found that the self-face interfered with the search task. This was not due to prioritized processing but rather to a difficulty in disengaging attention. Crucially, this effect appeared to stem from the self-face's familiarity, as similar results were obtained with the other familiar face, and it was modulated by the status of the face, being stronger for targets than for distractors.
Even though disgust and fear are both negative emotions, they are characterized by different physiology and action tendencies. The aim of this study was to examine whether fear- and disgust-evoking images would produce different attention bias effects, specifically those related to attention (dis)engagement. Participants were asked to identify a target that was briefly presented around a central image cue, which could be disgusting, frightening, or neutral. The interval between cue onset and target presentation varied within blocks (200, 500, 800, 1100 ms), allowing us to investigate the time course of attention engagement. Accuracy was lower and reaction times were longer when targets quickly (200 ms) followed disgust-evoking images than when they followed neutral or fear-evoking images. For the longer interval conditions, no significant image effects were found. These results suggest that emotion-specific attention effects can be found at very early stages of visual processing and that only disgust-evoking images, not fear-evoking ones, hold our attention for longer. We speculate that this increase in early attention allocation is related to the need to perform a more comprehensive risk assessment of the disgust-evoking images. The outcomes underline not only the importance of examining the time course of emotion-induced attention effects but also the need to look beyond the dimensions of valence and arousal.