Faces convey information essential for social interaction. Their importance has prompted suggestions that some facial features may be processed unconsciously. Although some studies have provided empirical support for this idea, it remains unclear whether these findings were due to perceptual processing or to post-perceptual decisional factors. Evidence for unconscious processing of facial features has predominantly come from the Breaking Continuous Flash Suppression (b-CFS) paradigm, which measures the time it takes different stimuli to overcome interocular suppression. For example, previous studies have found that upright faces are reported faster than inverted faces, and direct-gaze faces are reported faster than averted-gaze faces. However, this procedure suffers from important problems: observers can decide how much information they receive before committing to a report, so their detection responses may be influenced by differences in decision criteria and by stimulus identification. Here, we developed a new procedure that uses predefined exposure durations, enabling independent measurement of perceptual sensitivity and decision criteria. We found higher detection sensitivity to both upright and direct-gaze (compared to inverted and averted-gaze) faces, with no effects on decisional factors. For identification, we found both greater sensitivity and more liberal criteria for upright faces. Our findings demonstrate that face orientation and gaze direction influence perceptual sensitivity, indicating that these facial features may be processed unconsciously.
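The abstract's distinction between perceptual sensitivity and decision criterion follows standard signal detection theory. A minimal sketch of how these two quantities are separated (not the authors' actual analysis code; the log-linear correction for extreme rates is one common convention, and all names are illustrative):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection sensitivity (d') and criterion (c).

    A log-linear correction (add 0.5 to each count) keeps hit and
    false-alarm rates away from 0 and 1, which would otherwise
    produce infinite z-scores.
    """
    h = (hits + 0.5) / (hits + misses + 1)                    # corrected hit rate
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf                                  # inverse normal CDF
    d_prime = z(h) - z(f)                                     # perceptual sensitivity
    criterion = -0.5 * (z(h) + z(f))                          # decision criterion (bias)
    return d_prime, criterion
```

With predefined exposure durations, a higher d' for upright faces reflects perceptual sensitivity, while any shift would instead appear in c; this is what lets the two be measured independently.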
The dialects theory of cross-cultural communication suggests that, owing to culture-specific characteristics in the expression of emotion, we recognise own-culture emotional expressions more accurately than other-culture emotional expressions. This effect is thought to arise from the non-convergent social evolution that takes place in different geographical regions. Based on the evolutionary value of own-culture social signals, previous research has suggested that own-culture emotional expressions can be appraised without conscious awareness. The current study tested this hypothesis. We developed, validated, and made openly available what is, to our knowledge, the first labelled multicultural facial stimulus set, including both freely expressed and Facial Action Coding System-instructed emotional expressions. We assessed emotional-recognition and cultural-familiarity responses during brief backward-masked presentations in British participants. We found that emotional recognition and cultural familiarity were higher for own-culture faces. A Bayesian analysis of face-detection and emotional-recognition performance revealed that faces were not processed subliminally. Further analysis of awareness, using hits (correct detection/recognition) and misses (incorrect detection/recognition), showed that face-detection hits were a necessary condition for reporting higher familiarity for own-culture faces. These findings suggest that the own-culture emotional-recognition advantage is preserved under backward masking and that the appraisal of cultural familiarity involves conscious awareness.
The theory of universal emotions suggests that certain emotions, such as fear, anger, disgust, sadness, surprise and happiness, are found across cultures. These emotions are expressed using specific facial movements that enable human communication. More recently, theoretical and empirical models have proposed that universal emotions could be expressed via discretely different facial movements in different cultures, owing to the non-convergent social evolution that takes place in different geographical areas. This has prompted the suggestion that own-culture emotional faces carry distinct, evolutionarily important sociobiological value and can be processed automatically, without conscious awareness. In this paper, we tested this hypothesis using backward masking. In two experiments per country of origin, we showed backward-masked own- and other-culture emotional faces to participants in Britain, Chile, New Zealand and Singapore. We assessed detection and recognition performance, and self-reports of emotionality and familiarity. Using Bayesian assessment of non-parametric receiver operating characteristics and hit-versus-miss detection and recognition response analyses, we present thorough cross-cultural experimental evidence that masked faces showing own-culture dialects of emotion were rated higher for emotionality and familiarity than other-culture emotional faces, and that this effect involved conscious awareness.
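The non-parametric receiver operating characteristic assessment mentioned above can be summarised by the area under the ROC curve, which has a distribution-free interpretation. A minimal sketch under that interpretation (illustrative only; the study's actual Bayesian analysis is not reproduced here):

```python
def auc_nonparametric(signal_ratings, noise_ratings):
    """Non-parametric area under the ROC curve.

    Equals the probability that a randomly chosen signal-present
    rating exceeds a randomly chosen signal-absent rating, with
    ties counted as 0.5 (the Mann-Whitney U statistic scaled to
    the interval [0, 1]). 0.5 indicates chance-level detection.
    """
    pairs = [(s, n) for s in signal_ratings for n in noise_ratings]
    score = sum(1.0 if s > n else 0.5 if s == n else 0.0 for s, n in pairs)
    return score / len(pairs)
```

An AUC credibly above 0.5 for masked-face detection would indicate residual awareness, which is the logic behind using detection performance to test subliminality claims.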
Santiago Ramón y Cajal (1852–1934) contributed to more than neurobiology and neurohistology. At the end of the 19th century, he published one of the first clinical reports on the use of hypnotic suggestion to induce analgesia (hypnoanalgesia) in order to relieve pain in childbirth. Today, the clinical application of hypnoanalgesia is considered an effective technique for the treatment of pain in medicine, dentistry, and psychology. However, our knowledge of the neural and cognitive underpinnings of hypnotic suggestion has increased dramatically since Cajal's time. Here we review Cajal's main contributions to hypnoanalgesia and our current knowledge of hypnoanalgesia from neural and cognitive perspectives.
Understanding faces and their emotional expressions is essential for social interaction. Past studies have prompted suggestions that some facial features may be processed unconsciously. Evidence for such unconscious processing has predominantly come from the Breaking Continuous Flash Suppression (b-CFS) paradigm, which measures the time it takes different stimuli to overcome interocular suppression. For instance, it has been claimed that suppressed fearful expressions are detected faster than neutral expressions. However, in the b-CFS procedure, observers can decide how much information they receive before committing to a report, so their detection responses may be influenced by differences in decision criteria and by stimulus identification. Here, we used a procedure that addresses these problems by using predefined exposure durations and measuring sensitivity and decision criteria for both detection and identification of facial expressions. We found that neither angry nor fearful expressions enjoy higher sensitivity in detection than happy or neutral expressions as they enter awareness. To test whether our procedure was sensitive to face-related effects, we combined our test of emotional expression with a test of face inversion. While upright faces enjoyed higher sensitivity in detection than inverted faces, again emotional expressions did not enjoy higher sensitivity than neutral or happy expressions. Finally, we measured detection thresholds using a staircase procedure but did not find differences between emotional expressions. Our findings cast doubt on past claims that emotional expressions enjoy prioritised access to awareness and call for the development of more stringent procedures.
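The staircase procedure mentioned above adaptively adjusts stimulus strength (e.g. exposure duration) based on the observer's responses until it converges on a detection threshold. A minimal two-down/one-up sketch (one common variant, converging near 70.7% correct; all names and step sizes are illustrative, not the study's actual parameters):

```python
class Staircase:
    """Two-down/one-up adaptive staircase for threshold estimation."""

    def __init__(self, start, step):
        self.level = start        # current stimulus level (e.g. duration in ms)
        self.step = step          # fixed step size
        self.correct_run = 0      # consecutive correct responses
        self.reversals = []       # levels at which direction reversed
        self._last_dir = 0

    def update(self, correct):
        """Register one trial outcome and adjust the level."""
        if correct:
            self.correct_run += 1
            if self.correct_run == 2:   # two in a row correct -> make it harder
                self.correct_run = 0
                self._move(-1)
        else:                           # any error -> make it easier
            self.correct_run = 0
            self._move(+1)

    def _move(self, direction):
        if self._last_dir and direction != self._last_dir:
            self.reversals.append(self.level)   # direction change = reversal
        self._last_dir = direction
        self.level = max(0, self.level + direction * self.step)

    def threshold(self, last_n=6):
        """Average level at the last few reversals estimates the threshold."""
        tail = self.reversals[-last_n:]
        return sum(tail) / max(1, len(tail))
```

Running separate staircases per expression and comparing the resulting thresholds is one way to test whether any expression is detected at weaker stimulation than the others.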
Mental imagery is the process through which we retrieve and recombine information from our memory to elicit the subjective impression of “seeing with the mind’s eye”. In the social domain, we imagine other individuals while recalling our encounters with them or modelling alternative social interactions in the future. Many studies using imaging and neurophysiological techniques have shown several similarities in brain activity between visual imagery and visual perception, and have identified frontoparietal, occipital and temporal neural components of visual imagery. However, the neural connectivity between these regions during visual imagery of socially relevant stimuli has not been studied. Here we used electroencephalography to investigate neural connectivity and its dynamics between frontal, parietal, occipital and temporal electrodes during visual imagery of faces. We found that voluntary visual imagery of faces is associated with long-range phase synchronisation in the gamma frequency range between frontoparietal electrode pairs and between occipitoparietal electrode pairs. In contrast, no effect of imagery was observed in the connectivity between occipitotemporal electrode pairs. Gamma-range synchronisation between occipitoparietal electrode pairs predicted subjective ratings of the contour definition of imagined faces. Furthermore, we found that visual imagery of faces is associated with an increase in short-range frontal synchronisation in the theta frequency range, which temporally preceded the long-range increase in gamma synchronisation. We speculate that the local frontal synchrony in the theta frequency range might be associated with an effortful top-down mnemonic reactivation of faces. In contrast, the long-range connectivity in the gamma frequency range along the fronto-parieto-occipital axis might be related to the endogenous binding and subjective clarity of facial visual features.
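Phase synchronisation between electrode pairs of the kind described above is commonly quantified by the phase-locking value (PLV; Lachaux et al., 1999). A minimal sketch of the core computation, given instantaneous phases already extracted (e.g. via a Hilbert transform of band-filtered signals); this illustrates the measure, not the study's actual pipeline:

```python
import cmath
import math

def phase_locking_value(phases_a, phases_b):
    """Phase-locking value between two instantaneous-phase series (radians).

    PLV = |mean of exp(i * (phi_a - phi_b))| over trials or samples.
    1 means the phase difference is perfectly constant (full locking);
    0 means the phase differences are uniformly scattered.
    """
    n = len(phases_a)
    total = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(total) / n
```

Comparing PLV between imagery and baseline periods, per frequency band and electrode pair, is what yields statements such as increased gamma-range frontoparietal synchronisation during face imagery.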