The COVID-19 pandemic has dramatically changed the nature of our social interactions. To understand how protective equipment and distancing measures influence the ability to comprehend others’ emotions and, thus, to interact effectively with others, we carried out an online study across the Italian population during the first pandemic peak. Participants were shown static facial expressions (Angry, Happy and Neutral) covered by a sanitary mask or by a scarf. They were asked to evaluate the expressed emotions as well as to assess the degree to which one would adopt physical and social distancing measures for each stimulus. Results demonstrate that, despite the covering of the lower face, participants correctly recognized the facial expressions of emotions, with a polarizing effect on emotional valence ratings found in females. Notably, while females’ ratings for physical and social distancing were driven by the emotional content of the stimuli, males were influenced by the “covered” condition. The results also show the impact of the pandemic on the anxiety and fear experienced by participants. Taken together, our results offer novel insights into the impact of the COVID-19 pandemic on social interactions, providing a deeper understanding of the way people react to different kinds of protective face covering.
Facial expressions are of major importance in understanding the mental and emotional states of others. So far, most studies on the perception and comprehension of emotions have used isolated facial expressions as stimuli; for example, photographs of actors displaying facial expressions corresponding to one of the so-called ‘basic emotions.’ However, our real experience during social interactions is different: facial expressions of emotion are mostly perceived in a wider context, constituted by body language, the surrounding environment, and our beliefs and expectations. Already in the early twentieth century, the Russian filmmaker Lev Kuleshov argued that such context, established by intermediate shots of strong emotional content, could significantly change our interpretation of facial expressions in film. Prior experiments have shown behavioral effects pointing in this direction, but have only used static images as stimuli. Our study used a more ecological design, with participants watching film sequences of neutral faces crosscut with scenes of strong emotional content (evoking happiness or fear, plus neutral stimuli as a baseline condition). The task was to rate the emotion displayed by a target person’s face in terms of valence, arousal, and category. Results clearly demonstrated the presence of a significant effect in terms of both valence and arousal in the fear condition only. Moreover, participants tended to categorize the target person’s neutral facial expression by choosing the emotion category congruent with the preceding context. Our results highlight the context-sensitivity of emotions and the importance of studying them under ecologically valid conditions.
Cardiac synchrony is a crucial component of shared experiences, considered an objective measure of the emotional processes accompanying empathic interactions. However, no study has investigated whether cardiac synchrony among people engaged in collective situations is linked to the individual emotional evaluation of the shared experience. We investigated theatrical live performances as collective experiences evoking strong emotional engagement in the audience. Cross Recurrence Quantification Analysis was applied to obtain the cardiac synchrony of twelve quartets of spectators attending two live acting performances. This physiological measure was then correlated with the spectators' emotional intensity ratings. Results showed the expected increment in synchrony among people belonging to the same quartet during both performance attendance and rest periods. Furthermore, participants' cardiac synchrony was found to be correlated with the audience's convergence in the explicit emotional evaluation of the performances they attended. These findings demonstrate that the mere co-presence of other people sharing a common experience is enough for cardiac synchrony to occur spontaneously, and that it increases as a function of a shared and coherent explicit emotional experience. Collective experiences are ubiquitous aspects of human culture where emotional activity finds its breeding epicenter. Being involved in collective events, like religious rituals, parades or team sports, fosters prosociality [1] and contributes to the creation of emotional bonds among group members [2]. Coherently, it has been shown that the presence of others modulates how individuals feel, express and perceive emotions [3]. Amongst the various group experiences populating our social life, specific forms of "collective art" can be included. Indeed, even if some forms of art can be described as "individual", in the sense that people usually enjoy them alone, others are commonly appreciated together with other people.
This is the case for several performative arts (e.g., theatre, dance, music) in which people share the artistic experience as part of a group. Studies show that people react differently to these forms of art depending on whether they enjoy them alone or with others, suggesting that the presence of others plays a role. For example, at the autonomic level, people react more to music when they are alone than when they are in a group [4,5]. In contrast, watching an aggressive movie clip modulates spectators' subsequent behavior, depending on the attitude of the co-spectators [6]. Furthermore, how people react to the mere presence of others influences their parasympathetic response to a shared filmic experience [7]. One interesting component of collective experiences, which has recently drawn increasing attention, is the spontaneous synchronization among group members. Synchrony is a well-known phenomenon occurring at different levels (e.g., behavioral, physiological and neural), and it is related to the presence of similar reactions among group members within a short period of time. For example, people...
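The cross-recurrence measure used in the cardiac-synchrony study can be sketched in miniature. The following is a minimal illustration, not the authors' actual pipeline: it computes the percentage of cross-recurrent points between two z-scored signals, a coarse index of shared dynamics. The radius threshold and the absence of delay embedding are simplifying assumptions.

```python
import numpy as np

def cross_recurrence_rate(x, y, radius=0.1):
    """Percent cross-recurrence (%REC) between two 1-D signals.

    A point (i, j) counts as recurrent when the i-th sample of x and the
    j-th sample of y lie within `radius` of each other after z-scoring;
    the fraction of recurrent points is a coarse synchrony index.
    """
    x = (x - x.mean()) / x.std()          # z-score each series
    y = (y - y.mean()) / y.std()
    dist = np.abs(x[:, None] - y[None, :])  # all pairwise distances
    return (dist < radius).mean() * 100.0   # % of recurrent points

# Toy example: two noisy copies of the same slow oscillation vs. noise
t = np.linspace(0, 10, 500)
rng = np.random.default_rng(0)
a = np.sin(t) + 0.1 * rng.standard_normal(t.size)
b = np.sin(t) + 0.1 * rng.standard_normal(t.size)
c = rng.standard_normal(t.size)           # unrelated signal

print(cross_recurrence_rate(a, b))        # higher: shared dynamics
print(cross_recurrence_rate(a, c))        # lower: no shared dynamics
```

Full CRQA would additionally use delay embedding and derived measures such as determinism; this sketch only conveys the core idea of thresholded cross-distances.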
In this study, the neural mechanism subserving the ability to understand people’s emotional and mental states by observing their body language (facial expressions, body posture and gestures) was investigated in healthy volunteers. ERPs were recorded in 30 Italian university students while they evaluated 280 pictures of highly ecological displays of emotional body language acted out by 8 male and female Italian actors. Pictures were briefly flashed and preceded by short verbal descriptions (e.g., “What a bore!”) that were incongruent half of the time (e.g., a picture of a very attentive and concentrated person shown after the previous example verbal description). ERP data and source reconstruction indicated that the first recognition of incongruent body language occurred 300 ms post-stimulus. swLORETA performed on the N400 identified the strongest generators of this effect in the right rectal gyrus (BA11) of the ventromedial orbitofrontal cortex, the bilateral uncus (limbic system) and the cingulate cortex, the cortical areas devoted to face and body processing (STS, FFA, EBA) and the premotor cortex (BA6), which is involved in action understanding. These results indicate that facial and bodily expressions undergo prioritized processing that is mostly represented in the affective brain and is rapidly compared with verbal information. This process is likely able to regulate social interactions by providing online information about the sincerity and trustworthiness of others.
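Component measures like the N400 effect described above are typically obtained by averaging baseline-corrected epochs and taking the mean voltage within a component time window. A minimal single-channel sketch follows; the sampling rate, window, and synthetic data are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def component_mean_amplitude(epochs, fs, tmin, window):
    """Mean ERP amplitude within a time window.

    epochs: (n_trials, n_samples) baseline-corrected single-channel data;
    fs:     sampling rate in Hz;
    tmin:   epoch start time (s) relative to stimulus onset;
    window: (start, end) in seconds, e.g. (0.35, 0.55) for an N400 window.
    """
    erp = epochs.mean(axis=0)                    # trial average -> ERP
    i0 = int(round((window[0] - tmin) * fs))
    i1 = int(round((window[1] - tmin) * fs))
    return erp[i0:i1].mean()

# Synthetic demo: incongruent trials carry an extra negativity ~300-500 ms
fs, tmin = 250, -0.2
t = np.arange(int(1.0 * fs)) / fs + tmin         # -0.2 .. 0.8 s
rng = np.random.default_rng(1)
n400 = -3.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))  # fake component
congruent = rng.standard_normal((40, t.size))
incongruent = rng.standard_normal((40, t.size)) + n400

effect = (component_mean_amplitude(incongruent, fs, tmin, (0.35, 0.55))
          - component_mean_amplitude(congruent, fs, tmin, (0.35, 0.55)))
print(effect)  # negative: an N400-like incongruence effect
```

Source localization such as swLORETA then operates on the scalp topography of this difference, a step well beyond this sketch.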
In spite of their striking differences from real-life perception, films are perceived and understood without effort. Cognitive film theory attributes this to the system of continuity editing, a set of editing guidelines outlining the effect of different cuts and edits on spectators. A major principle in this framework is the 180° rule, a recommendation that, to avoid drawing spectators' attention to the editing, two edited shots of the same event or action should not be filmed from angles differing in a way that strongly violates expectations of spatial continuity. In the present study, we used high-density EEG to explore the neural underpinnings of this rule. In particular, our analysis shows that cuts and edits in general elicit early ERP components indicating the registration of syntactic violations as known from language, music, and action processing. However, continuity edits and cuts across the line differ from each other regarding later components, likely indicating differences in spatial remapping as well as in the degree of conscious awareness of one's own perception. Interestingly, a time-frequency analysis of the occipital alpha rhythm did not support the hypothesis that such differences in processing routes are mainly linked to visual attention. On the contrary, our study found specific modulations of the central mu rhythm ERD as an indicator of sensorimotor activity, suggesting that sensorimotor networks might play an important role. We think that these findings shed new light on current discussions about the role of attention and embodied perception in film perception and should be considered when explaining spectators' different experiences of different kinds of cuts. Correspondence should be sent to Katrin S. Heimann.
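The mu/alpha event-related desynchronization (ERD) measure mentioned above is commonly computed as the percent power change relative to a pre-stimulus baseline. A minimal sketch, assuming a band-pass plus Hilbert-envelope approach on a single channel; the band edges, sampling rate, and baseline window are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def erd_percent(epochs, fs, band, baseline=(0.0, 0.5)):
    """ERD/ERS time course as percent power change from baseline.

    epochs:   (n_trials, n_samples) single-channel data, time-locked
              to the event of interest;
    band:     (low, high) in Hz, e.g. (8, 13) for alpha or mu;
    baseline: (start, end) in seconds within the epoch.
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")
    filt = filtfilt(b, a, epochs, axis=1)        # zero-phase band-pass
    power = np.abs(hilbert(filt, axis=1)) ** 2   # instantaneous power
    avg = power.mean(axis=0)                     # average over trials
    i0, i1 = (int(s * fs) for s in baseline)
    ref = avg[i0:i1].mean()                      # baseline power
    return (avg - ref) / ref * 100.0             # negative = ERD

# Synthetic demo: 10 Hz oscillation whose amplitude drops after 1 s
fs = 250
t = np.arange(2 * fs) / fs                       # 2-s epochs
rng = np.random.default_rng(2)
amp = np.where(t < 1.0, 1.0, 0.3)                # "desynchronization"
epochs = (amp * np.sin(2 * np.pi * 10 * t)
          + 0.05 * rng.standard_normal((20, t.size)))
erd = erd_percent(epochs, fs, band=(8, 13))
print(erd[int(1.5 * fs):].mean())                # strongly negative
```

Real EEG pipelines would add artifact rejection and compute this per electrode and frequency bin; the Hilbert envelope here stands in for a full time-frequency decomposition.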
Few studies have explored the specificities of contextual modulations of the processing of facial expressions at a neuronal level. This study fills this gap by employing an original paradigm, based on a version of the filmic “Kuleshov effect”. High-density EEG was recorded while participants watched film sequences consisting of three shots: the close-up of a target person’s neutral face (Face_1), the scene that the target person was looking at (happy, fearful, or neutral), and another close-up of the same target person’s neutral face (Face_2). The participants’ task was to rate both valence and arousal, and subsequently to categorize the target person’s emotional state. The results indicate that despite a significant behavioural ‘context’ effect, the electrophysiological indexes still indicate that the face is evaluated as neutral. Specifically, Face_2 elicited a high amplitude N170 when preceded by neutral contexts, and a high amplitude Late Positive Potential (LPP) when preceded by emotional contexts, thus showing sensitivity to the evaluative congruence (N170) and incongruence (LPP) between context and Face_2. The LPP activity was mainly underpinned by brain regions involved in facial expressions and emotion recognition processing. Our results shed new light on temporal and neural correlates of context-sensitivity in the interpretation of facial expressions.
Given ample evidence for shared cortical structures involved in encoding actions, whether or not subsequently executed, a still unsolved problem is the identification of neural mechanisms of motor inhibition, preventing “covert actions” as motor imagery from being performed, in spite of the activation of the motor system. The principal aims of the present study were the evaluation of: 1) the presence in covert actions as motor imagery of putative motor inhibitory mechanisms; 2) their underlying cerebral sources; 3) their differences or similarities with respect to cerebral networks underpinning the inhibition of overt actions during a Go/NoGo task. For these purposes, we performed a high density EEG study evaluating the cerebral microstates and their related sources elicited during two types of Go/NoGo tasks, requiring the execution or withholding of an overt or a covert imagined action, respectively. Our results show for the first time the engagement during motor imagery of key nodes of a putative inhibitory network (including pre-supplementary motor area and right inferior frontal gyrus) partially overlapping with those activated for the inhibition of an overt action during the overt NoGo condition. At the same time, different patterns of temporal recruitment in these shared neural inhibitory substrates are shown, in accord with the intended overt or covert modality of action performance. The evidence that apparently divergent mechanisms such as controlled inhibition of overt actions and contingent automatic inhibition of covert actions do indeed share partially overlapping neural substrates, further challenges the rigid dichotomy between conscious, explicit, flexible and unconscious, implicit, inflexible forms of motor behavioral control.
The temporal dynamics of brain activation during visual and auditory perception of congruent vs. incongruent musical video clips were investigated in 12 musicians from the Milan Conservatory of Music and 12 controls. 368 videos of a clarinetist and a violinist playing the same score with their instruments were presented. The sounds were similar in pitch, intensity, rhythm and duration. To produce an audiovisual discrepancy, in half of the trials the visual information was incongruent in pitch with the soundtrack. ERPs were recorded from 128 sites. Only in musicians, and only for their own instrument, was an N400-like negative deflection elicited by the incongruent audiovisual information. swLORETA applied to the N400 response identified the areas mediating multimodal motor processing: the prefrontal cortex, the right superior and middle temporal gyri, the premotor cortex, the inferior frontal and inferior parietal areas, the EBA, the somatosensory cortex, the cerebellum and the SMA. The data indicate the existence of audiomotor mirror neurons responding to incongruent visual and auditory information, thus suggesting that they may encode multimodal representations of musical gestures and sounds. These systems may underlie the ability to learn how to play a musical instrument.