Facial expressions that show emotion play an important role in human social interactions. Previous theoretical work has suggested that there are universal, prototypical facial expressions specific to basic emotions. However, empirical studies that tested the production of emotional facial expressions based on particular scenarios have only partially supported these theoretical predictions, and all of them were conducted in Western cultures. We investigated Japanese laypeople (n = 65) to provide further empirical evidence regarding the production of emotional facial expressions. Participants produced facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) in specific scenarios; under the baseline condition, they imitated photographs of prototypical facial expressions. The produced expressions were automatically coded with FaceReader in terms of emotion intensities and facial action units. Whereas the photograph condition elicited all target emotions clearly, the scenario condition did so only for happy and surprised expressions. The two conditions also yielded different emotion-intensity and action-unit profiles for all of the facial expressions tested. These results provide partial support for the theory of universal, prototypical facial expressions for basic emotions, but suggest that the theory may need to be modified in light of empirical evidence.
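To make the coding-and-comparison step concrete, here is a minimal analysis sketch. It assumes FaceReader's per-expression output has already been exported to a flat CSV; the file name and column layout (participant, condition, target_emotion, plus one intensity column per emotion) are illustrative assumptions, not the study's actual export format.

```python
import pandas as pd
from scipy import stats

# FaceReader's standard emotion labels (assumed to be the intensity columns).
EMOTIONS = ["angry", "disgusted", "scared", "happy", "sad", "surprised"]

# One row per participant x condition x target emotion (hypothetical file).
df = pd.read_csv("facereader_output.csv")
assert set(EMOTIONS) <= set(df.columns)

# Intensity of the *target* emotion in each produced expression.
df["target_intensity"] = df.apply(lambda r: r[r["target_emotion"]], axis=1)

# Within-participant comparison: photograph (imitation) vs. scenario (production).
wide = df.pivot_table(index=["participant", "target_emotion"],
                      columns="condition", values="target_intensity")
for emotion, grp in wide.groupby(level="target_emotion"):
    t, p = stats.ttest_rel(grp["photograph"], grp["scenario"])
    print(f"{emotion}: photo M={grp['photograph'].mean():.2f}, "
          f"scenario M={grp['scenario'].mean():.2f}, t={t:.2f}, p={p:.3f}")
```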
Objective: To propose a set of internationally harmonized procedures and methods for assessing neurocognitive functions, smell, taste, mental and psychosocial health, and other factors in adults formally diagnosed with COVID-19 (confirmed SARS-CoV-2-positive; WHO definition). Methods: We formed an international and cross-disciplinary NeuroCOVID Neuropsychology Taskforce in April 2020. Seven criteria guided the selection of the recommended methods and procedures: (i) relevance to all COVID-19 illness stages and to longitudinal study designs; (ii) standard, cross-culturally valid, or widely available instruments; (iii) coverage of both direct and indirect causes of COVID-19-associated neurological and psychiatric symptoms; (iv) control of factors specifically pertinent to COVID-19 that may affect neuropsychological performance; (v) flexibility of administration (telehealth, computerized, remote/online, face-to-face); (vi) harmonization to facilitate international research; (vii) ease of translation to clinical practice. Results: The three proposed levels of harmonization comprise a screening strategy with a telehealth option, a medium-size computerized assessment with an online/remote option, and a comprehensive evaluation with flexible administration. The context in which each harmonization level might be used is described, as are assessment timelines, guidance for home/remote assessment to support data fidelity, telehealth considerations, cross-cultural adequacy, norms, and impairment definitions. Conclusions: The proposed recommendations provide a rationale and methodological guidance for neuropsychological research and clinical assessment in adults with COVID-19. We expect their use to facilitate data harmonization and global research. Research implementing the recommendations will be crucial to determine their acceptability, usability, and validity.
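As an illustration of how the three-level protocol could be encoded for study tooling, the sketch below represents the levels as plain data. The class, names, and field values merely paraphrase the abstract; in particular, rendering "flexible administration" as all four modes is an assumption, and none of this is part of the Taskforce's official specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HarmonizationLevel:
    name: str
    scope: str
    administration: tuple[str, ...]

# Hypothetical encoding of the three harmonization levels from the abstract.
LEVELS = (
    HarmonizationLevel("screening", "brief screening strategy",
                       ("face-to-face", "telehealth")),
    HarmonizationLevel("medium", "medium-size computerized assessment",
                       ("computerized", "online/remote")),
    HarmonizationLevel("comprehensive", "comprehensive evaluation",
                       ("face-to-face", "telehealth", "computerized", "online/remote")),
)

for level in LEVELS:
    print(f"{level.name}: {level.scope} ({', '.join(level.administration)})")
```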
The ability to judge others' emotions is required to establish and maintain smooth interactions in a community. Several lines of evidence suggest that the meaning attributed to a face is influenced by the facial actions the observer produces while viewing it. However, empirical studies testing causal relationships between observers' facial actions and emotion judgments have reported mixed findings. We investigated this issue by measuring emotion judgments along the valence and arousal dimensions while comparing dynamic and static presentations of facial expressions. We presented pictures and videos of facial expressions of anger and happiness. Participants (N = 36) were asked to differentiate the gender of the faces while activating either the corrugator supercilii muscle (brow lowering) or the zygomaticus major muscle (cheek raising). They also evaluated the internal states of the stimuli using the affect grid, maintaining the facial action until they finished responding. The cheek-raising condition increased the attributed valence scores compared with the brow-lowering condition, and this effect was observed for static as well as dynamic facial expressions. These data suggest that facial feedback mechanisms contribute to judgments of the valence of emotional facial expressions.
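The key facial-feedback contrast can be sketched as a paired comparison of valence ratings, overall and within each presentation mode. The file name, column names, and condition labels below are assumptions made for illustration, not the study's actual data layout.

```python
import pandas as pd
from scipy import stats

# Long format (assumed): participant, facial_action ("cheek_raising"/"brow_lowering"),
# presentation ("dynamic"/"static"), valence rating from the affect grid.
df = pd.read_csv("affect_grid_ratings.csv")

# Main contrast: valence under cheek raising vs. brow lowering.
wide = df.pivot_table(index="participant", columns="facial_action", values="valence")
t, p = stats.ttest_rel(wide["cheek_raising"], wide["brow_lowering"])
print(f"cheek raising vs. brow lowering: t={t:.2f}, p={p:.3f}")

# The abstract reports the effect for both presentation modes, so run it per mode.
for mode, grp in df.groupby("presentation"):
    w = grp.pivot_table(index="participant", columns="facial_action", values="valence")
    t, p = stats.ttest_rel(w["cheek_raising"], w["brow_lowering"])
    print(f"{mode}: t={t:.2f}, p={p:.3f}")
```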
Humans modify their facial expressions to communicate their internal states and sometimes to mislead observers about their true emotional states. Evidence from experimental psychology shows that discriminative facial responses are short and subtle, which suggests that such behavior should be easier to distinguish when captured in high resolution at an increased frame rate. We propose SASE-FE, the first dataset of facial expressions that are either congruent or incongruent with the underlying emotional state. We show that the problem of recognizing whether facial movements express authentic emotions can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt space. The model's performance shows that, on average, genuine facial expressions of emotion are easier to distinguish from one another than unfelt ones, and that certain emotion pairs, such as contempt and disgust, are harder to distinguish than the rest. Furthermore, the proposed methodology improves on state-of-the-art results for video emotion recognition on the CK+ and OULU-CASIA datasets, and achieves competitive results when classifying facial action units on the BP4D dataset.
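The core idea of aggregating deep features along fiducial (facial-landmark) trajectories can be sketched in PyTorch. This is a minimal illustration under our own assumptions: a toy one-layer backbone, bilinear sampling at landmark positions, a GRU along each trajectory, and mean pooling over landmarks. `TrajectoryAggregator` and all layer sizes are hypothetical, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryAggregator(nn.Module):
    """Pools deep features sampled along facial-landmark trajectories (sketch)."""
    def __init__(self, feat_dim=64, n_classes=6):
        super().__init__()
        # Stand-in one-layer backbone; the paper uses a much deeper learnt space.
        self.backbone = nn.Sequential(nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU())
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)  # per-trajectory aggregation
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, frames, landmarks):
        # frames: (B, T, 3, H, W); landmarks: (B, T, L, 2), coords normalized to [-1, 1]
        B, T, _, H, W = frames.shape
        L = landmarks.shape[2]
        fmap = self.backbone(frames.flatten(0, 1))               # (B*T, D, H, W)
        grid = landmarks.flatten(0, 1).unsqueeze(1)              # (B*T, 1, L, 2)
        feats = F.grid_sample(fmap, grid, align_corners=False)   # (B*T, D, 1, L)
        feats = feats.squeeze(2).permute(0, 2, 1)                # (B*T, L, D)
        feats = feats.reshape(B, T, L, -1).permute(0, 2, 1, 3)   # (B, L, T, D) trajectories
        _, h = self.gru(feats.flatten(0, 1))                     # aggregate each trajectory
        pooled = h[-1].reshape(B, L, -1).mean(dim=1)             # mean over landmarks
        return self.head(pooled)                                 # (B, n_classes)

# Toy usage: 2 clips, 8 frames of 64x64 pixels, 68 landmarks per frame.
model = TrajectoryAggregator()
logits = model(torch.randn(2, 8, 3, 64, 64), torch.rand(2, 8, 68, 2) * 2 - 1)
```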
Given the high mortality of coronavirus disease 2019 (COVID-19), having severe COVID-19 may be a life-threatening event, especially for individuals at high risk of complications. We therefore address two questions relevant to public mental health: Can we define groups at higher risk of developing pandemic-related PTSD, and how can health specialists prepare for it? Based on previous research on PTSD in survivors of epidemics such as SARS, we suggest that mental health professionals in countries affected by the pandemic should prepare for an increase in PTSD prevalence, specifically among individuals who have had severe COVID-19; family members of these patients and of patients who have died; and frontline healthcare workers witnessing COVID-19 patients' sudden deaths or numerous life-threatening situations. We postulate that these at-risk groups should be routinely screened for PTSD in primary medical and pediatric care. Mental health services should prepare to provide therapeutic interventions for individuals with PTSD in these vulnerable groups and support for their families, especially children.
Dysfunction in the understanding of social signals has been reported in persons with epilepsy and may partially explain the lower life satisfaction in this patient population. Extensive assessment is necessary, particularly when the mesial temporal lobe, which is responsible for emotion processing, is affected. The authors examined multiple levels of social perception in patients with mesial temporal lobe epilepsy (MTLE), including judgments of point-light motion displays of human communicative interactions (Communicative Interactions Database, 5-alternative forced-choice format) and theory-of-mind processes evaluated with geometric shapes (Frith-Happé animations [FHA]). This case-control study included MTLE patients with anterior temporal lobectomies (ATL+; N=19), MTLE patients without lobectomies (ATL-; N=21), and healthy controls (HCs; N=20). Both MTLE groups were less efficient than HC subjects in recognizing goal-directed and mentalizing interactions in the FHA, and the ATL+ group attributed emotions to the FHA less accurately than HC subjects. Both the ATL- and ATL+ groups classified individual point-light animations as communicative more often than the HC group, and ATL+ patients were also less efficient than HCs in interpreting point-light animations depicting individual actions. Epilepsy duration in years was inversely correlated with recognition of FHA interactions, and the mean number of seizures was inversely correlated with interaction identification in the point-light stimuli. Patients with MTLE, irrespective of surgical treatment, thus show impaired social perception in domains assessed with abstract moving shapes and with nonabstract biological motion. This impairment may underlie the difficulties these patients report in understanding the intentions and feelings of other individuals.
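The two reported inverse correlations can be computed in a few lines; Spearman's rho is a common choice for duration and count data of this kind, although the abstract does not state which coefficient was used. The file and column names are illustrative assumptions.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("mtle_social_perception.csv")  # hypothetical file: one row per patient

# Clinical variable vs. social-perception score, per the abstract's two findings.
pairs = [("epilepsy_duration_years", "fha_interaction_recognition"),
         ("mean_seizure_count", "pointlight_interaction_identification")]
for predictor, outcome in pairs:
    rho, p = stats.spearmanr(df[predictor], df[outcome])
    print(f"{predictor} vs. {outcome}: rho={rho:.2f}, p={p:.3f}")
```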