For more than half a century, emotion researchers have attempted to establish the dimensional space that most economically accounts for similarities and differences in emotional experience. Today, many researchers focus exclusively on two-dimensional models involving valence and arousal. Adopting a theoretically based approach, we show for three languages that four dimensions are needed to satisfactorily represent similarities and differences in the meaning of emotion words. In order of importance, these dimensions are evaluation-pleasantness, potency-control, activation-arousal, and unpredictability. They were identified on the basis of the applicability of 144 features representing the six components of emotions: (a) appraisals of events, (b) psychophysiological changes, (c) motor expressions, (d) action tendencies, (e) subjective experiences, and (f) emotion regulation.
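Dimensional analyses of this kind typically reduce a words-by-features applicability matrix to a small number of components and inspect how much variance each accounts for. Below is a minimal sketch of that idea using PCA via SVD; the data are random and illustrative, not the study's actual ratings or analysis pipeline:

```python
import numpy as np

def emotion_dimensions(ratings, n_dims=4):
    """Reduce a words x features applicability matrix to n_dims
    dimensions via PCA (SVD on the column-centered matrix)."""
    X = ratings - ratings.mean(axis=0)           # center each feature
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    coords = U[:, :n_dims] * s[:n_dims]          # word coordinates
    explained = (s ** 2) / (s ** 2).sum()        # variance share per component
    return coords, explained[:n_dims]

# Toy data: 6 hypothetical emotion words rated on 5 features
rng = np.random.default_rng(0)
ratings = rng.random((6, 5))
coords, explained = emotion_dimensions(ratings, n_dims=4)
print(coords.shape)  # (6, 4)
```

In a real analysis, one would examine the feature loadings (`Vt`) to interpret each component, e.g. as evaluation-pleasantness or potency-control.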
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
The goal of this study was to examine behavioral and electrophysiological correlates of involuntary orienting toward rapidly presented angry faces in non-anxious, healthy adults, using a dot-probe task in conjunction with high-density event-related potentials and a distributed source localization technique. Consistent with previous studies, participants showed hypervigilance toward angry faces, as indexed by facilitated response times for validly cued probes following angry faces and an enhanced P1 component. An opposite pattern was found for happy faces, suggesting that attention was directed toward the relatively more threatening stimuli within the visual field (neutral faces). Source localization of the P1 effect for angry faces indicated increased activity within the anterior cingulate cortex, possibly reflecting conflict experienced during invalidly cued trials. No modulation of the early C1 component was found for affect or spatial attention. Furthermore, the face-sensitive N170 was not modulated by emotional expression. Results suggest that the earliest modulation of spatial attention by face stimuli is manifested in the P1 component, and provide insights into the mechanisms underlying attentional orienting toward cues of threat and social disapproval.

Keywords: spatial attention; anger; face perception; event-related potentials; source localization

Electrophysiological correlates of spatial orienting towards angry faces: A source localization study

Perception of the human face, as well as the social cues derived from it, is central to social interaction and to the communication of threat (Argyle, 1983), and occurs rapidly, within 100 ms of presentation (e.g., Liu, Harris, & Kanwisher, 2002). For healthy individuals, visual scanpaths of the human face are directed to salient features that define facial emotional expressions, such as the mouth and eyes (Walker-Smith, Gale & Findlay, 1977; Mertens, …
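The dot-probe logic underlying the validity effect reported above can be sketched as a simple bias-score computation: faster responses to probes that replace the emotional face (validly cued) than to probes on the opposite side (invalidly cued) indicate vigilance toward that face. The trial structure and numbers here are invented for illustration:

```python
from statistics import mean

def bias_score(trials):
    """Attentional bias in a dot-probe task: mean reaction time (ms) on
    invalidly cued trials minus mean RT on validly cued trials.
    Positive values indicate vigilance toward the cue (e.g. an angry face)."""
    valid = [t["rt"] for t in trials if t["valid"]]
    invalid = [t["rt"] for t in trials if not t["valid"]]
    return mean(invalid) - mean(valid)

# Illustrative trials; real studies use many trials per condition
trials = [
    {"valid": True, "rt": 310}, {"valid": True, "rt": 330},
    {"valid": False, "rt": 350}, {"valid": False, "rt": 370},
]
print(bias_score(trials))  # 40.0 -> faster when the probe replaced the face
```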
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and comparisons of human and FACSGen expressions were very similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
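The core idea of specifying expressions as action units at graded intensities, alone or in combination, can be illustrated with a toy data model. This is a hypothetical stand-in for exposition only, not FACSGen's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Expression:
    """Toy FACS-style expression: a mapping from action-unit number
    to intensity in [0, 1]. Illustrates combining AUs at chosen
    intensities; this is not FACSGen's real interface."""
    aus: dict = field(default_factory=dict)

    def set_au(self, au: int, intensity: float) -> "Expression":
        if not 0.0 <= intensity <= 1.0:
            raise ValueError("intensity must be in [0, 1]")
        self.aus[au] = intensity
        return self

# AU6 (cheek raiser) + AU12 (lip corner puller) at high intensity:
# the classic Duchenne-smile combination.
smile = Expression().set_au(6, 0.8).set_au(12, 0.9)
print(smile.aus)  # {6: 0.8, 12: 0.9}
```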
Computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing, and reproducing results; computational science, however, lags behind. In the best case, authors provide their source code as a compressed archive and feel confident that their research is reproducible. But this is not quite true. More than two decades ago, James Buckheit and David Donoho argued that an article about computational results is advertising, not scholarship; the actual scholarship is the full software environment, code, and data that produced the result. This implies new workflows, in particular in peer review. Existing journals have been slow to adapt: source code is rarely requested and is hardly ever actually executed to check that it produces the results advertised in the article. ReScience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new open-source implementations in order to ensure that the original research can be replicated from its description. To achieve this goal, the whole publishing chain is radically different from that of traditional scientific journals: ReScience resides on GitHub, where each new implementation of a computational study is made available together with comments, explanations, and software tests.
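The requirement that each replication ship with software tests can be sketched as a minimal check comparing a reimplementation's output against the value reported in the original article. Everything here (the function, the published value, the tolerance) is invented for illustration:

```python
import math

def reimplemented_model(x):
    """Hypothetical reimplementation of a published computation."""
    return math.tanh(x)

# Value reported in the (fictitious) original article, with a tolerance
# chosen to absorb platform and floating-point differences.
PUBLISHED_RESULT = 0.76159

assert math.isclose(reimplemented_model(1.0), PUBLISHED_RESULT, abs_tol=1e-4)
print("replication check passed")
```

In practice such checks live in a test suite that reviewers can run before a replication is accepted.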
In the past, home automation was a small market for technology enthusiasts. Interconnectivity between devices was down to the owner's technical skills and creativity, while security was non-existent or primitive, because cyber threats were also largely non-existent or primitive. This is no longer the case. The adoption of Internet of Things technologies, cloud computing, artificial intelligence, and an increasingly wide range of sensing and actuation capabilities has led to smart homes that are more practical, but also genuinely attractive targets for cyber attacks. Here, we classify applicable cyber threats according to a novel taxonomy, focusing not only on the attack vectors that can be used, but also on the potential impact on the systems and, ultimately, on the occupants and their domestic life. Utilising the taxonomy, we classify twenty-five different smart home attacks, providing further examples of legitimate yet vulnerable smart home configurations which can lead to second-order attack vectors. We then review existing smart home defence mechanisms and discuss open research problems.

| Reference | Key security properties | Vulnerabilities/challenges | Security recommended | Open problems identified |
|---|---|---|---|---|
| Komninos et al. [1] | Confidentiality; Reliability, availability | Connected to Internet; Physical tampering | Auto-immunity to threats | Resilience |
| Lin et al. [2] | Confidentiality; Authentication; Access control | Phys./netw. accessibility; Constrained resources; Heterogeneity | Gateway architecture; Updates | Auto-configuration |
| Nawir et al. [6] | Smart meter integrity; Privacy; Non-repudiation; Authorisation | Remote connectivity; Physical tampering; Malicious actuation | Techn. countermeasures; Regulatory initiatives; Intrusion detection; Logging for audit/forensics | Standardisation; Impact evaluation, metrics |
| Ziegeldorf et al. [5] | | | | |
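The two-axis classification described above (attack vector on one axis, impact on the occupants on the other) can be sketched as a small data model. The category names and example attacks are illustrative, not the paper's exact taxonomy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SmartHomeAttack:
    """Classifies an attack by its entry vector and its ultimate
    impact on the occupants; categories here are examples only."""
    name: str
    vector: str  # e.g. "network", "physical", "cloud", "app"
    impact: str  # e.g. "privacy", "safety", "financial"

attacks = [
    SmartHomeAttack("camera feed interception", "network", "privacy"),
    SmartHomeAttack("smart lock relay attack", "physical", "safety"),
    SmartHomeAttack("smart meter tampering", "physical", "financial"),
]

# Group attacks by impact category for a taxonomy-style overview
by_impact = {}
for a in attacks:
    by_impact.setdefault(a.impact, []).append(a.name)
print(sorted(by_impact))  # ['financial', 'privacy', 'safety']
```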
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.