(2016) 'A leftward bias however you look at it: revisiting the emotional chimeric face task as a tool for measuring emotion lateralization.' Laterality, 21(4-6), pp. 643-661.

Abstract: Left hemiface biases observed within the Emotional Chimeric Face Task (ECFT) support emotional face perception models whereby all expressions are preferentially processed by the right hemisphere. However, previous research using this task has not considered that the visible midline between hemifaces might engage atypical facial emotion processing strategies in upright or inverted conditions, nor has it controlled for left visual field (thus right-hemispheric) visuospatial attention biases. This study used novel emotional chimeric faces (blended at the midline) to examine laterality biases for all basic emotions. Left hemiface biases were demonstrated across all emotional expressions and were reduced, but not reversed, for inverted faces. The ECFT bias in upright faces was significantly increased in participants with a large attention bias. These results support the theory that left hemiface biases reflect a genuine bias in emotional face processing, and that this bias can interact with attention processes similarly localized in the right hemisphere.
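The midline blending described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' stimulus pipeline: the two random arrays stand in for face images, and the 8-pixel blend width is an arbitrary assumption, chosen only to show how a linear ramp at the vertical midline removes the visible seam between hemifaces.

```python
import numpy as np

rng = np.random.default_rng(0)
emotional = rng.random((128, 128))  # stand-in for an emotional expression image
neutral = rng.random((128, 128))    # stand-in for a neutral expression image

# Blend the two hemifaces across a narrow band at the vertical midline,
# so no hard edge remains to engage atypical processing strategies.
w = emotional.shape[1]
band = 8  # blend width in pixels; an illustrative choice
alpha = np.clip((np.arange(w) - (w // 2 - band // 2)) / band, 0.0, 1.0)
chimera = emotional * (1 - alpha) + neutral * alpha
```

Outside the blend band the chimera is purely one hemiface or the other; a mirror-reversed pair is obtained by flipping `alpha`.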
It is widely agreed that hemispheric asymmetries in emotional face perception exist. However, the mechanisms underlying this lateralization are not fully understood. In the present study, we tested (a) whether these asymmetries are driven by the low spatial frequency content of images depicting facial expressions, and (b) whether the effects differed depending on whether the emotional facial expressions were clearly visible or hidden (i.e., embedded in low spatial frequencies). This manipulation sheds light on the contributions of cortical and subcortical routes to emotional processing mechanisms. We prepared both unfiltered (broadband) and 'hybrid' faces. Within the latter, different bands of spatial frequency content from images of two different expressions were combined (i.e., low frequencies from an emotional image combined with high frequencies from a neutral image). We presented these broadband and hybrid images using the free-viewing emotional chimeric faces task (ECFT), in which two mirror-reversed images are presented above and below fixation, and asked participants to report which of the two appeared more emotional. As predicted, the results showed that only broadband expressions produced the well-known left visual field/right hemisphere (LVF/RH) bias across all basic emotions. For hybrid images, only happiness revealed a significant LVF/RH bias. These results suggest that the low spatial frequency content of emotional facial expressions, which activates the magnocellular pathway in subcortical structures while bypassing cortical visual processing, is not generally sufficient to induce an LVF bias under free-viewing conditions where participants deny explicitly seeing the emotion, suggesting that the LVF bias in the ECFT is primarily cortically mediated.
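The hybrid-face construction described in this abstract can be sketched as a frequency-domain operation. This is an illustrative sketch only: the random arrays stand in for face photographs, and the cutoff of 8 cycles/image is an assumption (the study's actual filter settings are not given here). The low frequencies of the emotional image are summed with the high frequencies (the residual after low-pass filtering) of the neutral image.

```python
import numpy as np

def lowpass(img, cutoff):
    """Zero out spatial frequencies above `cutoff` (cycles per image)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)  # distance from DC component
    f[dist > cutoff] = 0
    return np.fft.ifft2(np.fft.ifftshift(f)).real

rng = np.random.default_rng(0)
emotional = rng.random((128, 128))  # stand-in for an emotional expression image
neutral = rng.random((128, 128))    # stand-in for a neutral expression image

cutoff = 8  # cycles/image; an illustrative assumption
hybrid = lowpass(emotional, cutoff) + (neutral - lowpass(neutral, cutoff))
```

Viewed at normal distance the high-frequency (neutral) content dominates perception, while the low-frequency (emotional) content remains available to coarse-scale processing.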
Multisensory signals allow faster responses than their unisensory components. While this redundant signals effect (RSE) has been studied widely with diverse signals, no modelling approach has explored the RSE systematically across studies. For a comparative analysis, here we propose three steps. The first quantifies the RSE against a simple, parameter-free race model. The second quantifies processing interactions beyond the race mechanism: history effects and so-called violations of Miller's bound. The third models the RSE at the level of response time distributions using a context-variant race model with two free parameters that account for these interactions. Mimicking the diversity of studies, we tested different audio-visual signals that target the interactions using a 2 × 2 design. We show that the simple race model provides an overall strong prediction of the RSE. Regarding interactions, we found that history effects do not depend on low-level feature repetition. Furthermore, violations of Miller's bound seem linked to transient signal onsets. Critically, the latter dissociates from the RSE, demonstrating that multisensory interactions and multisensory benefits are not equivalent. Overall, we argue that our approach, as a blueprint, provides both a general framework and the precision needed to understand the RSE when studied across diverse signals and participant groups.
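The first two steps above rest on two standard constructs that can be simulated in a few lines: the parameter-free race model (the multisensory response is triggered by whichever unisensory process finishes first) and Miller's bound, P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t). The sketch below uses hypothetical Gaussian response-time distributions; the distributions, means, and spreads are assumptions for illustration only, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical unisensory response-time samples (ms); illustrative only.
rt_audio = rng.normal(320, 40, n)
rt_visual = rng.normal(350, 50, n)

# Parameter-free race model: the redundant (audio-visual) response is
# triggered by the faster of the two independent unisensory processes.
rt_race = np.minimum(rt_audio, rt_visual)

# Redundant signals effect (RSE): mean speed-up of the redundant
# condition relative to the faster unisensory condition.
rse = min(rt_audio.mean(), rt_visual.mean()) - rt_race.mean()

# Miller's bound at time t: the multisensory CDF can exceed the sum of
# the unisensory CDFs only if the channels interact (coactivation).
t = 300.0
p_audio = (rt_audio <= t).mean()
p_visual = (rt_visual <= t).mean()
miller_bound = min(1.0, p_audio + p_visual)
p_race = (rt_race <= t).mean()
```

Note that the race model predicts an RSE even with fully independent channels (statistical facilitation), which is why observed violations of Miller's bound, rather than the RSE itself, are taken as evidence for interaction.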