(2016) 'A leftward bias however you look at it: revisiting the emotional chimeric face task as a tool for measuring emotion lateralization.', Laterality, 21(4-6), pp. 643-661.

Use policy: The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-profit purposes provided that:
• a full bibliographic reference is made to the original source
• a link is made to the metadata record in DRO
• the full-text is not changed in any way
The full-text must not be sold in any format or medium without the formal permission of the copyright holders. Please consult the full DRO policy for further details.

Abstract: Left hemiface biases observed within the Emotional Chimeric Face Task (ECFT) support models of emotional face perception whereby all expressions are preferentially processed by the right hemisphere. However, previous research using this task has not considered that the visible midline between hemifaces might engage atypical facial emotion processing strategies in upright or inverted conditions, nor has it controlled for left visual field (and thus right hemispheric) visuospatial attention biases. This study used novel emotional chimeric faces (blended at the midline) to examine laterality biases for all basic emotions. Left hemiface biases were demonstrated across all emotional expressions and were reduced, but not reversed, for inverted faces. The ECFT bias in upright faces was significantly increased in participants with a large attention bias. These results support the theory that left hemiface biases reflect a genuine bias in emotional face processing, and that this bias can interact with attention processes similarly localized in the right hemisphere.
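The left hemiface bias described above is typically quantified as a laterality index over forced-choice trials. A minimal sketch of one common scoring convention (the function name, scoring range, and example counts are illustrative, not taken from the paper):

```python
def laterality_index(left_choices: int, right_choices: int) -> float:
    """Laterality index for a forced-choice chimeric face task.

    left_choices:  trials where the chimera with the emotional
                   expression on the viewer's LEFT was judged more emotional
    right_choices: trials where the RIGHT-emotional chimera was chosen

    Returns a value in [-1, +1]; positive values indicate a
    left hemiface (LVF/right hemisphere) bias.
    """
    total = left_choices + right_choices
    if total == 0:
        raise ValueError("no trials recorded")
    return (left_choices - right_choices) / total

# Illustrative counts: 70 of 100 trials favoured the left-hemiface chimera
print(laterality_index(70, 30))  # → 0.4
```

A positive index across participants would correspond to the group-level leftward bias the abstract reports.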
It is widely agreed that hemispheric asymmetries in emotional face perception exist. However, the mechanisms underlying this lateralization are not fully understood. In the present study, we tested (a) whether these asymmetries are driven by the low spatial frequency content of images depicting facial expressions, and (b) whether the effects differ depending on whether the emotional facial expressions are clearly visible or hidden (i.e., embedded in low spatial frequencies). This manipulation sheds light on the contribution of cortical and subcortical routes to emotional processing mechanisms. We prepared both unfiltered (broadband) and 'hybrid' faces. In the latter, different bands of spatial frequency content from images of two different expressions were combined (i.e., low frequencies from an emotional image combined with high frequencies from a neutral image). We presented these broadband and hybrid images using the free-viewing emotional chimeric faces task (ECFT), in which two mirror-reversed images are presented above and below fixation, and asked participants to report which of the two appeared more emotional. As predicted, the results showed that only broadband expressions produced the well-known left visual field/right hemisphere (LVF/RH) bias across all basic emotions. For hybrid images, only happiness revealed a significant LVF/RH bias. These results suggest that the low spatial frequency content of emotional facial expressions, which activates the magnocellular pathway in subcortical structures and bypasses cortical visual processing, is not generally sufficient to induce an LVF bias under free-viewing conditions where participants deny explicitly seeing the emotion, suggesting that the LVF bias in the ECFT is primarily cortically mediated.
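The hybrid-face construction described above (low spatial frequencies from an emotional image combined with high spatial frequencies from a neutral image) can be sketched with a Gaussian low-pass filter. This is a generic illustration, not the study's actual pipeline: the `sigma` cutoff, image sizes, and use of `scipy.ndimage.gaussian_filter` are assumptions for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_hybrid(emotional: np.ndarray, neutral: np.ndarray, sigma: float = 6.0) -> np.ndarray:
    """Combine the low spatial frequencies (LSF) of an emotional face
    with the high spatial frequencies (HSF) of a neutral face.

    sigma (in pixels) sets the low-pass cutoff; larger sigma keeps
    only coarser structure from the emotional image.
    """
    emotional = emotional.astype(float)
    neutral = neutral.astype(float)
    low = gaussian_filter(emotional, sigma)            # LSF of emotional face
    high = neutral - gaussian_filter(neutral, sigma)   # HSF of neutral face
    return low + high

# Toy example with random arrays standing in for grayscale face images
rng = np.random.default_rng(0)
emo = rng.random((128, 128))
neu = rng.random((128, 128))
hybrid = make_hybrid(emo, neu)
print(hybrid.shape)  # (128, 128)
```

A useful sanity check on this construction: blending an image with itself returns the original image, since the removed and re-added frequency bands cancel.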
(2017) 'Comparing the effect of temporal delay on the availability of egocentric and allocentric information in visual search.', Behavioural Brain Research.

Abstract: Frames of reference play a central role in perceiving an object's location and reaching to pick that object up. It is thought that the ventral stream, believed to subserve vision for perception, utilises allocentric coding, while the dorsal stream, argued to be responsible for vision for action, primarily uses an egocentric reference frame. We have previously shown that egocentric representations can survive a delay; however, it is possible that, in comparison to allocentric information, egocentric information decays more rapidly. Here we directly compare the effect of delay on the availability of egocentric and allocentric representations. We used spatial priming in visual search and repeated the location of the target relative to either a landmark in the search array (allocentric condition) or the observer's body (egocentric condition). Three inter-trial intervals created minimum delays between two consecutive trials of 2, 4, or 8 seconds. In both conditions, search times to primed locations were faster than search times to un-primed locations. In the egocentric condition, the effects were driven by a reduction in search times when egocentric information was repeated, an effect that was observed at all three delays.
In the allocentric condition, by contrast, search times did not change when the allocentric information was repeated, while search times to un-primed target locations became slower. We conclude that egocentric representations are not as transient as previously thought; instead, this information remains available, and can influence behaviour, after lengthy periods of delay. We also discuss the possible origins of the differences between allocentric and egocentric priming effects.
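The priming effects compared above reduce to a simple difference in mean search times between un-primed and primed locations. A minimal sketch of that computation (the reaction times and function name are illustrative, not data from the study):

```python
from statistics import mean

def priming_effect(primed_rts: list[float], unprimed_rts: list[float]) -> float:
    """Priming effect in ms: positive values mean search to primed
    (repeated) locations was faster than to un-primed locations."""
    return mean(unprimed_rts) - mean(primed_rts)

# Illustrative search times (ms) for one condition at one delay
primed = [810.0, 790.0, 800.0]
unprimed = [880.0, 870.0, 890.0]
print(priming_effect(primed, unprimed))  # → 80.0
```

Computed separately for each condition (egocentric vs. allocentric) and each delay (2, 4, 8 s), this difference is what distinguishes the two patterns the abstract describes: a benefit at primed locations in the egocentric condition versus a cost at un-primed locations in the allocentric condition.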