Our ability to recognize facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. Because natural visual signals are often distorted and our perceptual strategy changes with the level of external noise, it is essential to understand how susceptible expression perception is to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution down to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity ratings, increased reaction time and fixation duration, and a stronger central fixation bias that was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent, with less deterioration for happy and surprise expressions, suggesting that this distortion-invariant facial expression perception might be achieved through a categorical model involving a non-linear configural combination of local facial features.
Our visual inputs are often entangled with affective meanings in natural vision, implying extensive interaction between visual and emotional processing. However, little is known about the neural mechanism underlying such interaction. This exploratory transcranial magnetic stimulation (TMS) study examined the possible involvement of the early visual cortex (EVC, Area V1/V2/V3) in perceiving facial expressions of different emotional valences. Across three experiments, single-pulse TMS was delivered at different time windows (50–150 msec) after a brief 10-msec presentation of face images, and participants reported the visibility and perceived emotional valence of the faces. Interestingly, earlier TMS at ∼90 msec only reduced face visibility irrespective of the displayed expression, but later TMS at ∼120 msec selectively disrupted the recognition of negative facial expressions, indicating the involvement of EVC in the processing of negative expressions at a later time window, possibly beyond the initial feed-forward processing of facial structure information. The observed TMS effect was further modulated by individuals' anxiety level: TMS at ∼110–120 msec disrupted the recognition of anger significantly more for those scoring relatively low in trait anxiety than for high scorers, suggesting that cognitive bias influences the processing of facial expressions in EVC. Taken together, it seems that EVC is involved in the structural encoding of (at least) negative facial emotional valence, such as fear and anger, possibly under modulation from higher cortical areas.