Abstract: The tearing effect refers to the relevance of tears as an important visual cue that adds meaning to human facial expressions. However, little is known about how people process these visual cues and about their mediating role in emotion perception and person judgment. We therefore conducted two experiments in which we measured the influence of tears on the identification of sadness and on the perceived need for social support at an early perceptual level. In both experiments, participants were exposed to sad and neutral faces presented for 50 milliseconds. In Experiment 1, tears were digitally added to sad faces in one condition. Participants recognised sad faces with tears significantly faster than sad faces without tears. In Experiment 2, tears were also added to neutral faces, and participants indicated to what extent the displayed individuals were in need of social support. Participants reported a greater perceived need for social support for both sad and neutral faces with tears than for those without tears. This study thus demonstrates that emotional tears serve as important visual cues at an early (pre-attentive) level of processing.
Earlier research using a dot-probe paradigm has shown that, when two vocal utterances are presented simultaneously (one with emotional prosody, one neutral), responses are faster to probes replacing the location of the emotional prosody, indicating a cross-modal modulation of attention (Brosch et al., 2008). We designed a multimodal dot-probe experiment in which (a) fearful and neutral face pairs were accompanied simultaneously by paired fearful and neutral vocalisations, or (b) the fearful and neutral vocalisations, without face pictures, preceded a visual target probe. A unimodal visual block was run as a control. In addition to the expected visual-only effect, we found a spatial attentional bias towards fearful vocalisations followed by a visual probe, replicating the cross-modal modulation of attention shown by Brosch et al. However, no such effect was found for audiovisual face-voice pairs. This absence of an audiovisual effect with simultaneous face-voice presentation might be a consequence of the double-conflict situation: the fuzziness of two competing stimulus presentations might have ruled out or cancelled potential attentional biases towards fearful auditory and visual information.