Prior research using static facial stimuli (photographs) has identified diagnostic face regions (i.e., regions functional for recognition) of emotional expressions. In the current study, we aimed to determine attentional orienting toward, engagement with, and the time course of fixation on these diagnostic regions. To this end, we assessed the eye movements of observers inspecting dynamic expressions that changed from a neutral to an emotional face. A new stimulus set (KDEF-dyn) was developed, comprising 240 video clips of 40 human models portraying six basic emotions (happy, sad, angry, fearful, disgusted, and surprised). For validation purposes, 72 observers categorized the expressions while gaze behavior was measured (probability of first fixation, entry time, gaze duration, and number of fixations). Specific visual scanpath profiles characterized each emotional expression: The eye region was looked at earlier and longer for angry and sad faces; the mouth region, for happy faces; the nose/cheek region, for disgusted faces; and the eye and mouth regions attracted attention in a more balanced manner for surprised and fearful faces. These profiles reflected enhanced selective attention to expression-specific diagnostic face regions. The KDEF-dyn stimuli and the validation data will be made available to the scientific community as a useful tool for research on emotional facial expression processing.
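The four validation measures named above (probability of first fixation, entry time, gaze duration, and number of fixations) could be computed per area of interest (AOI) roughly as sketched below. This is an illustrative sketch only: the `Fixation` record and AOI labels are assumptions for the example, not the authors' actual analysis pipeline.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    region: str        # AOI label, e.g. "eyes", "mouth", "nose/cheek"
    start_ms: float    # fixation onset relative to stimulus onset
    duration_ms: float # fixation duration

def gaze_metrics(fixations, region):
    """Compute the four gaze measures for one AOI on a single trial.

    `fixations` is the trial's fixation sequence in chronological order.
    """
    in_region = [f for f in fixations if f.region == region]
    return {
        # first fixation: did this AOI receive the trial's first fixation?
        # (averaging this Boolean over trials gives the probability of first fixation)
        "first_fixation": bool(fixations) and fixations[0].region == region,
        # entry time: onset of the first fixation landing in the AOI (None if never entered)
        "entry_time_ms": in_region[0].start_ms if in_region else None,
        # gaze duration: total dwell time in the AOI
        "gaze_duration_ms": sum(f.duration_ms for f in in_region),
        # number of fixations falling in the AOI
        "n_fixations": len(in_region),
    }
```

For example, a trial whose fixation sequence starts on the eyes, moves to the mouth, and returns to the eyes would yield `first_fixation=True` for the eye region, with its entry time equal to the onset of that first fixation.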
We investigated the visual attention patterns (i.e., where, when, how frequently, and how long viewers look at each face region) for faces with (a) genuine enjoyment smiles (i.e., a smiling mouth and happy eyes with the Duchenne marker), (b) fake, nonenjoyment smiles (a smiling mouth but nonhappy eyes: neutral, surprised, fearful, sad, disgusted, or angry), or (c) no smile (and nonhappy eyes). Viewers evaluated whether the faces conveyed happiness ("felt happy") or not, while eye movements were monitored. Results indicated, first, that the smiling mouth captured the first fixation more often and faster than the eyes, regardless of the type of eyes. This reveals similar attentional orienting to genuine and fake smiles. Second, the mouth and, especially, the eyes of faces with fake smiles received more fixations and longer dwell times than those of faces with genuine smiles. This reveals differential attentional engagement, with a processing cost for fake smiles. Finally, when the mouth of a face with a fake smile was fixated earlier than the eyes, the face was likely to be judged as genuinely happy. This suggests that a first fixation on the smiling mouth biases the viewer to misinterpret the emotional state underlying blended expressions.