Prior research using static facial stimuli (photographs) has identified diagnostic face regions (i.e., functional for recognition) of emotional expressions. In the current study, we aimed to determine attentional orienting, engagement, and time course of fixation on diagnostic regions. To this end, we assessed the eye movements of observers inspecting dynamic expressions that changed from a neutral to an emotional face. A new stimulus set (KDEF-dyn) was developed, which comprises 240 video-clips of 40 human models portraying six basic emotions (happy, sad, angry, fearful, disgusted, and surprised). For validation purposes, 72 observers categorized the expressions while gaze behavior was measured (probability of first fixation, entry time, gaze duration, and number of fixations). Specific visual scanpath profiles characterized each emotional expression: The eye region was looked at earlier and longer for angry and sad faces; the mouth region, for happy faces; and the nose/cheek region, for disgusted faces; the eye and the mouth regions attracted attention in a more balanced manner for surprise and fear. These profiles reflected enhanced selective attention to expression-specific diagnostic face regions. The KDEF-dyn stimuli and the validation data will be available to the scientific community as a useful tool for research on emotional facial expression processing.
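The four gaze measures named above (probability of first fixation, entry time, gaze duration, number of fixations) can be illustrated with a minimal sketch. The `Fixation` record and AOI labels here are hypothetical conveniences for illustration, not part of the KDEF-dyn materials; per-trial first-fixation flags would be averaged across trials to obtain a probability.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Fixation:
    region: str       # hypothetical AOI label, e.g. "eyes", "mouth", "nose/cheek"
    start_ms: float   # fixation onset relative to stimulus onset
    duration_ms: float

def aoi_metrics(fixations: List[Fixation], region: str) -> dict:
    """Compute the abstract's gaze measures for one area of interest (AOI)
    from a single trial's chronologically ordered fixation sequence."""
    hits = [f for f in fixations if f.region == region]
    # Was the very first fixation of the trial on this AOI?
    first_fixation = bool(fixations) and fixations[0].region == region
    # Entry time: onset of the first fixation that lands in the AOI.
    entry_time: Optional[float] = hits[0].start_ms if hits else None
    # Gaze duration: total dwell time summed over all fixations in the AOI.
    gaze_duration = sum(f.duration_ms for f in hits)
    return {
        "first_fixation": first_fixation,
        "entry_time_ms": entry_time,
        "gaze_duration_ms": gaze_duration,
        "n_fixations": len(hits),
    }
```

For example, a trial whose fixation sequence is eyes (120 ms onset, 200 ms), mouth (340 ms, 180 ms), eyes (540 ms, 150 ms) yields, for the "eyes" AOI, a first fixation, an entry time of 120 ms, a gaze duration of 350 ms, and two fixations.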
We investigated the visual attention patterns (i.e., where, when, how frequently, and how long viewers look at each face region) for faces with (a) genuine, enjoyment smiles (i.e., a smiling mouth and happy eyes with the Duchenne marker), (b) fake, nonenjoyment smiles (a smiling mouth but nonhappy eyes: neutral, surprised, fearful, sad, disgusted, or angry), or (c) no smile (and nonhappy eyes). Viewers evaluated whether the faces conveyed happiness ("felt happy") or not, while eye movements were monitored. Results indicated, first, that the smiling mouth was more likely to capture the first fixation, and captured it faster, than the eyes, regardless of the type of eyes. This reveals similar attentional orienting to genuine and fake smiles. Second, the mouth and, especially, the eyes of faces with fake smiles received more fixations and longer dwell times than those of faces with genuine smiles. This reveals attentional engagement, with a processing cost for fake smiles. Finally, when the mouth of faces with fake smiles was fixated earlier than the eyes, the face was likely to be judged as genuinely happy. This suggests that the first fixation on the smiling mouth biases the viewer to misinterpret the emotional state underlying blended expressions.
Prior research has shown that the more (or less) attractive a face is judged, the more (or less) trustworthy the person is deemed, and that some common neural networks are recruited during facial attractiveness and trustworthiness evaluation. To interpret the relationship between attractiveness and trustworthiness (e.g., whether perception of personal trustworthiness may depend on perception of facial attractiveness), we investigated their relative neural processing time course. An event-related potential (ERP) paradigm was used, with localization of brain sources of the scalp neural activity. Face stimuli with a neutral, angry, happy, or surprised expression were presented in an attractiveness judgment, a trustworthiness judgment, or a control (no explicit social judgment) task. Emotional facial expression processing occurred earlier (N170 and EPN, 150-290 ms post-stimulus onset) than attractiveness and trustworthiness processing (P3b, 400-700 ms). Importantly, right-central ERP (C2, C4, C6) differences reflecting discrimination between "yes" (attractive or trustworthy) and "no" (unattractive or untrustworthy) decisions occurred at least 400 ms earlier for attractiveness than for trustworthiness, in the absence of LRP motor preparation differences. Neural source analysis indicated that facial processing brain networks (e.g., LG, FG, and IPL, extending to pSTS), also right-lateralized, were involved in the discrimination time course differences. This suggests that attractiveness impressions precede and might prime trustworthiness inferences, and that the neural time course differences reflect genuinely facial encoding processes.
Prior research has found a relationship between perceived facial attractiveness and perceived personal trustworthiness. We examined the time course of attractiveness evaluation relative to trustworthiness evaluation of emotional and neutral faces. This served to explore whether attractiveness might be used as an easily accessible cue and a quick shortcut for judging trustworthiness. Detection thresholds and judgment latencies as a function of expressive intensity were measured. Significant correlations between attractiveness and trustworthiness consistently held for six emotional expressions at four intensities, and for neutral faces. Importantly, perceived attractiveness preceded perceived trustworthiness, with lower detection thresholds and shorter decision latencies. This reveals a time course advantage for attractiveness, and suggests that earlier attractiveness impressions could bias trustworthiness inferences. A heuristic cognitive mechanism is hypothesized to ease processing demands by relying on simple and observable cues (attractiveness) as a substitute for more complex and not easily accessible information (trustworthiness).