Humans rapidly make inferences about individuals' trustworthiness on the basis of their facial features and perceived group membership. We examine whether incidental learning about trust from shifts in gaze direction is influenced by these facial features. To do so, we examined two face categories: the race of the face and its initial trustworthiness based on physical appearance. We find that cueing of attention by eye gaze is unaffected by race or initial levels of trust, whereas incidental learning of trust from gaze behaviour is selectively influenced. That is, learning of trust is reduced for other-race faces, as predicted by the reduced ability to identify members of other races (Experiment 1). In contrast, converging findings from an independently gathered dataset showed that the initial trustworthiness of faces did not influence learning of trust (Experiment 2). These results show that learning about the behaviour of other-race faces is poorer than for own-race faces, but that this cannot be explained by differences in the perceived trustworthiness of different groups.
Eye gaze is a powerful directional cue that automatically evokes joint attention states. Even when faces are ignored, there is incidental learning of the reliability of another person's gaze cueing, such that people who look away from targets are judged less trustworthy. In a series of experiments, we demonstrated further properties of the incidental learning of trust from gaze direction. First, the emotion of the face, whether neutral or smiling, influenced the pattern of trust learning. Second, the effect was specific to judgments of trust; reliability of gaze direction did not influence other emotional judgments of a person, such as liking. And third, visuomotor fluency was not sufficient for learning of trust, whether the face served as a target or as a distractor. Taken together, these findings show that incidental learning of trust is influenced by facial emotion, that it is a specific effect that does not generalize to other emotional assessments, and that it is not determined solely by processing fluency.
In 8 experiments, we investigated motion fluency effects on object preference. In each experiment, distinct objects were repeatedly seen moving either fluently (with a smooth and predictable motion) or disfluently (with sudden and unpredictable direction changes) in a task where participants were required to respond to occasional brief changes in object appearance. Results show that 1) fluent objects are preferred over disfluent objects when ratings follow a moving presentation, 2) there is some evidence that object-motion associations can be learnt with repeated exposures, 3) sufficiently potent motions can yield preference for fluent objects after a single viewing, and 4) learnt associations do not transfer to situations where ratings follow a stationary presentation, even after deep levels of encoding. Episodic accounts of memory retrieval predict that emotional states experienced at encoding might be retrieved along with the stimulus properties. Yet although objects and motions were repeatedly paired, there was no evidence for emotional reinstatement when objects were seen stationary. This indicates that the retrieval process is a critical limiting factor when considering visuomotor fluency effects on behaviour. Such findings have real-world consequences. For example, a product advertised with high perceptual fluency might be preferred at the time, but this preference might not transfer to seeing the object on a shelf.
This study investigates whether mimicry of facial emotions is a stable response or can instead be modulated by memory of the context in which the emotion was initially observed, and therefore by the meaning of the expression. The study manipulated emotion consistency implicitly: a face expressing smiles or frowns was irrelevant and to be ignored while participants categorised target scenes. Some face identities always expressed emotions consistent with the scene (e.g., smiling with a positive scene), whilst others were always inconsistent (e.g., frowning with a positive scene). During this implicit learning of face identity and emotion consistency, there was evidence for encoding of face-scene emotion consistency, with slower RTs, a reduction in trust, and inhibited facial EMG for faces expressing incompatible emotions. However, in a later task where the faces were viewed expressing emotions with no additional context, there was no evidence for retrieval of prior emotion consistency, as mimicry of emotion was similar for consistent and inconsistent individuals. We conclude that facial mimicry can be influenced by current emotion context, but there is little evidence of learning, as subsequent mimicry of emotionally consistent and inconsistent faces is similar.