When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face.
Limits on the storage capacity of working memory have been investigated for decades, but the nature of those limits remains elusive. An important but largely overlooked consideration in this research concerns the relationship between the physical properties of stimuli used in visual working memory tasks and their psychological properties. Here, we show that the relationship between physical distance in stimulus space and the psychological confusability of items as measured in a perceptual task is non-linear. Taking into account this relationship leads to a parsimonious conceptualization of visual working memory, greatly simplifying the models needed to account for performance, allowing generalization to new stimulus spaces, and providing a mapping between tasks that have been thought to measure distinct qualities. In particular, performance across a variety of working memory tasks can be explained by a one-parameter model implemented within a signal detection framework. Moreover, despite the system-level distinctions between working and long-term memory, after taking into account psychological distance we find a strong affinity between the theoretical frameworks that guide both systems, as performance is accurately described using the same straightforward signal detection framework.
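The one-parameter signal-detection account described above can be illustrated with a short sketch. This is not the authors' code: the single free parameter is a memory-strength value d', the saturating exponential mapping from physical to psychological distance is an assumed illustrative form, and names such as `psychological_distance` are hypothetical.

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def psychological_distance(physical_distance, k=0.05):
    """Assumed non-linear (saturating) mapping from physical distance in
    stimulus space (e.g., degrees on a color wheel) to psychological
    confusability; the exponential form is an illustrative choice."""
    return 1.0 - math.exp(-k * physical_distance)

def p_correct_change_detection(physical_distance, d_prime):
    """Probability of correctly detecting a change, modeled as a yes/no
    signal-detection decision by an unbiased observer. The effective
    signal scales with psychological, not physical, distance."""
    signal = d_prime * psychological_distance(physical_distance)
    return normal_cdf(signal / 2.0)  # unbiased criterion midway between distributions

# Larger physical changes are easier to detect, but accuracy saturates
# because psychological distance saturates:
for delta in (10, 45, 90, 180):
    print(delta, round(p_correct_change_detection(delta, d_prime=3.0), 3))
```

Under this sketch, accuracy rises steeply for small physical changes and then levels off, which is the qualitative signature of a non-linear physical-to-psychological mapping combined with a single-strength signal-detection decision.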
The majority of research on visual memory has taken a compartmentalized approach, focusing exclusively on memory over shorter or longer durations, that is, visual working memory (VWM) or visual episodic long-term memory (VLTM), respectively. This tutorial provides a review spanning the two areas, with readers in mind who may be familiar with only one or the other. The review is divided into six sections. It starts by distinguishing VWM and VLTM from one another, in terms of how they are generally defined and their relative functions. This is followed by a review of the major theories and methods guiding VLTM and VWM research. The final section is devoted to identifying points of overlap and distinction across the two literatures to provide a synthesis that will inform future research in both fields. By more intimately relating methods and theories from VWM and VLTM to one another, new advances can be made that may shed light on the kinds of representational content and structure supporting human visual memory.
Pain perception temporarily exaggerates abrupt thermal stimulus changes, revealing a mechanism for nociceptive temporal contrast enhancement (TCE). Although the mechanism is unknown, a non-linear model with perceptual feedback accurately simulates the phenomenon. Here we test whether a mechanism in the central nervous system underlies thermal TCE. Our model successfully predicted an optimal stimulus, incorporating a transient temperature offset (step-up/step-down), with maximal TCE, resulting in psychophysically verified large decrements in pain response ("offset analgesia"; mean analgesia: 85%, n = 20 subjects). Next, this stimulus was delivered using two thermodes, one delivering the longer-duration baseline temperature pulse and the other superimposing a short higher-temperature pulse. The two stimuli were applied simultaneously either near or far on the same arm, or on opposite arms. Spatial separation across multiple peripheral receptive fields ensures that the composite stimulus timecourse is first reconstituted in the central nervous system. Following ipsilateral stimulus cessation on the high-temperature thermode, but before cessation of the low-temperature stimulus, properties of TCE were observed both for individual subjects and in group-mean responses. This demonstrates that a central integration mechanism is sufficient to evoke painful thermal TCE, an essential step in transforming transient afferent nociceptive signals into a stable pain perception.