Recent work from several groups has shown that perception of various visual attributes in human observers at a given moment is biased towards what was recently seen. This positive serial dependency is a kind of temporal averaging that exploits short-term correlations in visual scenes to reduce noise and stabilize perception. Here we test for serial dependencies in the perception of head and eye direction using a simple reproduction method to measure perceived head/eye gaze direction in rapid sequences of briefly presented face stimuli. In a series of three experiments, our results reveal that perceived eye gaze direction shows a positive serial dependency for changes in eye direction along both the vertical and horizontal dimensions, although more strongly for horizontal gaze shifts. By contrast, we found no serial dependency at all for horizontal changes in head position. These findings show that a perception-stabilizing 'continuity field' operates on eye position, which is well known to be quite variable over short timescales, but not on the inherently more stable signal from head position.
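The reproduction logic behind this kind of analysis can be sketched in a few lines. The following is our minimal illustration, not the authors' analysis pipeline: regress each trial's reproduction error on the signed difference between the previous and current stimulus, so that a positive slope indicates attraction toward what was just seen. All function and variable names here are ours.

```python
# Sketch of a serial-dependence analysis for a reproduction task:
# a positive slope of error vs. (previous - current) stimulus difference
# indicates responses are pulled toward the preceding stimulus.

def serial_dependence_slope(stimuli, responses):
    xs, ys = [], []
    for t in range(1, len(stimuli)):
        xs.append(stimuli[t - 1] - stimuli[t])  # previous relative to current
        ys.append(responses[t] - stimuli[t])    # reproduction error
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den  # least-squares slope

# Simulated observer whose reports are pulled 20% toward the previous stimulus:
stimuli = [0.0, 10.0, -5.0, 20.0, 3.0, -12.0]
responses = [stimuli[0]] + [cur + 0.2 * (prev - cur)
                            for prev, cur in zip(stimuli, stimuli[1:])]
slope = serial_dependence_slope(stimuli, responses)  # recovers 0.2
```

Because the simulated errors are exactly proportional to the previous-stimulus difference, the fitted slope recovers the 20% pull precisely; real data would of course carry noise around such a slope.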
Attention and working memory are 2 key pillars of cognition. Despite much research, important aspects of the relationship between the 2 constructs remain poorly understood. Here we explore how similar the mechanisms that select and update working memory are to those that guide attention during perception, such as in visual search. We use a memory search task in which participants memorize a display of objects on a grid. During memory maintenance, participants are instructed to update the spatial positions of a subset of objects. The speed of the updating process should reflect the accessibility of the to-be-updated subset. Using this task, we explored whether landmark findings in visual search would hold true for memory search. In Experiment 1, we found a search asymmetry: it was easier to access memory representations defined by the presence of a feature than by its absence. In Experiment 2, we found target-distractor similarity effects: updating a single target was easier when the distractors were farther away in feature space. In Experiment 3, we found a feature versus conjunction benefit: access times were much faster for instructions to move objects defined by only 1 feature (e.g., all triangles) than by a conjunction of features (e.g., all red triangles). In Experiment 4, we found a set-size effect: update times increased with the number of items in memory, particularly for conjunctive stimuli. Taken together, our results suggest a common coding and selection scheme for working memory and perceptual representations.
Visual search for color is thought to be performed either using color-opponent processes or through the comparison of unique color categories. In the present study, we investigate these theories using displays with a red or green hue but varying levels of saturation. The linearly inseparable nature of these displays makes search for the midsaturated target inefficient. A genetic algorithm was employed that evolved the distractors in a search display to reveal the processes that people use to search for color. Results show that participants were able to search within only midsaturated red items, but not within only midsaturated green items, providing evidence for color categories: English has a basic color label for midsaturated red (i.e., pink), but not for midsaturated green. A follow-up experiment revealed that it was possible to search within midsaturated green items if the exact target color was primed before each trial. We therefore suggest that both priming and a unique color category increase the recognizability of the target color, which has been suggested to improve visual search performance.
Saccadic eye movements cause large-scale transformations of the image falling on the retina. Rather than starting visual processing anew after each saccade, the visual system combines post-saccadic information with visual input from before the saccade. Crucially, the relative contribution of each source of information is weighted according to its precision, consistent with principles of optimal integration. We reasoned that, if pre-saccadic input is maintained in a resource-limited store, such as visual working memory, its precision will depend on the number of items stored, as well as their attentional priority. Observers estimated the color of stimuli that changed imperceptibly during a saccade, and we examined where reports fell on the continuum between pre- and post-saccadic values. Bias toward the post-saccadic color increased with the set size of the pre-saccadic display, consistent with an increased weighting of the post-saccadic input as precision of the pre-saccadic representation declined. In a second experiment, we investigated whether transsaccadic memory resources are preferentially allocated to attentionally prioritized items. An arrow cue indicated one pre-saccadic item as more likely to be chosen for report. As predicted, valid cues increased response precision and biased responses toward the pre-saccadic color. We conclude that transsaccadic integration relies on a limited memory resource that is flexibly distributed between pre-saccadic stimuli.
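The precision-weighted integration principle invoked here can be written as a reliability-weighted average, in which each cue is weighted by its inverse variance. The following is a minimal numerical sketch of that general principle, not the authors' model; the values and names are ours.

```python
# Reliability-weighted (optimal) integration of pre- and post-saccadic
# estimates: each cue is weighted by its precision (inverse variance).
# As pre-saccadic precision drops (e.g., with larger set size), the
# post-saccadic cue dominates the combined estimate.

def integrate(pre_value, pre_sd, post_value, post_sd):
    w_pre = 1.0 / pre_sd**2
    w_post = 1.0 / post_sd**2
    combined = (w_pre * pre_value + w_post * post_value) / (w_pre + w_post)
    combined_sd = (1.0 / (w_pre + w_post)) ** 0.5
    return combined, combined_sd

# Example: a precise post-saccadic cue pulls the estimate toward itself.
est, sd = integrate(pre_value=10.0, pre_sd=4.0, post_value=20.0, post_sd=2.0)
# est == 18.0: much closer to the post-saccadic value of 20.0
```

Note that the combined standard deviation is always smaller than either input's, which is why integrating across the saccade is advantageous at all.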
We investigated orientation categories in the guidance of attention in visual search. In the first two experiments, participants had a limited amount of time to find a target line among distractor lines. We systematically varied the orientation of the target and the angular difference between the target and distractors. We found that vertical, horizontal, and 45° targets required the smallest target/distractor angular difference to be found reliably, and that the rate at which increases in target/distractor difference reduce search difficulty was independent of target identity. Unexpectedly, even when the angular difference between the target and distractors was large, search performance was never optimal when the target orientation was 45°. A third experiment investigated this unexpected finding by correlating target/distractor difference and error rate with performance on tasks that each measure a specific perceptual or cognitive ability. We found that the elevated error rate correlated with performance on stimulus recognition and identification tasks, while the target/distractor difference needed to detect the target reliably correlated with performance on a stimulus reproduction task. We conclude that the target/distractor difference reveals the number of orientation categories in visual search and, accordingly, that there are four such categories: two strong ones centred on 0° and 90° and two weak ones centred on 45° and 135°.
Attentional mechanisms in perception can operate over locations, features, or objects. However, people direct attention not only towards information in the external world, but also to information maintained in working memory. To what extent do perception and memory draw on similar selection properties? Here we examined whether principles of object-based attention can also hold true in visual working memory. Experiment 1 examined whether object structure guides selection independently of spatial distance. In a memory updating task, participants encoded two rectangular bars with colored ends before updating two colors during maintenance. Memory updates were faster for two equidistant colors on the same object than on different objects. Experiment 2 examined whether selection of a single object feature spreads to other features within the same object. Participants memorized two sequentially presented Gabors, and a retro-cue indicated which object and feature dimension (color or orientation) would be most relevant to the memory test. We found stronger effects of object selection than feature selection: accuracy was higher for the uncued feature in the same object than the cued feature in the other object. Together these findings demonstrate effects of object-based attention on visual working memory, at least when object-based representations are encouraged, and suggest shared attentional mechanisms across perception and memory.
In the present study, we examine how observers search complex displays. Participants were asked to search for a big red horizontal line among 119 distractor lines of various sizes, orientations, and colours, yielding 36 different feature combinations. To understand how people search in such a heterogeneous display, we evolved the search display using a genetic algorithm (Experiment 1). The best displays (i.e., those yielding the fastest reaction times) were selected and combined to create new, evolved displays. Search times declined over generations. Items sharing the target's colour and orientation disappeared over generations, implying that they interfered with search, whereas items sharing the target's colour but differing by 12.5° in orientation interfered only if they also matched the target's size. Furthermore, and inconsistent with the most dominant visual search theories, non-red horizontal distractors increased over generations, indicating that these distractors facilitated search for the big red horizontally oriented target. In Experiments 2 and 3, we replicated these results using conventional, factorial experiments. Interestingly, in Experiment 4, we found that this facilitation effect was present only when the displays were very heterogeneous. While current models of visual search successfully describe search in homogeneous displays, our results challenge their ability to describe visual search in heterogeneous environments.
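The select-recombine-mutate loop of such a display-evolving genetic algorithm can be sketched compactly. This is our toy illustration only: the real experiments used 36 feature combinations and fitness derived from participants' reaction times, which we replace here with a stand-in scoring function; all names and parameters are ours.

```python
import random

# Toy genetic algorithm evolving search displays. A display is a list of
# distractor "genes" (colour, orientation); fitness is a stand-in for
# (negative) reaction time.

FEATURES = [(colour, ori) for colour in ("red", "green")
            for ori in (0.0, 12.5, 90.0)]

def random_display(n=119):
    return [random.choice(FEATURES) for _ in range(n)]

def fitness(display):
    # Stand-in for observed search speed: displays with fewer target-like
    # distractors (red, near-horizontal) count as "easier".
    return -sum(1 for colour, ori in display
                if colour == "red" and ori < 45)

def evolve(population, n_keep=5, mutation_rate=0.05):
    # Keep the best displays, then recombine pairs of them gene by gene
    # and apply random mutations to produce the next generation.
    best = sorted(population, key=fitness, reverse=True)[:n_keep]
    next_gen = []
    for _ in range(len(population)):
        a, b = random.sample(best, 2)
        child = [random.choice(genes) for genes in zip(a, b)]
        child = [random.choice(FEATURES) if random.random() < mutation_rate
                 else g for g in child]
        next_gen.append(child)
    return next_gen

population = [random_display() for _ in range(20)]
for generation in range(10):
    population = evolve(population)
```

Over generations, the selection pressure strips target-like distractors out of the surviving displays, which is the logic the experiment exploits: whatever features vanish from evolved displays are the ones that were hurting search.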
When searching for a specific object, we often form an image of the target, which we use as a search template. This template is thought to be maintained in working memory, primarily because of evidence that the contents of working memory influence search behavior. However, it is unknown whether this interaction applies in both directions. Here, we show that changes in search templates influence working memory. Participants were asked to remember the orientation of a line that changed every trial; on most trials (75%) they searched for that orientation, and on the remaining trials they recalled it. Critically, we manipulated the target template by introducing a predictable context: distractors in the visual search task were always counterclockwise (or clockwise) from the search target. The predictable context produced a large bias in search. Importantly, we also found a similar bias in orientation memory reports, demonstrating that working memory and target templates were not held as completely separate, isolated representations. However, the memory bias was considerably smaller than the search bias, suggesting that, although there is a common source, the two may not be driven by a single, shared process.

...to working memory representations (Soto, Hodsoll, Rotshtein, & Humphreys, 2008), at least when targets change on a per-trial basis (Woodman, Luck, & Schall, 2007). Versions of this theory vary in the nature of the relationship between the two constructs. According to some theories, being stored in working memory may be necessary but not sufficient for a representation to be a template (Dube & Al-Aidroos, 2019; Hollingworth & Hwang, 2013); that is, templates require some additional top-down process, such as attention (Gunseli, Meeter, & Olivers, 2014; van Driel, Gunseli, Meeter, & Olivers, 2017).
There is evidence that attentional templates have properties independent of working memory representations, suggesting that the two can be dissociated (Carlisle & Woodman, 2011, 2013; Kerzel, 2019). However, most theories tend to favor a strong link between the two constructs. Evidence for the overlap of templates and working memory representations comes largely from studies showing that the contents of working memory influence attention, commonly referred to as memory-driven attentional capture (Downing, 2000; Olivers, Meijer, & Theeuwes, 2006; Soto, Heinke, Humphreys, & Blanco, 2005). Here we examine this interaction in the other direction. Previous research has found that memory representations improve as a result of visual search (Rajsic, Ouslis, Wilson, & Pratt, 2017; Williams, Henderson, & Zacks, 2005). However, it is difficult to disentangle the memory improvement caused by an item becoming the search template from the general improvement that comes from attending to an item for longer, either while it is present or within memory (i.e., the retro-cue effect; Griffin & Nobre, 2003). Instead, we directly ask whether changes in the target template influence memory reports. If target templates are equivalent to me...