Attention capture is often operationally defined as speeded search performance when an otherwise nonpredictive stimulus happens to be the target of a visual search. That is, if a stimulus captures attention, it should be searched with priority even when it is irrelevant to the task. Given this definition, only the abrupt appearance of a new object (see, e.g., Jonides & Yantis, 1988) and one type of luminance contrast change (Enns, Austen, Di Lollo, Rauschenberger, & Yantis, 2001) have been shown to strongly capture attention. We show that translating and looming stimuli also capture attention. This phenomenon does not occur for all dynamic events: We also show that receding stimuli do not attract attention. Although the sorts of dynamic events that capture attention do not fit neatly into a single category, we speculate that stimuli that signal potentially behaviorally urgent events are more likely to receive attentional priority.
Much of our interaction with the visual world requires us to isolate some currently important objects from other less important objects. This task becomes more difficult when objects move, or when our field of view moves relative to the world, requiring us to track these objects over space and time. Previous experiments have shown that observers can track a maximum of about 4 moving objects. A natural explanation for this capacity limit is that the visual system is architecturally limited to handling a fixed number of objects at once, a so-called "magical number 4" of visual attention. In contrast to this view, Experiment 1 shows that tracking capacity is not fixed. At slow speeds it is possible to track up to 8 objects, and yet there are fast speeds at which only a single object can be tracked. Experiment 2 suggests that the limit on tracking is related to the spatial resolution of attention. These findings suggest that the number of objects that can be tracked is primarily set by a flexibly allocated resource, which has important implications for the mechanisms of object tracking and for the relationship between object tracking and other cognitive processes.
When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face.
The brain has finite processing resources so that, as tasks become harder, performance degrades. Where do the limits on these resources come from? We focus on a variety of capacity-limited buffers related to attention, recognition, and memory that we claim have a two-dimensional ‘map’ architecture, where individual items compete for cortical real estate. This competitive format leads to capacity limits that are flexible, set by the nature of the content and their locations within an anatomically delimited space. We contrast this format with the standard ‘slot’ architecture and its fixed capacity. Using visual spatial attention and visual short-term memory as case studies, we suggest that competitive maps are a concrete and plausible architecture that limits cognitive capacity across many domains.
In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors: the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.
Despite years of research yielding systems and guidelines to aid visualization design, practitioners still face the challenge of identifying the best visualization for a given dataset and task. One promising approach to circumvent this problem is to leverage perceptual laws to quantitatively evaluate the effectiveness of a visualization design. Following previously established methodologies, we conduct a large-scale (n=1687) crowdsourced experiment to investigate whether the perception of correlation in nine commonly used visualizations can be modeled using Weber's law. The results of this experiment contribute to our understanding of information visualization by establishing that: (1) for all tested visualizations, the precision of correlation judgment could be modeled by Weber's law, (2) correlation judgment precision showed striking variation between negatively and positively correlated data, and (3) Weber models provide a concise means to quantify, compare, and rank the perceptual precision afforded by a visualization.
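The Weber-model approach described above can be sketched concretely. A common formulation in this line of work models the just-noticeable difference (JND) in correlation as falling off linearly with distance from r = 1, i.e. jnd(r) = k(1 - r), where the fitted slope k summarizes a visualization's perceptual precision. The snippet below is a minimal illustration of fitting such a model by least squares; the JND values are invented for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical JND data for correlation judgments in one visualization
# (illustrative values only, not taken from the experiment).
base_r = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
jnd = np.array([0.21, 0.18, 0.15, 0.12, 0.09, 0.06])

# Weber-style model: jnd(r) = k * (1 - r).
# Fit the slope k by least squares through the origin.
x = 1.0 - base_r
k = float(x @ jnd / (x @ x))

print(f"Weber fraction k = {k:.3f}")  # smaller k means higher precision
```

Under this assumed model, fitting one k per visualization gives a single number on which designs can be quantitatively compared and ranked, which is the spirit of point (3) in the abstract.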
The visual system relies on several heuristics to direct attention to important locations and objects. One of these mechanisms directs attention to sudden changes in the environment. Although a substantial body of research suggests that this capture of attention occurs only for the abrupt appearance of a new perceptual object, more recent evidence shows that some luminance-based transients (e.g., motion and looming) and some types of brightness change also capture attention. These findings show that new objects are not necessary for attention capture. The present study tested whether they are even sufficient. That is, does a new object attract attention because the visual system is sensitive to new objects or because it is sensitive to the transients that new objects create? In two experiments using a visual search task, new objects did not capture attention unless they created a strong local luminance transient.
Findings from studies of visual memory and change detection have revealed a surprising inability to detect large changes to scenes from one view to the next ('change blindness'). When some form of disruption is introduced between an original and modified display, observers often fail to notice the change. This disruption can take many forms (e.g., an eye movement, a flashed blank screen, a blink, or a cut in a motion picture) with similar results. In all cases, the changes are sufficiently large that, were they to occur instantaneously, they would consistently be detected. Prior research on change blindness was predicated on the assumption that, in the absence of a visual disruption, the signal caused by the change would draw attention, leading to detection. In two experiments, we demonstrate that change blindness can occur even in the absence of a visual disruption. In one experiment, subjects actually detected more changes with a disruption than without one. When changes are sufficiently gradual, the visible change signal does not seem to draw attention, and large changes can go undetected. The findings are discussed in the context of metacognitive beliefs about change detection and the strategic decisions those beliefs entail.