The notion that inhibitory processes play a critical role in selective attention has gained wide support. Much of this support derives from studies of negative priming. The authors note that the attribution of negative priming to an inhibitory mechanism of attention draws its support from a common assumption underlying priming procedures, together with the procedure that has been used to measure negative priming. The results from a series of experiments demonstrate that selection between 2 competing prime items is not required to observe negative priming. This result is demonstrated across several experiments in which participants named 1 of 2 items in a second display following presentation of a single-item prime. The implications of these results for existing theories of negative priming are discussed, and a theoretical framework for interpreting negative priming and several related phenomena is forwarded.
Selective visual attention can strongly influence perceptual processing, even for apparently low-level visual stimuli. Although it is largely accepted that attention modulates neural activity in extrastriate visual cortex, the extent to which attention operates in the first cortical stage, striate visual cortex (area V1), remains controversial. Here, functional MRI was used at high field strength (3 T) to study humans during attentionally demanding visual discriminations. Similar, robust attentional modulations were observed in both striate and extrastriate cortical areas. Functional mapping of cortical retinotopy demonstrates that attentional modulations were spatially specific, enhancing responses to attended stimuli and suppressing responses when attention was directed elsewhere. The spatial pattern of modulation reveals a complex attentional window that is consistent with object-based attention but is inconsistent with a simple attentional spotlight. These data suggest that neural processing in V1 is not governed simply by sensory stimulation, but, like extrastriate regions, V1 can be strongly and specifically influenced by attention.

It has long been appreciated that selective attention can dramatically affect high-level visual perception (1). More recently, attention has been shown to influence low-level visual phenomena such as luminance detection (2, 3), motion perception (4, 5), orientation discrimination (6), contour detection (7), hyperacuity (6), and even "preattentive" visual search (8). These modulations of perception appear to result from selective spatial attention, because they depend on the location of directed attention. These studies exploited the fact that attention and eye position need not be directed to the same location; that is, attention may be covert (2). Under the same fixation conditions attention may be either directed toward a test stimulus, directed elsewhere, or not directed.
Striking differences have been revealed when performance under directed attention is compared with performance when attention is engaged in a highly distracting task, such as identifying letters in a rapid serial visual presentation (RSVP) stream (5, 8, 9). These phenomena indicate that attention operates at low levels of visual processing, but they do not identify the specific cortical areas in which processing is influenced by attention. This question of the locus of selection is fundamental to the cognitive neuroscience of attention. Theories suggest that processing of the attended representation is enhanced and/or that processing of unattended representations is suppressed. Enhancement and suppression may act directly on cells in the lower tiers of visual cortex that code retinotopic location or may act at higher cortical areas (10). One long-held theory, the spotlight model of spatial selection (2, 11), suggests that attention is directed to a connected visual field region that contains no topological holes. A competing theory, object-based selection (12), suggests that attention is directed to ...
If two targets (T1 and T2) are to be identified among other stimuli displayed in rapid serial visual presentation (RSVP), correct identification of T1 can produce an attentional blink (AB) lasting several hundred milliseconds, during which detection of T2 is impaired. Experiment 1 confirmed that omission of the item directly following T1 (the +1 item) reduces the AB (J. E. Raymond, K. L. Shapiro, & K. M. Arnell, 1992). The next 3 experiments varied the spatial and temporal relationships between T1 and the +1 item to study how masking of T1 affects the AB deficit. Perception of T1 was impaired by pattern masking arising from temporal integration or superimposition of T1 and the +1 item; it was also impaired by metacontrast masking. We conclude that masking affects the AB indirectly by degrading T1, thereby increasing the duration of T1 processing. A 2-stage model proposed by M. M. Chun and M. C. Potter (1995) is supported.

Sequential visual stimuli, as might be generated from objects in motion or from eye movements in everyday viewing, can pose a problem for human information processing. If new stimuli arrive at a rate that exceeds the visual system's temporal capabilities, some may fail to be perceived. With stimuli shown in a rapid serial visual presentation (RSVP), this failure has been attributed to attentional processes: If two targets are to be identified among distractors in an RSVP stream, correct identification of the first target (T1) may produce an attentional blink (AB) lasting several hundred milliseconds, during which detection of the second target (T2) is impaired (Raymond, Shapiro, & Arnell, 1992). In the present work we show that, in addition to high-level attentional processes, low-level visual processes, notably masking by pattern and metacontrast masking, contribute to the AB deficit. Such low-level processes provide an explanatory basis for several phenomena that cannot be accounted for directly by current theories.
The effect of low-level processes can be studied by examining the role played by the distractor that comes directly after T1 in the RSVP stream. Presence of a distractor item in the frame directly following T1 has been regarded as essential for obtaining an AB deficit. If that item (known as the +1 item) is replaced by a
This experiment investigated where participants tend to look while keeping track of multiple objects, much like the eye movements you might make when watching a sports game. While eye movements were recorded, participants tracked either 1 or 3 of 8 red dots that moved randomly within a square box on a black background. Results indicated that participants fixated closer to targets more often than to distractors. However, on 3-target trials, fixation was closer to the center of the triangle formed by the targets more often than to any individual target. This center-looking strategy seemed to reflect that people were grouping the targets into a single object rather than simultaneously minimizing all target eccentricities. Here we find that observers deliberately focus their eyes on a location different from those of the objects they are attending, perhaps as a consequence of representing those objects as a group.
Attentional demands of multiple-object tracking were demonstrated using a dual-task paradigm. Participants were asked to make speeded responses based on the pitch of a tone, while at the same time tracking four of eight identical dots. Tracking difficulty was manipulated either concurrent with or after the tone task. If increasing tracking difficulty increases attentional demands, its effect should be larger when it occurs concurrent with the tone. In Experiment 1, tracking difficulty was manipulated by having all dots briefly attract one another on some trials, causing a transient increase in dot proximity and speed. Results showed that increasing proximity and speed had a significantly larger effect when it occurred at the same time as the tone task. Experiments 2 and 3 showed that manipulating either proximity or speed independently was sufficient to produce this pattern of results. Experiment 4 manipulated object contrast, which affected tracking performance equally whether it occurred concurrent with or after the tone task. Overall, results support the view that the moment-to-moment tracking of multiple objects demands attention. Understanding what factors increase the attentional demands of tracking may help to explain why tracking is sometimes successful and at other times fails.
Motion detection can be achieved either by mechanisms sensitive to a target's velocity or by mechanisms sensitive to changes in a target's position. Using a procedure provided by Nakayama and Tyler (Vis Res 1981;21:427-433) to dissociate these two mechanisms, we explored detection of first-order (luminance-based) and various second-order (texture-based and stereo-based) motion. In the first experiment, observers viewed annular gratings oscillating in rotational motion at various rates. For each oscillation temporal frequency, we determined the minimum displacement of the pattern for which observers could reliably see motion. For first-order motion, these motion detection thresholds decreased with increasing temporal frequency, and thus were determined by a minimum velocity. In contrast, motion detection thresholds for second-order motion remained roughly constant across temporal frequency, and thus were determined by a minimum displacement. In Experiment 2, luminance-based gratings of different contrasts were tested to show that the velocity-dependence was not an artifact of pattern visibility. In the remaining experiments, results similar to Experiment 1 were obtained with a central presentation of a linear grating, instead of an annular grating (Experiment 3), and with a motion discrimination (phase discrimination) rather than motion detection task (Experiment 4). We conclude that, within the ranges tested here, second-order motion is more readily detected with a mechanism that tracks the change of position of features over time.
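The dissociation logic can be made concrete with a minimal sketch. For an oscillating pattern, peak velocity scales with displacement times temporal frequency, so a velocity-limited detector predicts displacement thresholds that fall as 1/frequency, while a position-tracking detector predicts a flat displacement threshold. The threshold constants below are illustrative assumptions, not values from the study:

```python
# Hedged sketch of the velocity-vs-displacement dissociation described
# above. V_MIN and D_MIN are arbitrary illustrative constants, not
# measured values.

V_MIN = 1.0   # assumed minimum detectable velocity (arbitrary units)
D_MIN = 0.5   # assumed minimum detectable displacement (arbitrary units)

def velocity_limited_threshold(temporal_frequency_hz):
    """Peak velocity of an oscillation scales with displacement *
    frequency, so a fixed velocity floor implies a displacement
    threshold that falls as 1/frequency (first-order pattern)."""
    return V_MIN / temporal_frequency_hz

def displacement_limited_threshold(temporal_frequency_hz):
    """A position-tracking mechanism needs a fixed minimum displacement,
    independent of oscillation rate (second-order pattern)."""
    return D_MIN

freqs = [1, 2, 4, 8]
first_order_pred = [velocity_limited_threshold(f) for f in freqs]    # falls with f
second_order_pred = [displacement_limited_threshold(f) for f in freqs]  # flat
```

Plotting the two predictions against temporal frequency reproduces the signature pattern reported in Experiment 1: a declining curve for first-order motion and a flat line for second-order motion.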
Motion of an object is thought to be perceived independently of the object's surface properties. However, theoretical, neuropsychological and psychophysical observations have suggested that motion of textures, called 'second-order motion', may be processed by a separate system from luminance-based, or 'first-order', motion. Functional magnetic resonance imaging (fMRI) responses during passive viewing, attentional modulation and post-adaptation motion after-effects (MAE) of these stimuli were measured in seven retinotopic visual areas (labeled V1, V2, V3, VP, V4v, V3A and LO) and the motion-sensitive area MT/MST (V5). In all visual areas, responses were strikingly similar to motion of first- and second-order stimuli. These results differ from a prior investigation, because here the motion-specific responses were isolated. Directing attention towards and away from the motion elicited equivalent response modulation for the two types. Dramatic post-adaptation (MAE) differences in perception of the two stimuli were observed and fMRI activation mimicked perceptual changes, but did not reveal the processing differences. In fact, no visual area was found to respond selectively to the motion of second-order stimuli, suggesting that motion perception arises from a unified motion detection system.
People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target–distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking—one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone.
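The "planet and moon" construction that decouples speed from proximity can be sketched as follows. A local point orbits the screen center, and the target and distractor sit on opposite sides of that point; the local radius sets target-distractor proximity while the local rotation rate sets object speed, so the two can be varied independently. All radii and rates below are illustrative assumptions, not the study's parameters:

```python
import math

# Hedged sketch of the epicyclic trajectories described above: each
# target-distractor pair revolves about a local point, which itself
# revolves about the screen center. Parameter values are illustrative.

def pair_positions(t, global_radius=200.0, global_rate=0.2,
                   local_radius=40.0, local_rate=1.5):
    """Return (target_xy, distractor_xy) at time t (seconds).

    local_radius controls target-distractor proximity and local_rate
    controls object speed, allowing independent manipulation."""
    # local point ("planet") orbiting the screen center
    cx = global_radius * math.cos(global_rate * t)
    cy = global_radius * math.sin(global_rate * t)
    # target and distractor ("moons") on opposite sides of the local point
    tx = cx + local_radius * math.cos(local_rate * t)
    ty = cy + local_radius * math.sin(local_rate * t)
    dx = cx - local_radius * math.cos(local_rate * t)
    dy = cy - local_radius * math.sin(local_rate * t)
    return (tx, ty), (dx, dy)
```

By construction, the target-distractor separation stays fixed at twice the local radius at every instant, while the objects' speed can be raised or lowered by changing the local rotation rate alone.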