In this study we investigated the role of attention, sequence structure, and effector specificity in learning a structured sequence of actions. Experiment 1 demonstrated that simple structured sequences can be learned in the presence of attentional distraction. The learning is unaffected by variation in distractor task difficulty, and subjects appear unaware of the structure. The structured sequence knowledge transfers from finger production to arm production (Experiment 2), suggesting that sequence specification resides in an effector-independent system. Experiments 3 and 4 demonstrated that only structures with at least some unique associations (e.g., any association in Structure 15243... or 4 to 3 in Structure 143132...) can be learned under attentional distraction. Structures in which all items are repeated in different orders in different parts of the structure (e.g., Structure 132312...) require attention for learning. Such structures may require hierarchic representation, the construction of which takes attention.

One of the remarkable capabilities of humans is their ability to learn a variety of novel tasks involving complex motor sequences. They learn to play the violin, knit, serve tennis balls, and perform a variety of language tasks such as speaking, typing, writing, or producing sign. This study addresses three features that might be involved in such learning: attention, the structure of the sequence, and effector specificity. These three features will be discussed in succession.

Attention and Sequence Learning

A large variety of evidence indicates that attention is important in verbal learning. For example, the classic study by Peterson and Peterson (1959) showed that a numeric distractor task produced a dramatic loss in recall of short letter strings. Similarly, Fisk and Schneider (1984) found that judgments of the frequency of previously presented words dropped to chance level when the words were presented concurrently with a numeric distractor.
The learning was prevented even though the secondary numeric task was very different from the frequency judgment task. On the basis of these findings, Fisk and Schneider argued that general attentional resources are necessary for modifications of long-term memory. Does the learning of motor sequences also require attention? This question is especially relevant in light of the hypothesis that sequential learning can involve a different memory system, sometimes called procedural memory, than verbal learning or other declarative memory systems (cf. Mishkin &
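The distinction between learnable and unlearnable structures turns on whether any item has a unique first-order association. This can be illustrated with a minimal sketch (an illustration for this summary, not code from the original study): treat the structure as cyclically repeating, collect the set of possible successors for each item, and call an association unique when an item is always followed by the same successor.

```python
from collections import defaultdict

def successor_map(structure):
    """Map each item to the set of items that can follow it,
    treating the structure as cyclically repeating."""
    followers = defaultdict(set)
    for i, item in enumerate(structure):
        followers[item].add(structure[(i + 1) % len(structure)])
    return dict(followers)

def unique_associations(structure):
    """Return the item -> successor pairs that are unique, i.e.,
    the item is always followed by the same successor."""
    return {item: next(iter(succ))
            for item, succ in successor_map(structure).items()
            if len(succ) == 1}

# Structure 15243: every item has exactly one successor, so every
# association is unique.
# Structure 143132: 4 is always followed by 3 (a unique association),
# but 1 and 3 each have two possible successors.
# Structure 132312: every item is followed by more than one different
# item, so no first-order association is unique.
```

On this analysis, Structure 132312 offers no reliable pairwise cue, which is consistent with the claim that such structures demand a higher-order (hierarchic) representation whose construction requires attention.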
This article addresses 2 questions that arise from the finding that visual scenes are first parsed into visual features: (a) the accumulation of location information about objects during their recognition and (b) the mechanism for the binding of the visual features. The first 2 experiments demonstrated that when 2 colored letters were presented outside the initial focus of attention, illusory conjunctions between the color of one letter and the shape of the other were formed only if the letters were less than 1 degree apart. Separation greater than 2 degrees resulted in fewer conjunction errors than expected by chance. Experiments 3 and 4 showed that inside the spread of attention, illusory conjunctions between the 2 letters can occur regardless of the distance between them. In addition, these experiments demonstrated that the span of attention can expand or shrink like a spotlight. The results suggest that features inside the focus of attention are integrated by an expandable focal attention mechanism that conjoins all features that appear inside its focus. Visual features outside the focus of attention may be registered with coarse location information prior to their integration. Alternatively, a quick and imprecise shift of attention to the periphery may lead to illusory conjunctions among adjacent stimuli.
Cross-dimensional visual search for single-feature targets that differed from the distractors along two dimensions (color and orientation) was compared with intradimensional search for targets that differed from the distractors along a single dimension (either orientation or color). The design of the first three experiments differed from those of previous studies in that participants were required to respond differently to each of the targets. Experiments 1-3 were similar except that in Experiment 1, the distractors were homogeneous; in Experiment 2, two types of distractors were used in equal proportions; and in Experiment 3, two types of distractors were used but one of the distractors was a singleton. The findings, contrary to those of previous studies, revealed that cross-dimensional search is at least as efficient, and for some targets even more efficient, than intradimensional search. These results suggest that the details of stimulus-to-response mapping are essential in comparing intra- and cross-dimensional tasks. Experiment 4 used a priming design and did not support an explanation based on grouping processes. We outline an explanation for all the findings based on a recent cross-dimensional response selection model by Cohen and Shoup (1997).

In the visual search paradigm, participants are required to search for the presence of a target among a variable number of distractors. Previous studies have suggested that intradimensional search for single-feature targets that vary along a single dimension (e.g., all targets are defined by color) is more efficient than cross-dimensional search for single-feature targets (i.e., each target differs from the distractors along a different dimension). Common interpretations of these results have focused on differences in search processes between the tasks (e.g., Muller, Heller,
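Search "efficiency" in this paradigm is conventionally summarized as the slope of mean reaction time over display set size: shallower slopes mean more efficient search. The following sketch (a generic illustration of the measure, not an analysis from the study above) computes that slope by least squares.

```python
def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean RT (ms) against set size (items),
    in ms/item. Shallow slopes (~0 ms/item) suggest parallel
    'pop-out' search; steep slopes suggest serial search."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical data: RTs of 500, 520, and 560 ms at set sizes 4, 8,
# and 16 lie on the line RT = 480 + 5 * size, i.e., 5 ms/item.
```

Comparing such slopes between intra- and cross-dimensional conditions is what licenses statements like "cross-dimensional search is at least as efficient as intradimensional search."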
Studies of attentional capture by personally significant stimuli have yielded inconsistent results, possibly because of improper control of the participants' attention. In the present study, the authors controlled visual attention by using a Stroop-like task. Participants responded to a central color and ignored a word presented either centrally (i.e., at the focus of attention) or peripherally (i.e., outside the focus of attention). Central words led to slower reaction times and larger orienting responses for significant items than for neutral items. These effects largely disappeared when the words appeared in a peripheral location. The peripheral words interfered with performance when they were relevant to task demands. These results indicate that there is a fundamental difference between task-relevant words and personally significant words: The former capture attention even when presented peripherally, whereas the latter do not.
Four experiments used the visual search paradigm to examine feature integration mechanisms. Reaction time to determine the presence or absence of a conjunctive target is relatively fast and exhaustive for low-density displays. Search rate is slow and self-terminating for high-density displays. Density effects do not arise when the target is defined by a unique feature. Two mechanisms are proposed for feature integration. A fast mechanism integrates features on the basis of coarse location information coded with the initial registration of the features. This coarse location mechanism requires that display items be spaced apart. A second, slower mechanism is used when objects are clumped together. The 2-mechanism hypothesis provides a resolution to conflicting findings in the visual search and illusory-conjunction literature. A possible interpretation of the findings with a single guided search mechanism for feature integration is also discussed.
The functional competence of extrageniculate visual pathways in hemianopic humans was demonstrated by showing that distractor signals in the blind half of the visual field could inhibit saccades toward targets in the intact visual field. This inhibitory effect of unseen distractors in patients occurred only when distractors were presented in the temporal half of the visual field, was specific to oculomotor responses, and did not occur in normal subjects. These results show that a peripheral visual signal activates retinotectal pathways to prime the oculomotor system and that these pathways can mediate orienting behavior in hemianopic humans.