Since Chun and Jiang's (1998) original study, a large body of research based on the contextual cuing paradigm has shown that the visuocognitive system is capable of capturing certain regularities in the environment in an implicit way. The present study investigated whether regularities based on the semantic category membership of the context can be learned implicitly and whether that learning depends on attention. The contextual cuing paradigm was used with lexical displays in which the semantic category of the contextual words either did or did not predict the target location. Experiments 1 and 2 revealed that implicit contextual cuing effects extend to semantic category regularities. Experiments 3 and 4 indicated an implicit contextual cuing effect when the predictive context appeared in an attended color but not when it appeared in an ignored color. However, when the previously ignored context suddenly became attended, it immediately facilitated performance. In contrast, when the previously attended context suddenly became ignored, no benefit was observed. These results suggest that the expression of implicit semantic knowledge depends on attention but that latent learning can nevertheless take place outside the attentional field.
In this report, we examine whether and how altered aspects of perception and attention near the hands affect one's learning of to-be-remembered visual material. We employed the contextual cuing paradigm of visual learning in two experiments. Participants searched for a target embedded within images of fractals and other complex geometrical patterns while either holding their hands near to or far from the stimuli. When visual features and structural patterns remained constant across to-be-learned images (Exp. 1), no difference emerged between hand postures in the observed rates of learning. However, when to-be-learned scenes maintained structural pattern information but changed in color (Exp. 2), participants exhibited substantially slower rates of learning when holding their hands near the material. This finding shows that learning near the hands is impaired in situations in which common information must be abstracted from visually unique images, suggesting a bias toward detail-oriented processing near the hands.
The affect-as-information hypothesis (e.g., Schwarz & Clore, 2003) predicts that the positive or negative valence of our mood differentially affects our processing of the details of the environment. However, this hypothesis has only been tested with mood induction procedures and fairly complex cognitive tasks in humans. Here, six baboons (Papio papio) living in a social group had free access to a computerized visual search task on which they were over-trained. Trials that immediately followed a spontaneously expressed emotional behavior were analyzed, ruling out possible biases due to induction procedures. RTs following negatively valenced behaviors were slower than those following neutral or positively valenced behaviors. Thus, moods affect the performance of nonhuman primates tested in highly automatized tasks, as they do in humans during tasks with much higher cognitive demands. These findings reveal a presumably universal and adaptive mechanism by which moods influence performance in various ecological contexts.
Previous research using the contextual cuing paradigm has revealed both quantitative and qualitative differences in learning depending on whether repeated contexts are defined by letter arrays or real-world scenes. To clarify the relative contributions of visual features and semantic information likely to account for such differences, the typical contextual cuing procedure was adapted to use meaningless but nevertheless visually complex images. Reaction time and eye movement data show that, like scenes, such repeated contexts can trigger large, stable, and explicit cuing effects, and that those effects result from facilitated attentional guidance. However, as with simpler stimulus arrays, those effects were impaired by a sudden change in a repeating image's color scheme at the end of the learning phase (Experiment 1), or when the repeated images were presented in a different and unique color scheme on each presentation (Experiment 2). In both cases, search was driven by explicit memory. Collectively, these results suggest that semantic information is not required for conscious awareness of context-target covariation, but that it plays a primary role in overcoming variability in specific features within familiar displays.
Since the seminal study by Chun and Jiang (Cognitive Psychology, 36, 28-71, 1998), a large body of research based on the contextual-cueing paradigm has shown that the cognitive system is capable of extracting statistical contingencies from visual environments. Most of these studies have focused on how individuals learn regularities found within an intratrial temporal window: A context predicts the target position within a given trial. However, Ono, Jiang, and Kawahara (Journal of Experimental Psychology, 31, 703-712, 2005) provided evidence of an intertrial implicit-learning effect, in which the distractor configuration on trial N - 1 predicted the target location on trial N. The aim of the present study was to gain further insight into this effect by examining whether it occurs when predictive relationships are impeded by interfering task-relevant noise (Experiments 2 and 3) or by a long delay (Experiments 1, 4, and 5). Our results replicated the intertrial contextual-cueing effect, which occurred in the condition of temporally close contingencies. However, there was no evidence of integration across long-range spatiotemporal contingencies, suggesting a temporal limitation of statistical learning.