The visual brain is remarkably adept at extracting invariant properties from the noisy environment, guiding the selection of where to look and what to identify. How it achieves this, however, is still poorly understood. Here we explore interactions of local context and global structure in the long-term learning and retrieval of invariant display properties. Participants searched for a target among distractors, without knowing that some "old" configurations were presented repeatedly (randomly interspersed among "new" configurations). We simulated tunnel vision, limiting the visible region around fixation. Robust facilitation of performance for old versus new contexts was observed when the visible region was large, but not when it was small. However, once the display was made fully visible during a subsequent transfer phase, facilitation did become manifest. Furthermore, when participants were given a brief preview of the total display layout prior to tunnel-view search with two items visible, facilitation was already obtained during the learning phase. The eye-movement results revealed contextual facilitation to be coupled with changes in saccadic planning, characterized by slightly extended gaze durations but a reduced number of fixations and shortened scan paths for old displays. Taken together, our findings show that invariant spatial display properties can be acquired from scarce para-/foveal information, whereas their effective retrieval for search guidance requires the availability (even if brief) of a certain extent of peripheral information.
In visual search, participants detect and subsequently discriminate targets more rapidly when these are embedded in repeatedly encountered distractor arrangements, an effect termed contextual cueing (Chun & Jiang, Cognitive Psychology, 36, 28–71, 1998). Whereas previous studies had explored contextual cueing exclusively in visual search, in the present study we examined the effect in tactile search, using a novel tactile search paradigm. Participants were equipped with vibrotactile stimulators attached to four fingers on each hand. A given search array consisted of four stimuli (i.e., two items presented to each hand), with the target being an odd-one-out feature singleton that differed in frequency (Exps. 1 and 2) or waveform (Exp. 3) from the distractor elements. Participants performed a localization (Exps. 1 and 2) or discrimination (Exp. 3) task, delivering their responses via foot pedals. In all three experiments, reaction times were faster when the arrangement of the distractor fingers predicted the target finger. Furthermore, participants were unable to explicitly discriminate repeated from nonrepeated tactile configurations (Exps. 2 and 3). This indicates that the tactile modality can mediate the formation of configural representations and use these representations to guide tactile search.
It is well established that statistical learning of visual target locations in relation to constantly positioned visual distractors facilitates visual search. In the present study, we investigated whether such a contextual-cueing effect would also work crossmodally, from touch onto vision. Participants responded to the orientation of a visual target singleton presented among seven homogeneous visual distractors. Four tactile stimuli, two to different fingers of each hand, were presented either simultaneously with or prior to the visual stimuli. The identity of the stimulated fingers provided the crossmodal context cue: in half of the trials, a given visual target location was consistently paired with a given tactile configuration. The visual stimuli were presented above the unseen fingers, ensuring spatial correspondence between vision and touch. We found no evidence of crossmodal contextual cueing when the two sets of items (tactile, visual) were presented simultaneously (Experiment 1). However, a reliable crossmodal effect emerged when the tactile distractors preceded the onset of the visual stimuli by 700 ms (Experiment 2). Crossmodal cueing disappeared again, though, when, after an initial learning phase, participants flipped their hands, making the tactile distractors appear at different positions in external space while their somatotopic positions remained unchanged (Experiment 3). In all experiments, participants were unable to explicitly discriminate learned from novel multisensory arrays. These findings indicate that search-facilitating context memory can be established across vision and touch. However, in order to guide visual search, the (predictive) tactile configurations must be remapped from their initial somatotopic format into a common external representational format.
Repeated encounters with abstract target-distractor letter arrangements lead to improved visual search for such displays. This contextual-cueing effect is attributed to incidental learning of display configurations. Whether observers can consciously access the memory underlying the cueing effect is still a controversial issue. The current study uses a novel recognition task and eye tracking to tackle this question. Experiment 1 investigated observers’ ability to recognize, or “generate,” the display quadrant that had contained the target in a previous search array when the target was substituted by a distractor element; it also examined where observers’ eye fixations fell while they freely viewed the recognition display, probing the link between the fixation pattern and explicit recognition judgments. Experiment 2 tested whether eye fixations serve a critical role in explicit retrieval from context memory. Experiment 3 asked whether fixations of the target region are critical for context-based facilitation of search reaction times to manifest. The results revealed longer fixational dwell times in the target quadrant for learned relative to foil displays. Further, explicit recognition was enhanced, and above chance level, when observers were made to fixate the target quadrant, as compared to when they were prevented from doing so. However, the manifestation of contextual cueing in visual search did not itself require fixations of the target quadrant. Moreover, contextual cueing of search reaction times was significantly correlated with both fixational dwell times and observers’ explicit generation performance. The results argue in favor of contextual cueing of visual search being the result of a single, explicit memory system, though one that may nevertheless receive support from separable (automatic vs. controlled) retrieval processes. Fixational eye movements, that is, the directed overt allocation of visual attention, provide an interface between these processes in contextual cueing.
Invariant spatial context can expedite visual search, an effect known as contextual cueing (e.g., Chun & Jiang, 1998). However, disrupting learned display configurations abolishes the effect. On current touch-based mobile devices, such as the iPad, icons are shuffled and remapped when the display mode is changed. Such remapping, however, also disrupts the spatial relationships between icons, which may hamper usability. In the present study, we examined the transfer of contextual cueing under four different display-remapping methods: position-order invariant, global rotation, local invariant, and central invariant. We used a full-icon landscape mode for training and both landscape and portrait modes for testing, to check whether the cueing transfers to portrait mode. The results showed transfer of contextual cueing, but only with the local-invariant and central-invariant remapping methods. We take the results to mean that the predictability of target locations is a crucial factor for the transfer of contextual cueing, and thus for icon-remapping design on mobile devices.