Chunk decomposition plays an important role in cognitive flexibility, particularly with regard to representational change, which is critical for insight problem solving and creative thinking. In this study, we investigated the cognitive mechanism of decomposing Chinese character chunks using a parametric fMRI design. This parametric manipulation revealed widely distributed activations in frontal, parietal, and occipital cortex, as well as negative activations in parietal and visual areas, in response to chunk tightness during decomposition. To mentally manipulate the elements of a given chunk, the superior parietal lobe appears to support element restructuring in a goal-directed way, whereas the negatively activated inferior parietal lobe may prevent irrelevant objects from being attended. Moreover, determining alternative ways of restructuring requires a constellation of frontal areas in the cognitive control network: the right lateral prefrontal cortex inhibits the predominant chunk representations, the presupplementary motor area initiates a transition of mental task set, and the inferior frontal junction establishes task sets. In conclusion, these findings suggest that chunk decomposition reflects a mental transformation of the problem representation from an inappropriate state to a new one, alongside an evaluation of novel and insightful solutions by the caudate in the dorsal striatum.
The visual world contains spatial regularities that are acquired through experience and guide attentional orienting. For instance, in visual search, detection of a target is faster when a layout of nontarget items is encountered repeatedly, suggesting that learned contextual associations can guide attention (contextual cuing). However, scene layouts sometimes change, requiring observers to adapt previous memory representations. Here, we investigated the long-term dynamics of contextual adaptation after a permanent change of the target location. We observed fast and reliable learning of initial context-target associations after just three repetitions. However, adaptation of acquired contextual representations to relocated targets was slow and effortful, requiring 3 days of training with 80 repetitions overall. A final test 1 week later revealed equivalent contextual-cuing effects for both target locations, comparable to the effects observed on day 1. That is, observers learned both initial and relocated target locations, given extensive training combined with extended periods of consolidation. Thus, while implicit contextual learning efficiently extracts statistical regularities of our environment at first, it is rather insensitive to change in the longer term, especially when subtle changes in context-target associations need to be acquired.
Visual search for a target object can be facilitated by the repeated presentation of an invariant configuration of nontargets ('contextual cueing'). Here, we tested adaptation of learned contextual associations after a sudden, but permanent, relocation of the target. After an initial learning phase, targets were relocated within their invariant contexts and repeatedly presented at new locations, before returning to their initial locations. Contextual cueing for relocated targets was observed neither after numerous presentations nor after insertion of an overnight break. Further experiments investigated whether learning of additional, previously unseen context-target configurations is comparable to adaptation of existing contextual associations to change. In contrast to the lack of adaptation to changed target locations, contextual cueing developed for additional invariant configurations under identical training conditions. Moreover, across all experiments, presenting relocated targets or additional contexts did not interfere with contextual cueing of initially learned invariant configurations. Overall, adaptation of contextual memory to changed target locations was severely constrained and unsuccessful in comparison to learning of an additional set of contexts, which suggests that contextual cueing facilitates search for only one repeated target location.
Invariant spatial context can facilitate visual search. For instance, detection of a target is faster if it is presented within a repeatedly encountered, as compared to a novel, layout of nontargets, demonstrating a role of contextual learning for attentional guidance ('contextual cueing'). Here, we investigated how context-based learning adapts to target location (and identity) changes. Three experiments were performed in which, in an initial learning phase, observers learned to associate a given context with a given target location. A subsequent test phase then introduced identity and/or location changes to the target. The results showed that contextual cueing could not compensate for target changes that were not 'predictable' (i.e. learnable). However, for predictable changes, contextual cueing remained effective even immediately after the change. These findings demonstrate that contextual cueing is adaptive to predictable target location changes. Under these conditions, learned contextual associations can be effectively 'remapped' to accommodate new task requirements.
Visual search for a target object is facilitated when it is repeatedly presented within an invariant context of surrounding items ('contextual cueing'; Chun & Jiang, 1998). The current study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we show that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one 'dominant' target always exhibited substantially more contextual cueing than the other 'minor' target, which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.