Despite evidence pointing to a ubiquitous tendency of human minds to wander, little is known about the neural operations that support this core component of human cognition. Using both thought sampling and brain imaging, the current investigation demonstrated that mind-wandering is associated with activity in a default network of cortical regions that are active when the brain is "at rest." In addition, individuals' reports of the tendency of their minds to wander were correlated with activity in this network.
Human learning is a complex phenomenon requiring flexibility to adapt existing brain function and precision in selecting new neurophysiological activities to drive desired behavior. These two attributes, flexibility and selection, must operate over multiple temporal scales as performance of a skill changes from being slow and challenging to being fast and automatic. Such selective adaptability is naturally provided by modular structure, which plays a critical role in evolution, development, and optimal network function. Using functional connectivity measurements of brain activity acquired from initial training through mastery of a simple motor skill, we investigate the role of modularity in human learning by identifying dynamic changes of modular organization spanning multiple temporal scales. Our results indicate that flexibility, which we measure by the allegiance of nodes to modules, in one experimental session predicts the relative amount of learning in a future session. We also develop a general statistical framework for the identification of modular architectures in evolving systems, which is broadly applicable to disciplines where network adaptability is crucial to the understanding of system performance.

Keywords: complex network | time-dependent network | fMRI | motor learning | community structure

The brain is a complex system, composed of many interacting parts, which dynamically adapts to a continually changing environment over multiple temporal scales. Over relatively short temporal scales, rapid adaptation and continuous evolution of those interactions or connections form the neurophysiological basis for behavioral adaptation or learning. At small spatial scales, stable neurophysiological signatures of learning have been best demonstrated in animal systems at the level of individual synapses between neurons (1-3).
At a larger spatial scale, it is also well known that specific regional changes in brain activity and effective connectivity accompany many forms of learning in humans, including the acquisition of motor skills (4, 5). Learning-associated adaptability is thought to stem from the principle of cortical modularity (6). Modular, or nearly decomposable (7), structures are aggregates of small subsystems (modules) that can perform specific functions without perturbing the remainder of the system. Such structure provides a combination of compartmentalization and redundancy, which reduces the interdependence of components, enhances robustness, and facilitates behavioral adaptation (8, 9). Modular organization also confers evolvability on a system by reducing constraints on change (8, 10-12). Indeed, a putative relationship between modularity and adaptability in the context of human neuroscience has recently been posited (13, 14). To date, however, the existence of modularity in large-scale cortical connectivity during learning has not been tested directly. Based on the aforementioned theoretical and empirical grounds, we hypothesized that the principle of modularity would characterize the fundamental organiz...
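The node "flexibility" statistic mentioned in this abstract can be sketched in a few lines. The sketch below assumes the commonly used definition (the fraction of consecutive time windows in which a node changes its module assignment) and operates on a toy label array; it is an illustration of the statistic, not the authors' actual analysis pipeline:

```python
import numpy as np

def node_flexibility(partitions):
    """Flexibility per node from a (T, N) array of module labels.

    partitions[t, i] is the module label of node i in time window t.
    Flexibility is the fraction of consecutive windows in which a node
    changes its module allegiance (a common definition; assumed here).
    """
    partitions = np.asarray(partitions)
    changes = partitions[1:] != partitions[:-1]  # (T-1, N) boolean switch matrix
    return changes.mean(axis=0)                  # fraction of switches per node

# Toy example: 4 time windows, 3 nodes (labels are hypothetical)
labels = [[0, 0, 1],
          [0, 1, 1],
          [0, 1, 0],
          [0, 0, 0]]
flex = node_flexibility(labels)
# Node 0 never switches; node 1 switches in 2 of 3 transitions; node 2 in 1 of 3.
```

A node that keeps the same allegiance throughout has flexibility 0; a node that changes module at every step has flexibility 1.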
Cognitive function is driven by dynamic interactions between large-scale neural circuits or networks, enabling behaviour. However, fundamental principles constraining these dynamic network processes have remained elusive. Here we use tools from control and network theories to offer a mechanistic explanation for how the brain moves between cognitive states drawn from the network organization of white matter microstructure. Our results suggest that densely connected areas, particularly in the default mode system, facilitate the movement of the brain to many easily reachable states. Weakly connected areas, particularly in cognitive control systems, facilitate the movement of the brain to difficult-to-reach states. Areas located on the boundary between network communities, particularly in attentional control systems, facilitate the integration or segregation of diverse cognitive systems. Our results suggest that structural network differences between cognitive circuits dictate their distinct roles in controlling trajectories of brain network function.
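As a rough illustration of the kind of controllability statistic such network-control analyses rely on, here is a NumPy sketch of per-node average controllability for discrete linear dynamics x(t+1) = A x(t) + B u(t), where trace of the controllability Gramian for B = e_i reduces to the sum over k of ||A^k e_i||^2. The stabilizing rescaling of A and the finite horizon are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def average_controllability(A, horizon=100):
    """Per-node average controllability of x(t+1) = A x(t) + B u(t).

    A is a structural adjacency matrix, rescaled here so its spectral
    radius is below 1 (a common stabilization choice; assumed, not the
    paper's exact normalization). With B = e_i (control injected at node
    i alone), trace of the controllability Gramian equals
    sum_k ||A^k e_i||^2, accumulated up to a finite horizon.
    """
    A = np.asarray(A, dtype=float)
    A = A / (1.0 + np.max(np.abs(np.linalg.eigvals(A))))  # spectral radius < 1
    n = A.shape[0]
    ac = np.zeros(n)
    Ak = np.eye(n)
    for _ in range(horizon):
        ac += (Ak ** 2).sum(axis=0)  # column i holds A^k e_i, so this adds ||A^k e_i||^2
        Ak = Ak @ A
    return ac

# Toy example: a star graph with node 0 as the hub.
A_star = np.array([[0, 1, 1, 1],
                   [1, 0, 0, 0],
                   [1, 0, 0, 0],
                   [1, 0, 0, 0]], dtype=float)
ac = average_controllability(A_star)
```

Consistent with the abstract's claim about densely connected areas, the hub of the star attains a higher average controllability than the leaves.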
The registration algorithm described is a robust and flexible tool that can be used to address a variety of image registration problems. Registration strategies can be tailored to meet different needs by optimizing tradeoffs between speed and accuracy.
Distributed networks of brain areas interact with one another in a time-varying fashion to enable complex cognitive and sensorimotor functions. Here we used new network-analysis algorithms to test the recruitment and integration of large-scale functional neural circuitry during learning. Using functional magnetic resonance imaging data acquired from healthy human participants, we investigated changes in the architecture of functional connectivity patterns that promote learning from initial training through mastery of a simple motor skill. Our results show that learning induces an autonomy of sensorimotor systems and that the release of cognitive control hubs in frontal and cingulate cortices predicts individual differences in the rate of learning on other days of practice. Our general statistical approach is applicable across other cognitive domains and provides a key to understanding time-resolved interactions between distributed neural circuits that enable task performance.
We describe techniques for the robust detection of community structure in some classes of time-dependent networks. Specifically, we consider the use of statistical null models for facilitating the principled identification of structural modules in semi-decomposable systems. Null models play an important role both in the optimization of quality functions such as modularity and in the subsequent assessment of the statistical validity of identified community structure. We examine the sensitivity of such methods to model parameters and show how comparisons to null models can help identify system scales. By considering a large number of optimizations, we quantify the variance of network diagnostics over optimizations ("optimization variance") and over randomizations of network structure ("randomization variance"). Because the modularity quality function typically has a large number of nearly degenerate local optima for networks constructed using real data, we develop a method to construct representative partitions that uses a null model to correct for statistical noise in sets of partitions. To illustrate our results, we employ ensembles of time-dependent networks extracted from both nonlinear oscillators and empirical neuroscience data.

Many social, physical, technological, and biological systems can be modeled as networks composed of numerous interacting parts [1]. As an increasing amount of time-resolved data has become available, it has become increasingly important to develop methods to quantify and characterize dynamic properties of temporal networks [2]. Generalizing the study of static networks, which are typically represented using graphs, to temporal networks entails the consideration of nodes (representing entities) and/or edges (representing ties between entities) that vary in time. As one considers data with more complicated structures, the appropriate network analyses must become increasingly nuanced.
In the present paper, we discuss methods for algorithmic detection of dense clusters of nodes (i.e., communities) by optimizing quality functions on multilayer network representations of temporal networks [3, 4]. We emphasize the development and analysis of different types of null-model networks, whose appropriateness depends on the structure of the networks one is studying, as well as the construction of representative partitions that take advantage of a multilayer network framework. To illustrate our ideas, we use ensembles of time-dependent networks from the human brain and human behavior.
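The role of the null model inside the modularity quality function can be made concrete with a minimal implementation of the static, single-layer Newman-Girvan modularity, Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j), where the k_i*k_j/(2m) term is the expected edge weight under a configuration-model null. This is a deliberate simplification of the multilayer quality functions discussed above, for illustration only:

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity of a partition of an undirected graph.

    A is a symmetric adjacency matrix; labels[i] is the community of node i.
    The subtracted term k_i * k_j / (2m) is the configuration-model null:
    the expected weight of edge (i, j) given the observed degree sequence.
    """
    A = np.asarray(A, dtype=float)
    labels = np.asarray(labels)
    k = A.sum(axis=1)                 # node degrees (strengths)
    two_m = k.sum()                   # 2m: total degree
    same = labels[:, None] == labels[None, :]  # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Toy example: two disconnected triangles with the natural partition.
tri2 = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    tri2[a, b] = tri2[b, a] = 1.0
q_good = modularity(tri2, [0, 0, 0, 1, 1, 1])  # natural split, high Q
q_flat = modularity(tri2, [0, 0, 0, 0, 0, 0])  # single community, Q = 0
```

Putting all nodes in one community always yields Q = 0, since the observed and null terms then sum over the same pairs; partitions score well only insofar as they beat the null model's expectation.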
Pleasant or aversive events are better remembered than neutral events. Emotional enhancement of episodic memory has been linked to the amygdala in animal and neuropsychological studies. Using positron emission tomography, we show that bilateral amygdala activity during memory encoding is correlated with enhanced episodic recognition memory for both pleasant and aversive visual stimuli relative to neutral stimuli, and that this relationship is specific to emotional stimuli. Furthermore, data suggest that the amygdala enhances episodic memory in part through modulation of hippocampal activity. The human amygdala seems to modulate the strength of conscious memory for events according to emotional importance, regardless of whether the emotion is pleasant or aversive.