InfoMax and FastICA are the independent component analysis algorithms most used and apparently most effective for brain fMRI. We show that this is linked to their ability to handle effectively sparse components rather than independent components as such. The mathematical design of better analysis tools for brain fMRI should thus emphasize characteristics other than independence.

Independent component analysis (ICA), a framework for separating a mixture of different components into its constituents, has been proposed for many applications, including functional magnetic resonance imaging (fMRI) (1-3). Separating a signal mixture into its components is impossible in general; however, many cases of interest allow for special underlying assumptions that make the problem tractable. ICA algorithms decompose a mixture into a sum of signals that "optimize independence." Several algorithms and software packages are available for ICA (4, 5).

The first blind source separation in fMRI via ICA used InfoMax (1); other ICA algorithms for fMRI followed, such as FastICA (2, 5). These algorithms work well if the components have "generalized Gaussian" distributions of the form p(x) = C exp(−α|x|^γ) with γ < 2, i.e., distributions sparser than a Gaussian. More general ICA algorithms, which assume less about the components, can separate into independent components mixtures for which InfoMax and FastICA fail. Nevertheless, these two are the most used ICA algorithms for brain fMRI.

Two stochastic processes are independent if the distribution of either remains the same when the other is conditioned on any subregion of its range. Detecting deviations from independence requires large samples. In fMRI experiments, brain activity is measured in small volumetric regions, or voxels, v ∈ V, at times t_n, n = 1, ..., N. (fMRI measures brain function via the associated increase of oxygen-enriched blood flow. The hemodynamic response function is the flow's time profile for one pulse of brain activity; from a signal-analysis point of view, it blurs the signal in time.) Often the voxels far outnumber the N time points. Thus, one often prefers to view the voxel index v as labeling the samples over which independence is sought (spatial ICA, or SICA), rather than the t_n (temporal ICA, or TICA).

In the linear model for brain activation (6), the total brain activity X(t, v) is assumed to be a linear superposition of the different ongoing brain activity patterns:

X(t, v) = Σ_k M_k(t) C_k(v),

where the C_k correspond to the brain activity patterns, and the "mixing matrix" M gives the corresponding time courses. At high signal amplitudes, saturation effects "spoil" linearity; nevertheless, the linear model is remarkably effective. We shall stick to it here.

Typically, the brain function under study is turned "off" and "on" by having subjects perform a task during defined periods, punctuated by either resting states or other tasks. The activation map of interest C_act(v), associated with a time course M_act(t) related to the task paradigm, is then identified via a statistical analysis. When a strict paradigm is not possible, or to capture more compl...
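To make the spatial-ICA reading of the linear model concrete, here is a minimal sketch (not from refs. 1-6) using scikit-learn's FastICA on synthetic data. The dimensions, the Laplace-distributed spatial maps (a sparse generalized Gaussian with γ = 1), and all variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_time, n_voxels, n_comp = 100, 2000, 3   # N time points, |V| voxels, components

# Sparse spatial maps C_k(v): Laplace draws are heavy-tailed (gamma = 1 < 2).
C = rng.laplace(size=(n_comp, n_voxels))
M = rng.normal(size=(n_time, n_comp))     # mixing matrix: time courses M_k(t)
X = M @ C                                 # linear model: X(t, v) = sum_k M_k(t) C_k(v)

# Spatial ICA: voxels are the samples, so FastICA sees the voxels-by-time matrix.
ica = FastICA(n_components=n_comp, whiten="unit-variance", random_state=0)
C_hat = ica.fit_transform(X.T).T          # estimated spatial maps, up to order/scale
M_hat = ica.mixing_                       # estimated time courses

print(C_hat.shape, M_hat.shape)           # (3, 2000) (100, 3)
```

Transposing X turns the many voxels into samples, which is exactly the SICA choice described above; temporal ICA would instead feed X directly, treating the (far fewer) time points as samples.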
The local field potential (LFP) is a population measure, mainly reflecting local synaptic activity. Beta oscillations (12-40 Hz) occur in motor cortical LFPs, but their functional relevance remains controversial. Power modulation studies have related beta oscillations to a "resting" motor cortex, postural maintenance, attention, sensorimotor binding, and planning; frequency modulations have been largely overlooked. Here we describe context-related beta frequency modulations in motor cortical LFPs. Two monkeys performed a reaching task with two delays: the first delay demanded attention in time in expectation of the visual spatial cue, whereas the second involved visuomotor integration and movement preparation. The frequency in two beta bands (around 20 and 30 Hz) was systematically 2-5 Hz lower during cue expectancy than during visuomotor integration and preparation. Furthermore, the frequency was directionally selective during preparation, with about a 3 Hz difference between preferred and nonpreferred directions. Direction decoding with frequency gave accuracy similar to that obtained with beta power, and decoding accuracy improved significantly when power and frequency were combined, suggesting that frequency might provide an additional signal for brain-machine interfaces. In conclusion, multiple beta bands coexist in motor cortex, and frequency modulations within each band are as behaviorally meaningful as power modulations, reflecting the changing behavioral context and the movement direction during preparation.
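As an illustration of how a beta peak frequency and band power might be extracted from an LFP segment before decoding, here is a minimal Welch-PSD sketch; the sampling rate, window choices, and the synthetic 25 Hz signal are assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy.signal import welch

def beta_peak_features(lfp, fs, band=(12.0, 40.0)):
    """Peak frequency (Hz) and band power in a beta band from one LFP segment."""
    freqs, psd = welch(lfp, fs=fs, nperseg=int(fs))      # ~1 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak_freq = freqs[mask][np.argmax(psd[mask])]        # frequency feature
    power = float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))  # power feature
    return peak_freq, power

# Toy usage: a 25 Hz oscillation in noise, sampled at 1 kHz for 2 s.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 25.0 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
print(beta_peak_features(lfp, fs))   # peak near 25 Hz
```

Per-trial pairs of (peak_freq, power) like these could then be fed to any classifier to compare frequency-only, power-only, and combined decoding.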
Functional magnetic resonance imaging (fMRI) exploits blood-oxygen-level-dependent (BOLD) contrast to map neural activity associated with a variety of brain functions, including sensory processing, motor control, and cognitive and emotional functions. The general linear model (GLM) approach is used to reveal task-related brain areas by searching for linear correlations between the fMRI time course and a reference model. One limitation of the GLM approach is the assumption that the covariance across neighbouring voxels is not informative about the cognitive function under examination. Multivoxel pattern analysis (MVPA) is a promising technique currently exploited to investigate the information contained in distributed patterns of neural activity and to infer the functional role of brain areas and networks. MVPA is treated as a supervised classification problem in which a classifier attempts to capture the relationships between spatial patterns of fMRI activity and experimental conditions. In this paper, we review MVPA and describe the mathematical basis of the classification algorithms used for decoding fMRI signals, such as support vector machines (SVMs). In addition, we describe the workflow of processing steps required for MVPA, such as feature selection, dimensionality reduction, cross-validation, and classifier performance estimation based on receiver operating characteristic (ROC) curves.
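The workflow described above can be illustrated with a short scikit-learn sketch. The toy data, the choice of univariate feature selection, a linear SVM, and stratified 5-fold cross-validation scored by ROC AUC are one plausible instantiation, not the review's prescribed pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
# Toy stand-in for trial-wise voxel patterns: 80 trials x 500 voxels, 2 conditions.
X = rng.normal(size=(80, 500))
y = np.repeat([0, 1], 40)
X[y == 1, :20] += 0.8      # weak condition-related signal in 20 voxels

# Typical MVPA pipeline: feature selection -> scaling -> linear SVM,
# evaluated with stratified k-fold cross-validation on ROC AUC.
clf = make_pipeline(SelectKBest(f_classif, k=50), StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"ROC AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Placing feature selection inside the pipeline matters: it is refit on each training fold, so no information from the held-out trials leaks into the selected features.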
The musician's brain is considered a good model of brain plasticity, as musical training is known to modify auditory perception and related cortical organization. Here, we show that music-related modifications can also extend beyond motor and auditory processing and generalize (transfer) to speech processing. Previous studies have shown that adults and newborns can segment a continuous stream of linguistic and non-linguistic stimuli based only on the probabilities of occurrence between adjacent syllables, tones, or timbres. The paradigm classically used in these studies consists of a passive exposure phase followed by a testing phase. Using both behavioural and electrophysiological measures, we recently showed that adult musicians and musically trained children outperform nonmusicians in the test following brief exposure to an artificial sung language. However, the behavioural test does not allow for studying the learning process per se, but rather the result of the learning. In the present study, we analyze the electrophysiological learning curves, that is, the ongoing brain dynamics recorded as the learning takes place. Whereas musicians show an inverted U-shaped learning curve, nonmusicians show a linear learning curve. Analyses of event-related potentials (ERPs) allow for a greater understanding of how and when musical training can improve speech segmentation. These results provide evidence of enhanced neural sensitivity to statistical regularities in musicians and support the hypothesis of a positive transfer of training effect from music to sound-stream segmentation in general.
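Segmentation from "probabilities of occurrence between adjacent syllables" refers to transitional probabilities, TP(a → b) = count(a, b) / count(a), which dip at word boundaries. A minimal sketch under that definition, using a hypothetical two-word stream (the syllables are invented, not the study's stimuli):

```python
from collections import Counter

def transitional_probabilities(stream):
    """TP(a -> b) = count(a, b) / count(a), over adjacent items in a stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical "words" tu-pi-ro and go-la-bu, concatenated without pauses.
stream = "tu pi ro go la bu tu pi ro tu pi ro go la bu go la bu".split()
tps = transitional_probabilities(stream)
print(tps[("tu", "pi")])   # within-word transition: 1.0
print(tps[("ro", "go")])   # across a word boundary: lower (~0.67 here)
```

A learner tracking these statistics can posit word boundaries wherever TP drops, which is exactly the cue available in the passive exposure phase.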
Words and melodies are among the basic elements infants are able to extract from the auditory input early in life. Whether melodic cues contained in songs can facilitate word-form extraction immediately after birth had remained unexplored. Here, we provide converging neural and computational evidence of the early benefit of melodies for language acquisition. Twenty-eight neonates were tested on their ability to extract word-forms from continuous streams of sung and spoken syllabic sequences. We found different brain dynamics for sung and spoken streams and observed successful detection of word-form violations in the sung condition only. Furthermore, neonatal brain responses to sung streams predicted expressive vocabulary at 18 months, as demonstrated by multiple regression and cross-validation analyses. These findings suggest that early individual differences in the neural processing of prosodic speech might be a good indicator of later language outcomes and could be considered a relevant factor in the development of infants' language skills.
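As a sketch of the kind of cross-validated regression analysis mentioned, here is a minimal example that predicts a continuous outcome from a few neural features using out-of-sample predictions; the features, fold count, and synthetic data are assumptions, not the study's actual variables.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
# Toy stand-ins: 28 neonates, a few neural-response features, later vocabulary scores.
n_subjects = 28
neural = rng.normal(size=(n_subjects, 3))        # e.g., per-condition response amplitudes
vocab = 2.0 * neural[:, 0] + rng.normal(size=n_subjects)   # synthetic outcome

# Out-of-sample prediction: fit on k-1 folds, predict each held-out fold.
pred = cross_val_predict(LinearRegression(), neural, vocab,
                         cv=KFold(n_splits=7, shuffle=True, random_state=0))
r = np.corrcoef(pred, vocab)[0, 1]
print(f"cross-validated prediction r = {r:.2f}")
```

Scoring only held-out subjects is what makes such a brain-to-outcome claim predictive rather than merely correlational within the fitted sample.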
Recognizing who is speaking is a cognitive ability characterized by considerable individual differences, which could relate to the inter-individual variability observed in voice-elicited BOLD activity. Voice perception is sustained by a complex brain network involving temporal voice areas (TVAs) and, less consistently, extra-temporal regions such as the frontal cortices. We therefore computed functional connectivity (FC) during an fMRI voice localizer (passive listening to voices vs. non-voices) within twelve temporal and frontal voice-sensitive regions ("voice patches"), individually defined for each subject (N = 90) to account for inter-individual variability. Results revealed that voice patches were positively co-activated during voice listening and were characterized by different FC patterns depending on their location (anterior/posterior) and hemisphere. Importantly, FC between right frontal and temporal voice patches was behaviorally relevant: FC increased significantly with voice-recognition abilities as measured in a voice-recognition test performed outside the scanner. Hence, this study highlights the importance of frontal regions in voice perception and supports the idea that examining FC between stimulus-specific and higher-order frontal regions can help explain individual differences in processing social stimuli such as voices.
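FC within a set of patches is commonly computed as the pairwise Pearson correlation between ROI time courses; here is a minimal sketch under that assumption (the study's exact FC estimator is not specified above), with invented dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy ROI time courses: 12 voice patches x 200 fMRI time points for one subject.
n_rois, n_timepoints = 12, 200
ts = rng.normal(size=(n_rois, n_timepoints))
ts[1] += 0.7 * ts[0]            # make two patches co-activate

# FC as pairwise Pearson correlation between ROI time courses.
fc = np.corrcoef(ts)            # 12 x 12 symmetric matrix
iu = np.triu_indices(n_rois, k=1)
edges = fc[iu]                  # 66 unique patch-to-patch FC values
print(fc[0, 1], edges.shape)    # strong edge; (66,)
```

The per-subject edge values can then be related across subjects to behavior, e.g., correlating a frontotemporal edge with voice-recognition scores.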
Whether emotions carried by voice and music are processed by the brain using similar mechanisms has long been investigated, yet neuroimaging studies do not provide a clear picture, mainly owing to a lack of control over stimuli. Here, we report a functional magnetic resonance imaging (fMRI) study using comparable stimulus material in the voice and music domains (the Montreal Affective Voices and the Musical Emotional Bursts), which include nonverbal short bursts of happiness, fear, sadness, and neutral expressions. We use a multivariate emotion-classification fMRI analysis involving cross-timbre classification as a means of comparing the neural mechanisms involved in processing emotional information in the two domains. For affective stimuli in the violin, clarinet, or voice timbres, we find that local fMRI patterns in the bilateral auditory cortex and upper premotor regions support above-chance emotion classification when training and testing are performed within the same timbre category. More importantly, classifier performance generalized well across timbres in cross-classification schemes, albeit with a slight accuracy drop when crossing the voice-music boundary, providing evidence for a shared neural code for processing musical and vocal emotions, with possibly a cost for the voice due to its evolutionary significance.
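Cross-timbre classification means training a decoder on patterns from one timbre and testing it on another; successful generalization implies an emotion code shared across timbres. A minimal sketch, with synthetic patterns and a linear SVM standing in for the study's actual classifier and data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 60, 300
emotions = np.tile([0, 1, 2, 3], n_trials // 4)   # happiness, fear, sadness, neutral

def simulate_patterns(shift):
    """Toy auditory-cortex patterns: a shared emotion code plus a timbre-specific offset."""
    X = rng.normal(size=(n_trials, n_voxels))
    for e in range(4):
        X[emotions == e, e * 10:(e + 1) * 10] += 1.0   # emotion-specific voxels (shared)
    return X + shift

X_voice = simulate_patterns(shift=0.3)     # "voice" timbre
X_violin = simulate_patterns(shift=-0.3)   # "violin" timbre

# Cross-timbre scheme: train on voice trials, test on violin trials.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_voice, emotions)
print("cross-timbre accuracy:", clf.score(X_violin, emotions))  # chance is 0.25
```

Because only the emotion-specific voxels carry a consistent signal across the two simulated timbres, above-chance transfer accuracy here plays the role of the shared-code evidence described in the abstract.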