Life or death in hostile environments depends crucially on one's ability to detect and gate novel sounds to awareness, such as that of a twig cracking under the paw of a stalking predator in a noisy jungle. Two distinct auditory cortex processes have been thought to underlie this phenomenon: (i) attenuation of the so-called N1 response with repeated stimulation and (ii) elicitation of a mismatch negativity (MMN) response by changes in repetitive aspects of auditory stimulation. This division has been based on previous studies suggesting that, unlike for the N1, repetitive "standard" stimuli preceding a physically different "novel" stimulus constitute a prerequisite to MMN elicitation, and that the source loci of the MMN and N1 differ. Contradicting these findings, our combined electromagnetic, hemodynamic, and psychophysical data indicate that the MMN is generated as a result of differential adaptation of anterior and posterior auditory cortex N1 sources by preceding auditory stimulation. Early (≈85 ms) neural activity within posterior auditory cortex is adapted as sound novelty decreases. This alters the center of gravity of electromagnetic N1 source activity, creating an illusory difference between N1 and MMN source loci when estimated using equivalent current dipole fits. Further, our electroencephalography data show a robust MMN after a single standard event when the interval between two consecutive novel sounds is kept invariant. Our converging findings suggest that transient adaptation of feature-specific neurons within human posterior auditory cortex filters superfluous sounds from entering one's awareness.
Human neuroimaging studies suggest that localization and identification of relevant auditory objects are accomplished via parallel parietal-to-lateral-prefrontal "where" and anterior-temporal-to-inferior-frontal "what" pathways, respectively. Using combined hemodynamic (functional MRI) and electromagnetic (magnetoencephalography) measurements, we investigated whether such dual pathways already exist in the human nonprimary auditory cortex, as suggested by animal models, and whether selective attention facilitates sound localization and identification by modulating these pathways in a feature-specific fashion. We found a double dissociation in response adaptation to sound pairs with phonetic vs. spatial sound changes, demonstrating that the human nonprimary auditory cortex indeed processes speech-sound identity...
Increased spatiotemporal resolution in MRI can be achieved through parallel acquisition strategies, which sample reduced k-space data and use the information from multiple receivers to reconstruct full-FOV images. The price for the increased spatiotemporal resolution in parallel MRI is a degraded signal-to-noise ratio (SNR) in the final reconstructed images. Part of the SNR reduction arises when the spatially correlated nature of the information from the multiple receivers destabilizes the matrix inversion used to reconstruct the full-FOV image. In this work, a reconstruction algorithm based on Tikhonov regularization is presented that reduces the SNR loss due to geometric correlations in the spatial information from the array coil elements. Reference scans are utilized as a priori information about the final reconstructed image to provide regularized estimates for the reconstruction using the L-curve technique. This automatic regularization method reduces the average g-factors in phantom images from a two-channel array from 1.47 to 0.80 at twofold sensitivity encoding (SENSE) acceleration. In vivo anatomical images from an eight-channel system likewise show a reduced average g-factor.

Key words: SENSE; regularization; g-factor; parallel MRI; L-curve

The use of multiple receivers in MRI can be exploited to enhance spatiotemporal resolution by reducing the number of k-space acquisitions. The folded image that would result from conventional reconstruction is avoided by the use of spatial information from multiple coils. Several methods for using this information have been proposed, including the k-space-based simultaneous acquisition of spatial harmonics (SMASH) method (1,2) and the image-domain-based sensitivity encoding (SENSE) approach (3).
By reducing sampling time, these parallel MRI techniques can be used to reduce image distortion in echo-planar imaging (EPI) (4) or to diminish acoustic noise by lowering gradient switching rates (5). However, these advantages come at the cost of a reduced signal-to-noise ratio (SNR). The reduction in SNR stems from two factors: the reduced number of data samples, and the instability of the reconstruction due to correlations in the spatial information as determined by the geometrical arrangement of the array coil. The first is the inevitable result of reducing the number of samples. The second can be mitigated by optimizing coil geometry (6,7) or by improving the stability of the reconstruction algorithm. The increased noise originating from correlated spatial information from the array elements can be estimated from knowledge of the array geometry, and is quantified by the geometric factor (g-factor) map (3).

The reconstruction of parallel MRI can be formulated as a set of linear equations (8) that must be inverted to obtain an unfolded image from the reduced k-space data set. If the matrix is well conditioned, the inversion can be achieved with minimal amplification of noise. While the encoding matrix can still be inverted even if it is nearly singular...
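As a rough illustration of this linear-equation formulation, the following NumPy sketch unfolds a single aliased pixel with Tikhonov-regularized SENSE and computes the corresponding unregularized g-factor. The function names, the fixed regularization parameter, and the simple handling of the reference-scan prior are illustrative assumptions; the automatic L-curve selection of the regularization parameter described above is omitted.

```python
import numpy as np

def sense_unfold(a, S, psi, lam=0.0, x0=None):
    """Unfold one aliased pixel in R-fold SENSE acceleration.

    a   : (n_coils,) complex aliased pixel values, one per receiver
    S   : (n_coils, R) complex coil sensitivities at the R overlapping locations
    psi : (n_coils, n_coils) receiver noise covariance matrix
    lam : Tikhonov regularization parameter (lam=0 gives plain SENSE)
    x0  : (R,) prior image values from a reference scan (default: zeros)
    """
    R = S.shape[1]
    x0 = np.zeros(R, dtype=complex) if x0 is None else x0
    psi_inv = np.linalg.inv(psi)
    A = S.conj().T @ psi_inv @ S                 # R x R normal matrix
    rhs = S.conj().T @ psi_inv @ (a - S @ x0)    # fit the deviation from the prior
    return x0 + np.linalg.solve(A + lam**2 * np.eye(R), rhs)

def g_factor(S, psi):
    """Per-pixel geometric noise-amplification factor for unregularized SENSE:
    g = sqrt(diag(A^-1) * diag(A)), with A the sensitivity normal matrix."""
    A = S.conj().T @ np.linalg.inv(psi) @ S
    return np.sqrt(np.real(np.diag(np.linalg.inv(A)) * np.diag(A)))
```

With a well-conditioned sensitivity matrix and lam=0 this reduces to the standard SENSE inversion; as the matrix approaches singularity, the lam**2 term stabilizes the solve at the cost of biasing the estimate toward the prior x0.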
Distributed source models of magnetoencephalographic (MEG) and electroencephalographic (EEG) data employ dense distributions of current sources in a volume or on a surface. Previously, cortical geometry extracted from anatomical magnetic resonance imaging (MRI) data has been used to constrain source locations and orientations. We extended this approach by first calculating cortical patch statistics (CPS), which for each patch corresponding to a current-source location on the cortex comprise the area of the patch, the average normal direction, and the average deviation of the surface normal from its average. The patch areas were then incorporated in the forward model to yield estimates of the surface current density instead of dipole amplitudes at the source locations. The surface normal data were employed in a loose orientation constraint (LOC), which allows some variation of the current direction from the average normal. We employed this approach both in the ℓ2 minimum-norm estimates (MNE) and in the more focal ℓ1 minimum-norm solution, the minimum-current estimate (MCE). Simulations in auditory and somatosensory areas with current dipoles and 10- or 20-mm-diameter cortical patches as test sources showed that applying the LOC can increase localization accuracy. We also applied the method to in vivo auditory and somatosensory data.
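A loose orientation constraint of this kind can be read as an orientation prior per source: full variance along the patch's average normal and a reduced fraction transverse to it. A minimal NumPy sketch under that reading (the function name and the default looseness value are illustrative, not the paper's):

```python
import numpy as np

def loose_orientation_weights(normals, loose=0.3):
    """Per-source 3x3 orientation prior for a loose orientation constraint.

    normals : (n_sources, 3) average surface-normal direction per cortical patch
    loose   : relative variance allowed transverse to the normal
              (loose=0 -> strict normal constraint, loose=1 -> free orientation)

    Returns (n_sources, 3, 3) priors C = n n^T + loose * (I - n n^T),
    i.e. eigenvalue 1 along the normal and `loose` in the tangent plane.
    """
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    nnT = n[:, :, None] * n[:, None, :]       # outer products, (n_sources, 3, 3)
    eye = np.eye(3)[None]
    return nnT + loose * (eye - nnT)
```

In an inverse solution these priors would scale the three orientation components of each source (e.g. by multiplying the corresponding gain-matrix columns by the matrix square root of C), so currents deviating from the average normal are allowed but penalized.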
Behavioural and functional imaging studies have demonstrated that lexical knowledge influences the categorization of perceptually ambiguous speech sounds. However, methodological and inferential constraints have so far made it impossible to resolve the question of whether this interaction takes the form of direct top-down influences on perceptual processing, or of feedforward convergence during a decision process. We examined top-down lexical influences on the categorization of segments in a /s/−/∫/ continuum presented in different lexical contexts to produce a robust Ganong effect. Using integrated MEG/EEG and MRI data we found that, within a network identified by 40-Hz gamma phase locking, activation in the supramarginal gyrus associated with wordform representation influences phonetic processing in the posterior superior temporal gyrus during a period of time associated with lexical processing. This result provides direct evidence that lexical processes influence lower-level phonetic perception, and demonstrates the potential value of combining Granger causality analyses with high-spatiotemporal-resolution multimodal imaging data to explore the functional architecture of cognition.

Results from a variety of paradigms show that the categorization of perceptually ambiguous speech sounds is affected by the words they appear in (Ganong, 1980; Warren, 1970; Samuel and Pitt, 2003). In the Ganong effect, listeners tend to identify ambiguous speech sounds such that the sound and its context are interpreted as a word (Ganong, 1980). For example, a speech sound that is acoustically intermediate between [g] and [k] tends to be identified as a [g] in the context _ift, and as a [k] in the context _iss.
Ganong (1980) recognized this effect as evidence of interaction between lexical and phonetic processes, and was notably agnostic about whether the interaction occurred at the level of perception or after perception, during a phonological decision process. Lexical effects on speech categorization have been argued to reflect a true top-down perceptual effect (Samuel and Pitt, 2003). However, the effects may also be explained as a post-perceptual effect within a purely feedforward model (Norris et al., 2000). Using fMRI, Myers and Blumstein (2008) demonstrated that lexical influence on phonetic judgments is accompanied by an increase in activation in the superior temporal gyrus (STG) that is bilateral, but somewhat stronger in the left hemisphere. Se...
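Granger causality analyses of the kind mentioned above ask whether the past of one signal improves prediction of another beyond the latter's own past. A minimal bivariate least-squares sketch, assuming a simple autoregressive model and a log-variance-ratio statistic (the function name and fixed model order are illustrative simplifications of what full MEG/EEG pipelines use):

```python
import numpy as np

def granger_causality(x, y, order=2):
    """Log-ratio Granger causality from x to y via least-squares AR fits.

    Compares the residual variance of predicting y from its own past
    (restricted model) with predicting y from the past of both y and x
    (full model). Larger positive values indicate that the past of x
    improves prediction of y.
    """
    T = len(y)
    Y = y[order:]
    # lagged regressors: columns are y[t-1..t-order] and x[t-1..t-order]
    own = np.column_stack([y[order - k:T - k] for k in range(1, order + 1)])
    other = np.column_stack([x[order - k:T - k] for k in range(1, order + 1)])
    ones = np.ones((T - order, 1))
    Xr = np.hstack([ones, own])            # restricted: y's own past only
    Xf = np.hstack([ones, own, other])     # full: y's past plus x's past
    res_r = Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]
    res_f = Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]
    return np.log(np.var(res_r) / np.var(res_f))
```

Applied to two simulated signals where x drives y with a one-sample lag, the x-to-y statistic comes out large while the y-to-x statistic stays near zero; real analyses add model-order selection and significance testing.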