Premutation alleles of the fragile X mental retardation 1 gene (FMR1) are associated with the risk of developing fragile X-associated tremor/ataxia syndrome (FXTAS), a late-onset neurodegenerative disorder that involves neuropsychiatric problems and executive and memory deficits. Although abnormal elevation of FMR1 mRNA has been proposed to underlie these deficits, it remains unknown which brain regions are affected by the disease process of FXTAS and by the molecular genetic mechanisms associated with the FMR1 premutation. This study used functional magnetic resonance imaging (fMRI) to identify deficient neural substrates responsible for altered executive and memory functions in some individuals with the FMR1 premutation. We measured fMRI BOLD signals during the performance of a verbal working memory task in 15 premutation carriers affected by FXTAS (PFX+), 15 premutation carriers unaffected by FXTAS (PFX−), and 12 matched healthy control individuals (HC). We also examined the correlation between brain activation and FMR1 molecular variables (CGG repeat size and mRNA levels) in premutation carriers. Compared with HC, PFX+ and PFX− showed reduced activation in the right ventral inferior frontal cortex and left premotor/dorsal inferior frontal cortex. Reduced activation specific to PFX+ was found in the right premotor/dorsal inferior frontal cortex. Regression analysis combining the two premutation groups demonstrated a significant negative correlation between right ventral inferior frontal cortex activity and FMR1 mRNA levels after excluding the effect of FXTAS disease severity. These results indicate altered prefrontal cortex activity that may underlie the executive and memory deficits affecting some individuals with the FMR1 premutation, including FXTAS patients.
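The analysis described above, correlating two variables "after excluding the effect" of a third, is a partial correlation. A minimal sketch of that idea, using synthetic data and hypothetical variable names (the abstract does not specify the exact regression model):

```python
import numpy as np

def partial_corr(x, y, covar):
    """Correlate x and y after regressing the covariate out of both.

    Sketch of a partial correlation, as in relating BOLD activity to
    FMR1 mRNA levels while controlling for disease severity. All data
    below are synthetic, not the study's data.
    """
    def residualize(v, c):
        # Least-squares fit of v on [intercept, c]; return residuals.
        design = np.column_stack([np.ones_like(c), c])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta

    rx, ry = residualize(x, covar), residualize(y, covar)
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic example: mRNA and BOLD both depend on severity, plus a
# shared negative relationship, qualitatively matching the abstract.
rng = np.random.default_rng(0)
severity = rng.normal(size=30)
mrna = severity + rng.normal(size=30)
bold = -0.8 * mrna + 0.5 * severity + 0.3 * rng.normal(size=30)
r = partial_corr(bold, mrna, severity)   # negative by construction
```

The residualization step is what "excludes the effect" of severity: any variance in activity or mRNA that is linearly predictable from severity is removed before the correlation is computed.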
According to the object-based account of attention, multiple objects coexist in short-term memory (STM), and we can selectively attend to a particular object of interest. Although there is evidence that attention can be directed to visual object representations, the assumption that attention can be oriented to sound object representations has yet to be validated. Here, we used a delayed match-to-sample task to examine whether orienting attention to sound object representations influences change detection within auditory scenes consisting of 3 concurrent sounds, each occurring at a different location. On some trials, the 2 scenes were identical; on the remaining trials, the locations of 2 sounds were switched. In a control experiment, we first identified auditory scenes in which the 3 sounds were unambiguously segregated, for use in the subsequent experiments. In 2 experiments, we showed that orienting attention to a sound object representation during memory retention (via a retro-cue) enhanced performance relative to uncued trials, up to 4 s of memory retention. Our study shows that complex auditory scenes composed of co-occurring sound sources are quickly parsed into sound object representations, which are then available for top-down selective attention. Here, we demonstrate that attention can be guided toward 1 of those representations, thereby attenuating change deafness. Furthermore, the effects of retro-cues in audition extend analogous findings in the visual domain, thereby suggesting that orienting attention to an object within visual or auditory STM may follow similar processing principles.
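Performance in a change-detection task like the one above is commonly quantified with the signal-detection sensitivity index d′, comparing retro-cued and uncued trials. A minimal sketch with illustrative counts (the trial numbers below are hypothetical, not the study's data), using a log-linear correction to avoid infinite z-scores at ceiling:

```python
from statistics import NormalDist

def dprime(hits, misses, fas, crs):
    """Sensitivity (d') for a same/different change-detection task.

    "Change" trials reported as changed are hits; "same" trials
    reported as changed are false alarms. The +0.5/+1 terms are the
    standard log-linear correction for extreme rates.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts in which retro-cued trials yield better detection,
# as the abstract reports:
d_cued = dprime(hits=42, misses=8, fas=6, crs=44)
d_uncued = dprime(hits=30, misses=20, fas=12, crs=38)
```

A retro-cue benefit would then appear as `d_cued > d_uncued`, separating genuine sensitivity gains from mere shifts in response bias.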
Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering the consonant-vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent, or AV incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (the illusory percept), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (the illusory percept), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived the illusory /fa/, and a reduced N1 when they perceived the illusory /ba/, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition).
Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, as evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We expect this finding to generate new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator).
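The N1 effect reported above is measured from the trial-averaged evoked potential as the most negative deflection in a post-stimulus window. A minimal sketch of that measurement on a synthetic waveform (the ~80–140 ms window and the Gaussian-shaped deflection are assumptions for illustration, not the study's parameters):

```python
import numpy as np

def n1_amplitude(erp, times, window=(0.08, 0.14)):
    """Peak N1 amplitude: most negative value in a post-stimulus window.

    `erp` is a trial-averaged evoked potential (microvolts), `times`
    the matching time axis in seconds. Window bounds are assumptions.
    """
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].min()

# Synthetic evoked potential sampled at 1 kHz: a negative deflection
# of -3 microvolts peaking near 100 ms, roughly N1-like in shape.
times = np.linspace(0.0, 0.5, 501)
erp = -3.0 * np.exp(-((times - 0.10) ** 2) / (2 * 0.015 ** 2))

amp = n1_amplitude(erp, times)
```

The strengthened-/weakened-weighting hypothesis would then predict a more negative `amp` for illusory /fa/ trials and a less negative one for illusory /ba/ trials, relative to the Auditory-only baseline.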
Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. In particular, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics.
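The alpha/low-beta desynchronization above is an event-related power decrease: band-limited spectral power during attentional orienting relative to baseline, typically in decibels. A minimal sketch on synthetic EEG (the 8–20 Hz band, sampling rate, and signals are assumptions for illustration):

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean spectral power of `signal` within a frequency band (FFT-based)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

fs = 250.0                                # assumed sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)

# Synthetic alpha (10 Hz) oscillation whose amplitude is halved during
# attentional orienting, i.e., desynchronization relative to baseline:
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
orienting = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)

# Event-related desynchronization in dB (negative = power suppression).
erd_db = 10 * np.log10(band_power(orienting, fs, (8, 20)) /
                       band_power(baseline, fs, (8, 20)))
```

In the study's framing, a more negative `erd_db` during orienting (stronger suppression) would accompany better task performance; here the halved amplitude yields roughly a -6 dB suppression by construction.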
Shahin AJ, Trainor LJ, Roberts LE, Backer KC, Miller LM. Development of auditory phase-locked activity for music sounds.