It has been suggested that visual language is maladaptive for hearing restoration with a cochlear implant (CI) due to cross-modal recruitment of auditory brain regions. Rehabilitative guidelines therefore discourage the use of visual language. However, neuroscientific understanding of cross-modal plasticity following cochlear implantation has been restricted due to incompatibility between established neuroimaging techniques and the surgically implanted electronic and magnetic components of the CI. As a solution to this problem, here we used functional near-infrared spectroscopy (fNIRS), a noninvasive optical neuroimaging method that is fully compatible with a CI and safe for repeated testing. The aim of this study was to examine cross-modal activation of auditory brain regions by visual speech from before to after implantation and its relation to CI success. Using fNIRS, we examined activation of superior temporal cortex to visual speech in the same profoundly deaf adults both before and 6 mo after implantation. Patients' ability to understand auditory speech with their CI was also measured following 6 mo of CI use. Contrary to existing theory, the results demonstrate that increased cross-modal activation of auditory brain regions by visual speech from before to after implantation is associated with better speech understanding with a CI. Furthermore, activation of auditory cortex by visual and auditory speech developed in synchrony after implantation. Together these findings suggest that cross-modal plasticity by visual speech does not exert previously assumed maladaptive effects on CI success, but instead provides adaptive benefits to the restoration of hearing after implantation through an audiovisual mechanism.

Keywords: cochlear implantation | cross-modal plasticity | functional near-infrared spectroscopy | superior temporal cortex | visual speech
Functional near-infrared spectroscopy (fNIRS) is a silent, non-invasive neuroimaging technique that is potentially well suited to auditory research. However, the reliability of auditory-evoked activation measured using fNIRS is largely unknown. The present study investigated the test-retest reliability of speech-evoked fNIRS responses in normally-hearing adults. Seventeen participants underwent fNIRS imaging in two sessions separated by three months. In a block design, participants were presented with auditory speech, visual speech (silent speechreading), and audiovisual speech conditions. Optode arrays were placed bilaterally over the temporal lobes, targeting auditory brain regions. A range of established metrics was used to quantify the reproducibility of cortical activation patterns, as well as the amplitude and time course of the haemodynamic response within predefined regions of interest. The use of a signal processing algorithm designed to reduce the influence of systemic physiological signals was found to be crucial to achieving reliable detection of significant activation at the group level. For auditory speech (with or without visual cues), reliability was good to excellent at the group level, but highly variable among individuals. Temporal-lobe activation in response to visual speech was less reliable, especially in the right hemisphere. Consistent with previous reports, fNIRS reliability was improved by averaging across a small number of channels overlying a cortical region of interest. Overall, the present results confirm that fNIRS can measure speech-evoked auditory responses in adults that are highly reliable at the group level, and indicate that signal processing to reduce physiological noise may substantially improve the reliability of fNIRS measurements.
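The abstract above notes that a signal processing algorithm reducing systemic physiological signals was crucial for reliable group-level activation detection. The specific algorithm is not named in the abstract; as a minimal illustrative sketch (not the study's actual method), the idea can be demonstrated by filtering a simulated fNIRS channel to suppress the well-known cardiac (~1 Hz) and Mayer-wave (~0.1 Hz) bands while retaining a slow task-evoked response. All signal parameters below are assumptions for the simulation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 10.0  # typical fNIRS sampling rate in Hz (assumed)
t = np.arange(0, 300, 1 / fs)

# Simulated channel: a slow task-evoked response plus systemic physiology
task = 0.5 * np.sin(2 * np.pi * 0.02 * t)    # slow block-design response
cardiac = 0.3 * np.sin(2 * np.pi * 1.0 * t)  # ~1 Hz cardiac pulsation
mayer = 0.2 * np.sin(2 * np.pi * 0.1 * t)    # ~0.1 Hz Mayer waves
raw = task + cardiac + mayer

# Low-pass below the systemic bands to recover the slow evoked response
sos = butter(3, 0.05, btype="low", fs=fs, output="sos")
clean = sosfiltfilt(sos, raw)

# Error relative to the known task signal shrinks after filtering
err_raw = float(np.sqrt(np.mean((raw - task) ** 2)))
err_clean = float(np.sqrt(np.mean((clean - task) ** 2)))
```

In practice, more sophisticated approaches (e.g., short-separation channel regression) are often preferred over simple filtering because Mayer waves can overlap in frequency with slow task responses.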
While many individuals can benefit substantially from cochlear implantation, the ability to perceive and understand auditory speech with a cochlear implant (CI) remains highly variable amongst adult recipients. Importantly, auditory performance with a CI cannot be reliably predicted based solely on routinely obtained information regarding clinical characteristics of the CI candidate. This review argues that central factors, notably cortical function and plasticity, should also be considered as important contributors to the observed individual variability in CI outcome. Superior temporal cortex (STC), including auditory association areas, plays a crucial role in the processing of auditory and visual speech information. The current review considers evidence of cortical plasticity within bilateral STC, and how these effects may explain variability in CI outcome. Furthermore, evidence of audio-visual interactions in temporal and occipital cortices is examined, and their relation to CI outcome is discussed. To date, longitudinal examination of changes in cortical function and plasticity over the period of rehabilitation with a CI has been restricted by methodological challenges. The application of functional near-infrared spectroscopy (fNIRS) in studying cortical function in CI users is becoming increasingly recognised as a potential solution to these problems. Here we suggest that fNIRS offers a powerful neuroimaging tool to elucidate the relationship between audio-visual interactions, cortical plasticity during deafness and following cochlear implantation, and individual variability in auditory performance with a CI.
Currently, it is not possible to accurately predict how well a deaf individual will be able to understand speech when hearing is (re)introduced via a cochlear implant. Differences in brain organisation following deafness are thought to contribute to variability in speech understanding with a cochlear implant and may offer unique insights that could help to more reliably predict outcomes. An emerging optical neuroimaging technique, functional near-infrared spectroscopy (fNIRS), was used to determine whether a pre-operative measure of brain activation could explain variability in cochlear implant (CI) outcomes and offer additional prognostic value above that provided by known clinical characteristics. Cross-modal activation to visual speech was measured in bilateral superior temporal cortex of pre- and post-lingually deaf adults before cochlear implantation. Behavioural measures of auditory speech understanding were obtained in the same individuals following 6 months of cochlear implant use. The results showed that stronger pre-operative cross-modal activation of auditory brain regions by visual speech was predictive of poorer auditory speech understanding after implantation. Further investigation suggested that this relationship may have been driven primarily by the inclusion of, and group differences between, pre- and post-lingually deaf individuals. Nonetheless, pre-operative cortical imaging provided additional prognostic value above that of influential clinical characteristics, including the age-at-onset and duration of auditory deprivation, suggesting that objectively assessing the physiological status of the brain using fNIRS imaging pre-operatively may support more accurate prediction of individual CI outcomes. 
Whilst activation of auditory brain regions by visual speech prior to implantation was related to the CI user’s clinical history of deafness, activation to visual speech did not relate to the future ability of these brain regions to respond to auditory speech stimulation with a CI. Greater pre-operative activation of left superior temporal cortex by visual speech was associated with enhanced speechreading abilities, suggesting that visual speech processing may help to maintain left temporal lobe specialisation for language processing during periods of profound deafness.
Cochlear implants (CIs) are the most successful treatment for severe-to-profound deafness in children. However, speech outcomes with a CI often lag behind those of normally-hearing children. Some authors have attributed these deficits to the takeover of the auditory temporal cortex by vision following deafness, which has prompted some clinicians to discourage the rehabilitation of pediatric CI recipients using visual speech. We studied this cross-modal activity in the temporal cortex, along with responses to auditory speech and non-speech stimuli, in experienced CI users and normally-hearing controls of school age, using functional near-infrared spectroscopy. Strikingly, CI users displayed significantly greater cortical responses to visual speech, compared with controls. Importantly, in the same regions, the processing of auditory speech, compared with non-speech stimuli, did not significantly differ between the groups. This suggests that visual and auditory speech are processed synergistically in the temporal cortex of children with CIs, and they should be encouraged, rather than discouraged, to use visual speech.
Evidence from well-established imaging techniques, such as functional magnetic resonance imaging and electrocorticography, suggests that speech-specific cortical responses can be functionally localised by contrasting speech responses with an auditory baseline stimulus, such as time-reversed (TR) speech or signal-correlated noise (SCN). Furthermore, these studies suggest that SCN is a more effective baseline than TR speech. Functional near-infrared spectroscopy (fNIRS) is a relatively novel, optically-based imaging technique with features that make it ideal for investigating speech and language function in paediatric populations. However, it is not known which baseline is best at isolating speech activation when imaging using fNIRS. We presented normal speech, TR speech and SCN in an event-related format to 25 normally-hearing children aged 6–12 years. Brain activity was measured across frontal and temporal brain areas in both cerebral hemispheres whilst children passively listened to the auditory stimuli. In all three conditions, significant activation was observed bilaterally in channels targeting superior temporal regions when stimuli were contrasted against silence. Unlike previous findings in infants, we found no significant activation in the region of interest over superior temporal cortex in school-age children when normal speech was contrasted against either TR speech or SCN. Although no statistically significant lateralisation effects were observed in the region of interest, a left-sided channel targeting posterior temporal regions showed significant activity in response to normal speech only, and was investigated further. Significantly greater activation was observed in this left posterior channel compared to the corresponding channel on the right side under the normal speech vs SCN contrast only.
Our findings suggest that neither TR speech nor SCN is a suitable auditory baseline for functionally isolating speech-specific processing in an experimental setup involving fNIRS with 6–12-year-old children.
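The SCN baseline discussed above is classically generated by randomly flipping the sign of each speech sample (Schroeder's method), which preserves the waveform's amplitude envelope and long-term spectrum while destroying intelligibility. The abstract does not state exactly how the study's SCN was constructed, so the following is a hedged sketch of that classic approach, using a toy tone in place of a real speech recording:

```python
import numpy as np

rng = np.random.default_rng(0)

def signal_correlated_noise(speech: np.ndarray) -> np.ndarray:
    """Schroeder-style SCN: randomly flip the sign of each sample.

    The output has the same sample-by-sample magnitude (hence the same
    envelope and long-term spectrum) as the input, but is unintelligible.
    """
    flips = rng.choice([-1.0, 1.0], size=speech.shape)
    return speech * flips

# Toy stand-in for a speech waveform (assumption: 16 kHz sampling)
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
speech = np.sin(2 * np.pi * 220 * t) * np.hanning(t.size)
scn = signal_correlated_noise(speech)
```

Because only signs change, `np.abs(scn)` equals `np.abs(speech)` exactly, which is the defining property that makes SCN a closely matched acoustic baseline.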
Whilst functional neuroimaging has been used to investigate cortical processing of degraded speech in adults, much less is known about how these signals are processed in children. An enhanced understanding of the cortical correlates of poor speech perception in children would be highly valuable for applications supporting oral communication, including hearing devices. We utilised vocoded speech stimuli to investigate brain responses to degraded speech in 29 normally hearing children aged 6–12 years. Intelligibility of the speech stimuli was altered in two ways: by (i) reducing the number of spectral channels and (ii) reducing the amplitude modulation depth of the signal. A total of five different noise-vocoded conditions (with zero, partial or high intelligibility) were presented in an event-related format whilst participants underwent functional near-infrared spectroscopy (fNIRS) neuroimaging. Participants completed a word recognition task during imaging, as well as a separate behavioural speech perception assessment. fNIRS recordings revealed statistically significant sensitivity to stimulus intelligibility across several brain regions. More intelligible stimuli elicited stronger responses in temporal regions, predominantly within the left hemisphere, while right inferior parietal regions showed an opposite, negative relationship. Although there was some evidence that partially intelligible stimuli elicited the strongest responses in the left inferior frontal cortex, a region previous studies have suggested is associated with effortful listening in adults, this effect did not reach statistical significance. These results further our understanding of the cortical mechanisms underlying successful speech perception in children. Furthermore, fNIRS holds promise as a clinical technique to help assess speech intelligibility in paediatric populations.
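The noise-vocoding manipulation described above follows a standard recipe: split the signal into a small number of frequency bands, extract each band's amplitude envelope, and use the envelopes to modulate band-limited noise carriers. The study's exact filterbank parameters are not given in the abstract, so the band edges, filter orders, and envelope cutoff below are illustrative assumptions, and a pure tone stands in for a speech recording:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_channels=4, env_cutoff=30.0, seed=1):
    """Minimal noise vocoder sketch: log-spaced analysis bands, envelope
    extraction by rectification and low-pass smoothing, then modulation
    of band-limited noise carriers."""
    rng = np.random.default_rng(seed)
    # Log-spaced band edges across an assumed speech range
    edges = np.geomspace(100.0, min(8000.0, 0.95 * fs / 2), n_channels + 1)
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(3, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)
        # Envelope: full-wave rectification followed by low-pass smoothing
        env = sosfiltfilt(env_sos, np.abs(band))
        # Noise carrier limited to the same analysis band
        carrier = sosfiltfilt(band_sos, rng.standard_normal(speech.size))
        out += env * carrier
    return out

# Toy input standing in for a speech waveform (assumption: 16 kHz)
fs = 16000
t = np.arange(0, 0.3, 1 / fs)
speech = np.sin(2 * np.pi * 440 * t) * np.hanning(t.size)
vocoded = noise_vocode(speech, fs)
```

Fewer channels coarsen the spectral detail, which is manipulation (i) in the abstract; manipulation (ii), reduced modulation depth, would correspond in this sketch to compressing each `env` toward its mean before modulating the carrier.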