2021
DOI: 10.3389/fnins.2021.747303
Rapid Enhancement of Subcortical Neural Responses to Sine-Wave Speech

Abstract: The efferent auditory nervous system may be a potent force in shaping how the brain responds to behaviorally significant sounds. Previous human experiments using the frequency following response (FFR) have shown efferent-induced modulation of subcortical auditory function online and over short- and long-term time scales; however, a contemporary understanding of FFR generation presents new questions about whether previous effects were constrained solely to the auditory subcortex. The present experiment used sin…

Cited by 12 publications (20 citation statements)
References 116 publications (139 reference statements)
“…To further assess the behavioral relevance of α-FFR modulations to SIN listening, we investigated vowel decoding of FFRs at low vs. high α states. Similar ML approaches have been applied to speech-evoked FFRs to decode stimulus classes, e.g., Mandarin lexical tones (Llanos, Xie and Chandrasekaran, 2017; Xie et al., 2019) and speech tokens (Xie, Girshick, Dollár, Tu and He, 2017; Cheng, Xu, Gold and Smith, 2021). In these studies, decoding performance in correctly classifying FFRs is used as an objective measure of speech discrimination.…”
Section: Discussion (mentioning)
confidence: 99%
“…Tk4) (Carter et al., 2022). Previous work has also demonstrated that changes in perception can drive enhancements of the FFR (Cheng et al., 2021), suggesting that speech processing is influenced by predictions of the percept. Our response-to-response correlations and neural decoding results support perceptual encoding in the FFR.…”
Section: Discussion (mentioning)
confidence: 93%
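As a rough illustration of a response-to-response correlation (the metric named in the excerpt above; the exact computation in the cited work may differ), one common variant is a split-half correlation between averaged FFR waveforms:

```python
# Illustrative split-half response-to-response correlation: average two
# random halves of the FFR trials and correlate the resulting waveforms.
# A high correlation indicates a reproducible neural response.
import numpy as np

def split_half_correlation(epochs: np.ndarray, rng: np.random.Generator) -> float:
    """epochs: (n_trials, n_samples) array of FFR sweeps."""
    idx = rng.permutation(len(epochs))
    half = len(epochs) // 2
    avg_a = epochs[idx[:half]].mean(axis=0)
    avg_b = epochs[idx[half:]].mean(axis=0)
    return float(np.corrcoef(avg_a, avg_b)[0, 1])

rng = np.random.default_rng(1)
epochs = rng.standard_normal((200, 1000))  # hypothetical FFR sweeps
print(f"split-half r = {split_half_correlation(epochs, rng):.3f}")
```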
“…Such findings could be explained by long-term, experience-dependent plasticity (Krishnan et al., 2012). This evidence is further bolstered by findings of categorization-training studies that show that, once individuals learn to identify novel speech stimuli, their FFRs are enhanced relative to more novice listening states (Cheng et al., 2021; Reetzke et al., 2018).…”
Section: Introduction (mentioning)
confidence: 92%
“…The abovementioned rationale motivated a recent study by Cheng and colleagues 9 in which subcortical AEPs were evoked by three acoustically sparse speech stimuli known as sine wave speech (SWS). SWS replaces the dynamic formant structure of a speech signal with two or three time-varying sinusoids, and all other speech content is discarded (Fig.…”
Section: Vignette 1: Examining Training Context and Learning Effects... (mentioning)
confidence: 99%
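The SWS construction described in this excerpt is straightforward to sketch. The example below is a minimal illustration with made-up formant tracks (real SWS tokens are built from formant trajectories estimated from natural speech, e.g., via LPC analysis); it synthesizes a token by summing three time-varying sinusoids:

```python
# Minimal sketch of sine-wave speech (SWS) synthesis: replace each formant
# track with a single time-varying sinusoid and sum them. The frequency and
# amplitude tracks below are hypothetical placeholders; in practice they are
# estimated from a natural utterance.
import numpy as np

fs = 16000                     # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)  # 500 ms token

# Hypothetical formant tracks: F1 gliding 300->700 Hz, F2 2200->1100 Hz,
# F3 held near 2900 Hz, each with a constant amplitude for simplicity.
tracks = [
    (np.linspace(300, 700, t.size), 1.0),
    (np.linspace(2200, 1100, t.size), 0.5),
    (np.full(t.size, 2900.0), 0.25),
]

sws = np.zeros_like(t)
for freq, amp in tracks:
    # Integrate instantaneous frequency to get phase, then synthesize.
    phase = 2 * np.pi * np.cumsum(freq) / fs
    sws += amp * np.sin(phase)

sws /= np.max(np.abs(sws))  # normalize to avoid clipping
```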
“…8 Do we want to know whether auditory training has positively impacted the function of the auditory nervous system? 9 Do we want to use OAEs and ABRs to explore hidden structures in our datasets that allow us to identify which patients are at risk for noise-induced hearing loss, synaptopathy, and/or tinnitus? 10 Do we want hearing aids or cochlear implants to learn which acoustic signals are important to the user, how to function in different environments, and to be "cognitively steered" via a brain-computer interface?…”
Section: Conceptual Overview Of ML Approaches (mentioning)
confidence: 99%