To advance our understanding of the biological basis of speech-in-noise perception, we investigated the effects of background noise on both subcortical and cortical evoked responses, and the relationships between them, in normal-hearing young adults. The addition of background noise modulated subcortical and cortical response morphology. In noise, subcortical responses were later, smaller in amplitude, and demonstrated decreased neural precision in encoding the speech sound. Cortical responses were also delayed by noise, yet the amplitudes of the major peaks (N1, P2) were affected differently, with N1 increasing and P2 decreasing. Relationships between neural measures and speech-in-noise ability were identified, with earlier subcortical responses, higher subcortical response fidelity, and greater cortical N1 response magnitude all relating to better speech-in-noise perception. Furthermore, it was only with the addition of background noise that relationships between subcortical and cortical encoding of speech and the behavioral measures of speech in noise emerged. Results illustrate that human brainstem responses and cortical N1 response amplitude reflect coordinated processes with regard to the perception of speech in noise, thereby acting as a functional index of speech-in-noise perception.
The musical priming paradigm has shown facilitated processing for tonally related over less-related targets. However, the congruence between tonal relatedness and the psychoacoustical properties of music challenges cognitive interpretations of the involved processes. Our goal was to show that cognitive expectations (based on listeners' tonal knowledge) elicit tonal priming in melodies independently of sensory components (e.g., spectral overlap). A first priming experiment minimized sensory components by manipulating tonal relatedness with a single note change in the melodies. Processing was facilitated for related over less-related target tones, but an auditory short-term memory model succeeded in simulating this effect, thus suggesting a sensory-based explanation. When the same melodies were played with pure tones (instead of piano tones), the sensory model failed to differentiate between related and less-related targets, while listeners' data continued to show a tonal relatedness effect (Experiment 2). The tonal priming effect observed here thus provides strong evidence for the influence of listeners' tonal knowledge on music processing. The overall findings point out the need for controlled musical material (and notably beyond tone repetition) to study cognitive components in music perception.
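The sensory component at issue here, spectral overlap between context and target, can be illustrated with a toy computation. This is a deliberately simplified sketch, not the auditory short-term memory model used in the study; the note frequencies, harmonic counts, and function names are our own illustrative assumptions. It shows why harmonically rich (piano-like) tones share partials with a tonally related target while pure tones do not:

```python
def spectrum(f0, harmonics):
    """Toy binary spectrum: the set of frequency bins (Hz, rounded) carrying energy."""
    return {round(f0 * h) for h in range(1, harmonics + 1)}

def overlap(context_f0s, target_f0, harmonics):
    """Fraction of the target's partials already present in the context spectra."""
    ctx = set().union(*(spectrum(f, harmonics) for f in context_f0s))
    tgt = spectrum(target_f0, harmonics)
    return len(ctx & tgt) / len(tgt)

context = [262, 330, 392]  # roughly C4-E4-G4, an illustrative tonal context

# With harmonically rich tones, a related target (octave of C4) shares partials:
rich_related = overlap(context, 524, harmonics=8)

# With pure tones (a single partial), overlap vanishes unless the exact note repeats:
pure_related = overlap(context, 524, harmonics=1)
```

Under this toy model, a sensory (overlap-based) account can distinguish related targets with rich timbres but has nothing to work with for non-repeated pure tones, which is the logic behind Experiment 2.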
The neural mechanisms of pitch coding have been debated for more than a century. The two main mechanisms are coding based on the profiles of neural firing rates across auditory nerve fibers with different characteristic frequencies (place-rate coding), and coding based on the phase-locked temporal pattern of neural firing (temporal coding). Phase locking precision can be partly assessed by recording the frequency-following response (FFR), a scalp-recorded electrophysiological response that reflects synchronous activity in subcortical neurons. Although features of the FFR have been widely used as indices of pitch coding acuity, only a handful of studies have directly investigated the relation between the FFR and behavioral pitch judgments. Furthermore, the contribution of degraded neural synchrony (as indexed by the FFR) to the pitch perception impairments of older listeners and those with hearing loss is not well known. Here, the relation between the FFR and pure-tone frequency discrimination was investigated in listeners with a wide range of ages and absolute thresholds, to assess the respective contributions of subcortical neural synchrony and other age-related and hearing loss-related mechanisms to frequency discrimination performance. FFR measures of neural synchrony and absolute thresholds independently contributed to frequency discrimination performance. Age alone, i.e., once the effect of subcortical neural synchrony measures or absolute thresholds had been partialed out, did not contribute to frequency discrimination. Overall, the results suggest that frequency discrimination of pure tones may depend both on phase locking precision and on separate mechanisms affected in hearing loss.
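The partialing logic described above, testing whether age still predicts frequency discrimination once a neural-synchrony measure is removed, can be sketched with simulated data. The variable names, effect sizes, and simulated dataset below are illustrative assumptions, not the study's data or analysis code:

```python
import numpy as np

def partial_corr(x, y, control):
    """Correlation between x and y after regressing `control` out of both."""
    def residuals(v, c):
        # Least-squares fit of v on [1, c]; return what c cannot explain.
        A = np.column_stack([np.ones_like(c), c])
        coef, *_ = np.linalg.lstsq(A, v, rcond=None)
        return v - A @ coef
    return np.corrcoef(residuals(x, control), residuals(y, control))[0, 1]

# Illustrative simulation: age degrades FFR synchrony, and FFR synchrony (not
# age itself) drives frequency-discrimination thresholds (higher = worse).
rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 200)
ffr_synchrony = -0.02 * age + rng.normal(0, 0.2, 200)   # hypothetical synchrony index
fdl = -2.0 * ffr_synchrony + rng.normal(0, 0.3, 200)    # thresholds driven by FFR only

r_simple = np.corrcoef(age, fdl)[0, 1]            # age appears to predict thresholds...
r_partial = partial_corr(age, fdl, ffr_synchrony)  # ...but not once FFR is partialed out
```

In this simulation the simple age correlation is substantial while the partial correlation is near zero, mirroring the pattern reported in the abstract.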
The musical priming paradigm allows for investigation of listeners' expectations based on their implicit knowledge of tonal stability. To date, priming data are limited to reports of facilitated processing for tonic over nontonic events. The special status of the tonic as a cognitive reference point raises the question of the subtlety of listeners' tonal knowledge: Is the facilitated processing observed in priming studies limited to tonic events, or is tone processing influenced by subtler tonal contrasts? The present study investigated tonal priming for mediants (the third scale degree) over leading tones (the seventh scale degree) presented in melodic contexts. Experiment 1 used a timbre discrimination task and Experiment 2 an intonation task. Facilitated processing was observed for the more tonally stable mediants over the less stable leading tones, thus showing that priming effects are not limited to pairs of tonal degrees including the tonic. This finding emphasizes the subtlety of nonexpert listeners' tonal knowledge.
The present study investigated the ERP correlates of the influence of tonal expectations on pitch processing. Participants performed a pitch discrimination task between penultimate and final tones of melodies. These last two tones were a repetition of the same musical note, but penultimate tones were always in tune whereas final tones were slightly out of tune in half of the trials. The pitch discrimination task allowed us to investigate the influence of tonal expectations in attentive listening and, for penultimate tones, without being confounded by decisional processes (occurring on final tones). Tonal expectations were manipulated by a tone change in the first half of the melodies that changed their tonality, hence changing the tonal expectedness of penultimate and final tones without modifying them acoustically. Manipulating tonal expectations with minimal acoustic changes allowed us to focus on the cognitive expectations based on listeners' knowledge of tonal structures. For penultimate tones, tonal expectations modulated processing within the first 100 msec after onset resulting in an Nb/P1 complex that differed in amplitude between tonally related and less related conditions. For final tones, out-of-tune tones elicited an N2/P3 complex and, on in-tune tones only, tonal manipulation elicited an ERAN/RATN-like negativity overlapping with the N2. Our results suggest that cognitive tonal expectations can influence pitch perception at several steps of processing, starting with early attentional selection of pitch.
Musical priming studies have shown that musical event processing is facilitated for the tonally related, stable tonic (chord, tone) in comparison with less-related, less stable events. However, target events have always occurred in the final position of the musical sequences, a position at which the tonic is the most expected event, as it brings closure. Priming data thus contain a confound between tonal stability and end-of-sequence wrap-up processes, comparable to those reported for sentence processing. To investigate musical expectations without this confound, our study removed the tonic's advantage linked to the final position and placed related and less-related targets at various positions within 8-chord sequences. To indicate to-be-processed targets, visual information was synchronized with the presentation of each chord. Data showed higher accuracy and faster correct response times for stable tonic over less stable dominant targets. The musical priming paradigm introduced here contributes to our understanding of listeners' knowledge of tonal hierarchy and provides a new tool for testing musical integration and event processing in musical materials.
It has long been known that some listeners experience hearing difficulties out of proportion with their audiometric losses. Notably, some older adults as well as auditory neuropathy patients have temporal-processing and speech-in-noise intelligibility deficits that are not accounted for by elevated audiometric thresholds. The study of these hearing deficits has been revitalized by recent studies showing that auditory deafferentation comes with aging and can occur even in the absence of an audiometric loss. The present study builds on the stochastic undersampling principle proposed by Lopez-Poveda and Barrios (2013) to account for the perceptual effects of auditory deafferentation. Auditory threshold/duration functions were measured for broadband noises that were stochastically undersampled to varying degrees. Stimuli with and without undersampling were equated for overall energy, in order to focus on the changes that undersampling elicited in the stimulus waveforms rather than on its effects on overall stimulus energy. Stochastic undersampling impaired the detection of short sounds (<20 ms). The detection of long sounds (>50 ms) did not change or improved, depending on the degree of undersampling. The results for short sounds show that stochastic undersampling, and hence presumably deafferentation, can account for the steeper threshold/duration functions observed in auditory neuropathy patients and older adults with (near) normal audiometry. This suggests that deafferentation might be diagnosed using pure-tone audiometry with short tones. It further suggests that the auditory system of audiometrically normal older listeners might not be “slower than normal,” as is commonly thought, but simply less well afferented. Finally, the results for both short and long sounds support probabilistic theories of detectability, which challenge the idea that auditory threshold is reached by integrating sound energy over time.
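The stimulus manipulation described above can be sketched as follows. This is a minimal illustration of stochastic undersampling with energy equalization, assuming independent per-sample dropout; the function, parameter names, and retention probability are our own illustrative choices, not necessarily the implementation of Lopez-Poveda and Barrios (2013):

```python
import numpy as np

def stochastic_undersample(waveform, p_keep, rng):
    """Zero out each sample independently with probability (1 - p_keep),
    then rescale so total energy matches the original waveform."""
    mask = rng.random(waveform.size) < p_keep
    sparse = waveform * mask
    orig_energy = np.sum(waveform ** 2)
    new_energy = np.sum(sparse ** 2)
    if new_energy > 0:
        # Equate overall energy so only the waveform shape changes,
        # mirroring the energy equalization described in the abstract.
        sparse *= np.sqrt(orig_energy / new_energy)
    return sparse

# Illustrative 50-ms broadband noise burst (sampling rate and duration are ours)
rng = np.random.default_rng(1)
fs = 16000
noise = rng.normal(0, 1, int(0.05 * fs))
degraded = stochastic_undersample(noise, p_keep=0.3, rng=rng)
```

After the rescaling step, the degraded stimulus has the same overall energy as the original but a sparser waveform, which is the property the threshold/duration measurements were designed to probe.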