Normal-hearing (NH) listeners maintain robust speech understanding in modulated noise by "glimpsing" portions of speech from a partially masked waveform, a phenomenon known as masking release (MR). Cochlear implant (CI) users, however, generally lack such resilience. In previous studies, temporal masking of speech by noise occurred randomly, obscuring to what degree MR is attributable to the temporal overlap of speech and masker. In the present study, masker conditions were constructed to either promote (+MR) or suppress (−MR) masking release by controlling the degree of temporal overlap. Sentence recognition was measured in 14 CI subjects and 22 young-adult NH subjects. Normal-hearing subjects showed large amounts of masking release in the +MR condition and a marked difference between +MR and −MR conditions. In contrast, CI subjects demonstrated less effect of MR overall, and some displayed modulation interference as reflected by poorer performance in modulated maskers. These results suggest that the poor performance of typical CI users in noise might be accounted for by factors that extend beyond peripheral masking, such as reduced segmental boundaries between syllables or words. Encouragingly, the best CI users tested here could take advantage of masker fluctuations to better segregate the speech from the background.
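The contrast between a steady masker and a fluctuating masker with controlled dips can be illustrated with a minimal sketch. This is not the study's actual stimulus construction; the sampling rate, gating rate, and use of square-wave gating are assumptions chosen only to show the idea of timed dips that expose glimpses of the target speech.

```python
import numpy as np

def make_maskers(duration_s=1.0, fs=16000, gate_hz=10, seed=0):
    """Return (steady, fluctuating) noise maskers of equal length.

    The fluctuating masker is the same noise gated on and off at
    gate_hz, so its silent "dips" leave portions of a concurrent
    speech signal unmasked (the basis of masking release).
    Illustrative only; parameters are hypothetical.
    """
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    noise = rng.standard_normal(n)
    t = np.arange(n) / fs
    # Square-wave gate: 1 during masker-on half-cycles, 0 during dips.
    gate = (np.sin(2 * np.pi * gate_hz * t) > 0).astype(float)
    return noise, noise * gate
```

Aligning the dips with (or away from) informative speech segments is what lets a condition promote or suppress masking release, rather than leaving the overlap to chance.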
This article introduces AUX (AUditory syntaX), a scripting syntax specifically designed to describe auditory signals and processing, to the members of the behavioral research community. The syntax is based on descriptive function names and intuitive operators suitable for researchers and students without substantial training in programming, who wish to generate and examine sound signals using a written script. In this article, the essence of AUX is discussed and practical examples of AUX scripts specifying various signals are illustrated. Additionally, two accompanying Windows-based programs and development libraries are described. AUX Viewer is a program that generates, visualizes, and plays sounds specified in AUX. AUX Viewer can also be used for class demonstrations or presentations. Another program, Psycon, allows a wide range of sound signals to be used as stimuli in common psychophysical testing paradigms, such as the adaptive procedure, the method of constant stimuli, and the method of adjustment. AUX Library is also provided, so that researchers can develop their own programs utilizing AUX. The philosophical basis of AUX is to separate signal generation from the user interface needed for experiments. AUX scripts are portable and reusable; they can be shared by other researchers, regardless of differences in actual AUX-based programs, and reused for future experiments. In short, the use of AUX can be potentially beneficial to all members of the research community—both those with programming backgrounds and those without.
The present study aimed to examine the effect of electrode configuration, specifically monopolar (MP) or bipolar (BP) stimulation, on place pitch discrimination in cochlear implants (CIs). Twelve subjects implanted with the Nucleus Freedom device were presented with various pairs of stimulation across the electrode array, with varying degrees of distance between stimulation sites, and asked to judge the higher of the two in pitch. Each pair was presented either in the same mode or in different modes of stimulation for the within-mode or across-mode condition, respectively, at least 20 times. The result of the within-mode condition revealed that subjects, on average, were able to discriminate pitches significantly better in MP than in BP, with the sensitivity index (d′) for adjacent channels of 1.2 for MP and 0.8 for BP. The result of the across-mode condition revealed that while individual variability existed, there was a strong tendency for CI subjects to perceive a higher pitch in BP stimulation than in MP for a similar site of stimulation. In other words, an MP channel needed to be shifted in a basal direction by as much as two electrodes on average to elicit a pitch comparable to that of a BP channel.
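The sensitivity index d′ reported above is the standard signal-detection measure, d′ = z(H) − z(F), where z is the inverse of the standard normal CDF applied to the hit and false-alarm proportions. A minimal sketch, using hypothetical response rates rather than the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(H) - z(F).

    hit_rate and false_alarm_rate are proportions in (0, 1);
    z is the inverse standard normal CDF.
    """
    z = NormalDist().inv_cdf  # standard normal, mean 0, sd 1
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical example: 75% hits, 25% false alarms gives d' ~ 1.35,
# roughly the discriminability reported here for adjacent MP channels.
print(round(d_prime(0.75, 0.25), 2))
```

Rates of exactly 0 or 1 must be adjusted (e.g., with a small correction) before applying the inverse CDF, since z is undefined at the endpoints.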
Normal-hearing listeners show masking release, or better speech understanding in a fluctuating-amplitude masker than in a steady-amplitude masker, but most cochlear implant (CI) users consistently show little or no masking release even in artificial conditions where masking release is highly anticipated. The current study examined the hypothesis that the reduced or absent masking release in CI users is due to disruption of linguistic segmentation cues. Eleven CI subjects completed a sentence keyword identification task in a steady masker and a fluctuating masker with dips timed to increase speech availability. Lexical boundary errors in their responses were categorized as consistent or inconsistent with the use of the metrical segmentation strategy (MSS). Subjects who demonstrated masking release showed greater adherence to the MSS in the fluctuating masker compared to subjects who showed little or no masking release, while both groups used metrical segmentation cues similarly in the steady masker. Based on the characteristics of the segmentation cues, the results are interpreted as evidence that CI listeners showing little or no masking release are not reliably segregating speech from competing sounds, further suggesting that one challenge faced by CI users listening in noisy environments is a reduction of reliable segmentation cues.