Despite their remarkable success in bringing spoken language to hearing-impaired listeners, the signal transmitted through cochlear implants (CIs) remains impoverished in spectro-temporal fine structure. As a consequence, pitch-dominant information, such as voice emotion, is diminished. For young children, the ability to correctly identify the mood/intent of the speaker (which may not always be visible in their facial expression) is an important aspect of social and linguistic development. Previous work in the field has shown that children with cochlear implants (cCI) have significant deficits in voice emotion recognition relative to their normally hearing peers (cNH). Here, we report on voice emotion recognition by a cohort of 36 school-aged cCI. Additionally, we provide, for the first time, a comparison of their performance to that of cNH and normally hearing adults (aNH) listening to CI simulations of the same stimuli. We also provide comparisons to the performance of adult listeners with CIs (aCI), most of whom learned language primarily through normal acoustic hearing. Results indicate that, despite strong variability, cCI on average perform similarly to their adult counterparts; that both groups' mean performance is similar to that of aNH with 8-channel noise-vocoded speech; and that cNH achieve excellent scores in voice emotion recognition with full-spectrum speech but, on average, score significantly more poorly than aNH with 8-channel noise-vocoded speech. A strong developmental effect was observed in the cNH with noise-vocoded speech in this task. These results point to the considerable benefit that cochlear-implanted children obtain from their devices, but also underscore the need for further research and development in this important and neglected area.
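For readers unfamiliar with the simulation technique referenced above: an 8-channel noise vocoder of the kind conventionally used for CI simulations splits speech into a small number of frequency bands, extracts each band's temporal envelope, and uses those envelopes to modulate band-limited noise carriers. The sketch below is a minimal Python implementation under assumed parameters (logarithmically spaced band edges from 100 Hz to 8 kHz, fourth-order Butterworth filters, Hilbert envelopes); the study's exact vocoder settings are not specified here, and many implementations instead extract envelopes by half-wave rectification and low-pass filtering.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    """Noise-vocode `signal` (fs must exceed 2 * f_hi): split into
    log-spaced bands, extract each band's envelope, and use it to
    modulate noise restricted to the same band."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    noise = np.random.randn(len(signal))              # broadband noise carrier
    out = np.zeros(len(signal))
    for k in range(n_channels):
        sos = butter(4, [edges[k], edges[k + 1]], btype="bandpass",
                     fs=fs, output="sos")
        band = sosfilt(sos, signal)                   # analysis band
        envelope = np.abs(hilbert(band))              # Hilbert envelope
        out += envelope * sosfilt(sos, noise)         # band-limited noise carrier
    return out / (np.max(np.abs(out)) + 1e-12)        # peak-normalize
```

Lowering `n_channels` further degrades spectral resolution, which is how vocoder studies manipulate the availability of the spectral cues compared across groups above.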
These results indicate that cognitive function and age play important roles in children's ability to process emotional prosody in spectrally degraded speech. The lack of an interaction between the degree of spectral resolution and children's age further suggests that younger and older children with cochlear implants may benefit similarly from technical advances that improve spectral resolution.
Two experiments investigated the ability of 17 school-aged children to process purely temporal and spectro-temporal cues that signal changes in pitch. Percentage correct was measured for the discrimination of sinusoidal amplitude modulation rate (AMR) of broadband noise in experiment 1 and for the discrimination of fundamental frequency (F0) of broadband sine-phase harmonic complexes in experiment 2. The reference AMR was 100 Hz, as was the reference F0. A child-friendly interface helped listeners remain attentive to the task. Data were fitted using a maximum-likelihood technique that extracted threshold, slope, and lapse rate. All thresholds were subsequently standardized to a common d′ value of 0.77. There were relatively large individual differences across listeners: eight had relatively adult-like thresholds in both tasks and nine had higher thresholds. However, these individual differences did not vary systematically with age over the 6-16 yr span. Thresholds were correlated across the two tasks and were about nine times finer for F0 discrimination than for AMR discrimination, as has been previously observed in adults.
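To make the fitting procedure concrete, the sketch below fits a logistic psychometric function with threshold, slope, and lapse-rate parameters by maximum likelihood, then standardizes the threshold to d′ = 0.77; in a two-alternative forced-choice task that d′ corresponds to roughly 70.7% correct, via Pc = Φ(d′/√2). The stimulus levels, trial counts, logistic form, and 2AFC guess rate are illustrative assumptions, not the study's actual data or model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def psychometric(x, thresh, slope, lapse, guess=0.5):
    """Logistic psychometric function with guess and lapse rates."""
    p = 1.0 / (1.0 + np.exp(-slope * (x - thresh)))
    return guess + (1.0 - guess - lapse) * p

def neg_log_likelihood(params, x, n_correct, n_trials):
    """Binomial negative log-likelihood of the data given the fit."""
    thresh, slope, lapse = params
    p = np.clip(psychometric(x, thresh, slope, lapse), 1e-6, 1.0 - 1e-6)
    return -np.sum(n_correct * np.log(p)
                   + (n_trials - n_correct) * np.log(1.0 - p))

# Hypothetical per-level data: stimulus levels in log10(% difference)
x = np.log10(np.array([1.0, 2.0, 4.0, 8.0, 16.0]))
n_correct = np.array([11, 13, 16, 19, 20])
n_trials = np.full(5, 20)

fit = minimize(neg_log_likelihood, x0=[np.log10(4.0), 2.0, 0.02],
               args=(x, n_correct, n_trials),
               bounds=[(x.min(), x.max()), (0.1, 20.0), (0.0, 0.1)])
thresh, slope, lapse = fit.x

# Standardize: d' = 0.77 in 2AFC gives Pc = Phi(0.77 / sqrt(2)) ~ 0.707,
# so the standardized threshold is the level where the fit crosses that Pc.
target_pc = norm.cdf(0.77 / np.sqrt(2.0))
grid = np.linspace(x.min(), x.max(), 1000)
std_threshold = grid[np.argmin(
    np.abs(psychometric(grid, thresh, slope, lapse) - target_pc))]
```

Standardizing every listener's threshold to a common d′ makes thresholds comparable even when fitted slopes differ across listeners.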
This study investigated whether recognition of time-compressed speech predicts recognition of natural fast-rate speech, and whether this relationship is influenced by listener age. High- and low-context sentences were presented to younger and older normal-hearing adults at a normal speech rate, a naturally fast speech rate, and a fast rate implemented by time-compressing the normal-rate sentences. Recognition of time-compressed sentences overestimated recognition of natural fast sentences for both groups, especially for older listeners. The findings suggest that older listeners are at a much greater disadvantage when listening to natural fast speech than would be predicted by their recognition performance for time-compressed speech.
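For illustration, uniform time compression of the kind described here is typically produced with a time-scale modification algorithm (e.g., a phase vocoder or WSOLA) that shortens duration without shifting pitch. A minimal sketch using librosa's phase-vocoder-based stretcher follows; the file names and the 65% compression ratio are assumptions for illustration, not the study's values.

```python
import librosa
import soundfile as sf

# Load a normal-rate sentence recording (path is illustrative)
y, sr = librosa.load("sentence_normal_rate.wav", sr=None)

# Compress to 65% of the original duration; rate > 1 speeds speech up
# without shifting pitch (65% is an assumed ratio, not the study's value)
y_fast = librosa.effects.time_stretch(y, rate=1.0 / 0.65)
sf.write("sentence_compressed.wav", y_fast, sr)
```

One commonly cited reason compressed speech can overestimate natural-fast-rate recognition is that uniform compression shortens all segments, including pauses, by the same factor, whereas natural fast speech also changes articulation and timing in less uniform ways.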
In the real world, listeners often need to track multiple simultaneous sources in order to maintain awareness of the relevant sounds in their environments. Thus, there is reason to believe that simple single-source sound localization tasks may not accurately capture the impact that a listening device such as a hearing aid might have on a listener's level of auditory awareness. In this experiment, 10 normal-hearing listeners and 20 hearing-impaired listeners were tested in three listening tasks of increasing complexity: a single-source localization task, where listeners identified and localized a single sound source presented in isolation; an added-source task, where listeners identified and localized a source that was added to an existing auditory scene; and a removed-source task, where listeners identified and localized a source that was removed from an existing auditory scene. Hearing-impaired listeners completed these tasks with and without the use of their previously fit hearing aids. As expected, the results show that performance decreased both with increasing task complexity and with the number of competing sound sources in the acoustic scene. The results also show that the added-source task was as sensitive to differences in performance across listening conditions as the standard localization task, but that it correlated with a different pattern of subjective and objective performance measures across listeners. This result suggests that a measure of complex auditory situation awareness such as the one tested here may be a useful tool for evaluating differences in performance across different types of listening devices, such as hearing aids or hearing protection devices.
Although many studies have evaluated the performance of virtual audio displays with normal-hearing listeners, very little information is available on the effect that hearing loss has on the localization of virtual sounds. In this study, normal-hearing (NH) and hearing-impaired (HI) listeners were asked to localize noise stimuli with short (250 ms), medium (1000 ms), and long (4000 ms) durations both in the free field and with a non-individualized head-tracked virtual audio display. The results show that the HI listeners localized sounds less accurately than the NH listeners, and that both groups consistently localized virtual sounds less accurately than free-field sounds. These results indicate that HI listeners are sensitive to individual differences in head-related transfer functions (HRTFs), which means that they might have difficulty using auditory display systems that rely on generic HRTFs to control the apparent locations of virtual sounds. However, the results also reveal a high correlation between free-field and virtual localization performance in the HI listeners. This suggests that it may be feasible to use non-individualized virtual audio display systems to predict the auditory localization performance of HI listeners in clinical environments where free-field speaker arrays are not available.
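The accuracy comparisons in the two localization studies above rest on an angular-error metric. The sketch below shows one conventional way to compute unsigned azimuth error and the across-listener free-field/virtual correlation; the per-listener error values are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

def azimuth_error(target_deg, response_deg):
    """Unsigned azimuth error in degrees, wrapped to [0, 180]."""
    d = np.abs(np.asarray(response_deg) - np.asarray(target_deg)) % 360.0
    return np.minimum(d, 360.0 - d)

# Hypothetical mean localization errors (degrees) for five HI listeners
free_field = np.array([12.0, 18.5, 9.3, 22.1, 15.7])
virtual = np.array([17.2, 25.0, 13.1, 30.4, 21.8])

r, p = pearsonr(free_field, virtual)  # across-listener correlation
```

Wrapping the difference to [0, 180] prevents responses near the 0/360 degree boundary from producing spuriously large errors.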
A test that measures speech recognition in the presence of a spatially separated competing talker would be useful for measuring suprathreshold speech-in-noise deficits that cannot be readily predicted from a standard audiometric evaluation. Including such a test in the clinical battery could help reduce the gap between patients' complaints and the results of their clinical evaluation.
Purpose: The objectives of this study were to (a) describe normative ranges—expressed as reference intervals (RIs)—for vestibular and balance function tests in a cohort of Service Members and Veterans (SMVs) and (b) describe the interrater reliability of these tests. Method: As part of the Defense and Veterans Brain Injury Center (DVBIC)/Traumatic Brain Injury Center of Excellence 15-year Longitudinal Traumatic Brain Injury (TBI) Study, participants completed the following: vestibulo-ocular reflex suppression, visual-vestibular enhancement, subjective visual vertical, subjective visual horizontal, sinusoidal harmonic acceleration, the computerized rotational head impulse test (crHIT), and the sensory organization test. RIs were calculated using nonparametric methods, and interrater reliability was assessed using intraclass correlation coefficients between three audiologists who independently reviewed and cleaned the data. Results: Reference populations for each outcome measure comprised 40 to 72 individuals, 19 to 61 years of age, who served either as noninjured controls (NIC) or injured controls (IC) in the 15-year study; none had a history of TBI or blast exposure. A subset of 15 SMVs from the NIC, IC, and TBI groups were included in the interrater reliability calculations. RIs are reported for 27 outcome measures from the seven rotational vestibular and balance tests. Interrater reliability was considered excellent for all tests except the crHIT, which was found to have good interrater reliability. Conclusion: This study provides clinicians and scientists with important information regarding normative ranges and interrater reliability for rotational vestibular and balance tests in SMVs.
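As a sketch of the two statistical procedures named in the Method, the code below computes a nonparametric reference interval from the empirical 2.5th and 97.5th percentiles and an intraclass correlation from long-format ratings using pingouin. All values are hypothetical, and the specific ICC form the study reported (pingouin returns several) is not stated here.

```python
import numpy as np
import pandas as pd
import pingouin as pg

def nonparametric_ri(values, lo=2.5, hi=97.5):
    """Central 95% reference interval from empirical percentiles,
    the standard nonparametric approach."""
    return np.percentile(np.asarray(values, dtype=float), [lo, hi])

# Illustrative gain values from a rotational test across a reference group
gains = np.random.default_rng(0).normal(0.95, 0.08, size=60)
ri_low, ri_high = nonparametric_ri(gains)

# Interrater reliability: long-format ratings from three raters (hypothetical)
df = pd.DataFrame({
    "subject": np.repeat([f"s{i}" for i in range(1, 6)], 3),
    "rater":   ["a", "b", "c"] * 5,
    "score":   [0.82, 0.85, 0.80, 0.61, 0.63, 0.60,
                0.90, 0.92, 0.88, 0.75, 0.74, 0.78,
                0.55, 0.58, 0.52],
})
icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="rater", ratings="score")
```

By common benchmarks (e.g., Koo & Li, 2016), ICC values above 0.90 are described as excellent and values between 0.75 and 0.90 as good, which matches the labels used in the Results above.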