The aim of this study was to obtain an objective estimate of individual, complete loudness growth functions based on auditory steady-state responses (ASSRs). Both normal-hearing and hearing-impaired listeners took part in two behavioral loudness growth tasks and one EEG recording session. Behavioral loudness growth was measured with Absolute Magnitude Estimation and a Graphic Rating Scale with loudness categories. Stimuli were sinusoidally amplitude-modulated sinusoids with carrier frequencies of either 500 Hz or 2000 Hz, a modulation frequency of 40 Hz, and a duration of 1 s, presented at intensities spanning each participant's dynamic range. Auditory steady-state responses were evoked by the same stimuli, presented for at least 5 min. Results showed good correspondence between the relative growth of the ASSR amplitudes and the behavioral loudness growth responses for each participant in both groups of listeners. This demonstrates the potential for a more individual, objective, and automatic fitting of hearing aids in future clinical practice.
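The ASSR amplitude referred to above is conventionally estimated as the magnitude of the EEG spectrum at the stimulus modulation frequency. A minimal sketch of that estimate (a generic spectral method, not the authors' analysis pipeline; `eeg`, `fs`, and `fm` are our illustrative names):

```python
import numpy as np

def assr_amplitude(eeg, fs, fm=40.0):
    """Estimate the ASSR amplitude as the spectral magnitude of the
    EEG signal at the modulation frequency fm (Hz).

    eeg : 1-D array of EEG samples
    fs  : sampling rate in Hz
    """
    n = len(eeg)
    spectrum = np.fft.rfft(eeg) / n              # normalized one-sided spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - fm))      # bin closest to fm
    return 2.0 * np.abs(spectrum[bin_idx])       # peak amplitude of the 40-Hz component
```

For a recording containing a pure 40-Hz component of amplitude A (and an integer number of modulation cycles in the window), the function returns approximately A; in practice the response is extracted from minutes-long recordings, which is why the study used stimulus durations of at least 5 min.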
The decrease in ASSR amplitude over time (92 s) is small. Consequently, it is safe to use ASSRs in clinical practice, and no additional correction factors are needed for objective hearing assessments. Because only small amplitude decreases were found, loudness adaptation is probably not reflected in the ASSRs.
At low sensation levels, loudness adaptation is described as a decrease in loudness judgment over time of a steady, fixed-intensity auditory stimulus, presented monaurally. Similarly, at high sensation levels, loudness enhancement is described as an increase in loudness judgment over time. In the present study, loudness adaptation and loudness enhancement were measured for unmodulated sinusoids, sinusoidally amplitude-modulated sinusoids, and mixed-modulated sinusoids. Each stimulus had a carrier frequency of 500 or 2000 Hz, was presented at 30 or 70 dB SL (sensation level), and had a duration of 302 s. The modulation frequency was 40 Hz. Loudness adaptation percentages were measured using a successive magnitude estimation task. At 70 dB SL, small amounts of loudness enhancement were found at 500 Hz, and no loudness enhancement was found at 2000 Hz, for all modulation types. At 30 dB SL, loudness adaptation was found for all modulation types, with more adaptation at 2000 Hz than at 500 Hz. A difference in loudness adaptation was found between modulation types, with a more pronounced difference at 500 Hz. Mixed-modulated sinusoids showed more adaptation than unmodulated sinusoids, which in turn showed more adaptation than sinusoidally amplitude-modulated sinusoids. The sinusoidally amplitude-modulated stimulus condition no longer showed loudness adaptation at 30 dB SL and 500 Hz. In addition, where loudness adaptation occurred, moderate but significant correlations were found between the participants' absolute thresholds in dB SPL and their loudness adaptation percentages at 30 dB SL.
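The sinusoidally amplitude-modulated stimulus used in these studies follows the standard SAM definition: a carrier sinusoid whose envelope varies sinusoidally at the modulation rate. A minimal synthesis sketch (assuming digital generation at a common audio sample rate; function and parameter names are ours):

```python
import numpy as np

def sam_tone(fc, fm, dur, fs=44100, m=1.0):
    """Sinusoidally amplitude-modulated (SAM) sinusoid:
    carrier fc (Hz), modulation rate fm (Hz), duration dur (s),
    sample rate fs (Hz), modulation depth m (0..1).
    With m=0 this reduces to an unmodulated sinusoid."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + m * np.sin(2.0 * np.pi * fm * t)   # sinusoidal envelope
    return envelope * np.sin(2.0 * np.pi * fc * t)      # modulated carrier

# e.g. a 1-s, 500-Hz carrier fully modulated at 40 Hz
x = sam_tone(fc=500, fm=40, dur=1.0)
```

A mixed-modulated sinusoid combines amplitude and frequency modulation at the same rate; it is omitted here to keep the sketch minimal.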
In Part I, we investigated 40-Hz auditory steady-state response (ASSR) amplitudes for the use of objective loudness balancing across the ears for normal-hearing participants and found median across-ear ratios in ASSR amplitudes close to 1. In this part, we further investigated whether the ASSR can be used to estimate binaural loudness balance for listeners with asymmetric hearing, for whom binaural loudness balancing is of particular interest. We tested participants with asymmetric hearing and participants with bimodal hearing, who hear with electrical stimulation through a cochlear implant (CI) in one ear and with acoustical stimulation in the other ear. Behavioral loudness balancing was performed at different percentages of the dynamic range. Acoustical carrier frequencies were 500, 1000, or 2000 Hz, and CI channels were stimulated in apical or middle regions in the cochlea. For both groups, the ASSR amplitudes at balanced loudness levels were similar for the two ears, with median ratios between left and right ear stimulation close to 1. However, individual variability was observed. For participants with asymmetric hearing loss, the difference between the behavioral balanced levels and the ASSR-predicted balanced levels was smaller than 10 dB in 50% and 56% of cases, for 500 Hz and 2000 Hz, respectively. For bimodal listeners, these percentages were 89% and 60%. Apical CI channels yielded significantly better results (median difference near 0 dB) than middle CI channels, which had a median difference of −7.25 dB.
Background: Ecological momentary assessment (EMA) methods allow for real-time, real-world survey data collection. Studies with adults have reported EMA as a feasible and valid tool for measuring real-world listening experience. Research is needed to investigate the use of EMA with children who wear hearing aids. Objectives: This study explored the implementation of EMA with children using a single-blinded repeated-measures design to evaluate real-world aided outcome. Methods: Twenty-nine children, aged 7-17, used manual program switching to access hearing aid programs fitted according to Desired Sensation Level (DSL) version 5.0 child quiet and noise prescriptive targets. Aided outcome was measured using participant-triggered twice-daily EMA entries, across listening situations and hearing dimensions. Results: Adherence to the EMA protocol by the children was high (82.4% compliance rate). Speech loudness, understanding, and preference results were found to relate to both the hearing aid program and the listening situation. Aided outcomes related to prescription-based noise management were highest in noisy situations. Conclusions: Mobile device-based EMA methods can be used to characterize daily life listening experience with children. Prescription-based noise management was found to decrease perceived loudness in noisy, non-school environments; this should be evaluated in combination with hearing aid noise reduction features.
Background: People who use a cochlear implant together with a contralateral hearing aid—so-called bimodal listeners—have poor localisation abilities, and sounds are often not balanced in loudness across ears. To address the latter, a loudness balancing algorithm was created, which equalises the loudness growth functions for the two ears. The algorithm uses loudness models to continuously adjust the two signals to loudness targets. Previous tests demonstrated improved binaural balance, improved localisation, and better speech intelligibility in quiet for soft phonemes. In those studies, however, all stimuli were preprocessed, so spontaneous head movements and individual head-related transfer functions were not taken into account. Furthermore, the hearing aid processing was linear. Study design: In the present study, we simplified the acoustical loudness model and implemented the algorithm in a real-time system. We tested bimodal listeners on speech perception and on sound localisation, both in normal loudness growth configuration and in a configuration with a modified loudness growth function. We also used linear and compressive hearing aids. Results: The comparison between the original acoustical loudness model and the new simplified model showed loudness differences below 3% for almost all tested speech-like stimuli and levels. We found no effect of balancing the loudness growth across ears on speech perception ability in quiet or in noise. We found some small improvements in localisation performance. Further investigation with a larger sample size is required.
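The core balancing idea above—equalising loudness growth across ears—can be sketched as follows. This is a simplified illustration, not the authors' implementation: the piecewise-linear loudness growth functions are hypothetical, and matching is done by interpolation rather than a full loudness model.

```python
import numpy as np

# Hypothetical loudness growth functions: perceived loudness
# (arbitrary units) measured at a set of input levels (dB) per ear.
levels = np.array([20.0, 40.0, 60.0, 80.0])     # input level, dB
loud_left = np.array([0.5, 2.0, 8.0, 30.0])     # reference ear
loud_right = np.array([0.2, 1.0, 5.0, 25.0])    # ear to be adjusted

def balanced_level(level_left):
    """Return the right-ear level producing the same loudness as
    `level_left` does on the left ear (piecewise-linear growth)."""
    target = np.interp(level_left, levels, loud_left)   # left-ear loudness
    return np.interp(target, loud_right, levels)        # invert right-ear growth
```

With these example curves, a 60-dB left-ear input maps to a slightly higher right-ear level, since the right ear is assumed less sensitive. A real-time system would apply such a mapping continuously, per frequency band, driven by loudness models for acoustic and electric hearing.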