Objectives The purpose of the study was to characterize the psychometric functions that describe task performance in dual-task listening effort measures as a function of signal-to-noise ratio (SNR). Design Younger adults with normal hearing (YNH, n = 24; Experiment 1) and older adults with hearing impairment (OHI, n = 24; Experiment 2) were recruited. Dual-task paradigms, wherein the participants performed a primary speech recognition task simultaneously with a secondary task, were conducted across a wide range of SNRs. Two different secondary tasks were used: an easy task (i.e., a simple visual reaction-time task) and a hard task (i.e., the incongruent Stroop test). Reaction time (RT) quantified performance on the secondary task. Results For both participant groups and for both easy and hard secondary tasks, the curves describing RT as a function of SNR were peak-shaped: RT increased as the SNR moved from favorable to intermediate values, and then decreased as the SNR moved from intermediate to unfavorable values. RT reached its peak (longest time) at the SNRs at which the participants could understand 30% to 50% of the speech. In Experiments 1 and 2, the dual-task trials with the same SNR were conducted in one block. To determine whether the peaked shape of the RT curves was specific to this blocked SNR presentation order, additional YNH participants were recruited (n = 25; Experiment 3) and dual-task measures wherein the SNR varied from trial to trial (i.e., non-blocked) were conducted. The results indicated that, as in the first two experiments, the RT curves had a peaked shape. Conclusions Secondary task performance was poorer at the intermediate SNRs than at the favorable and unfavorable SNRs. This pattern was observed for both YNH and OHI participants and was not affected by either task type (easy or hard secondary task) or SNR presentation order (blocked or non-blocked).
The shorter RT at the unfavorable SNRs (speech intelligibility < 30%) may reflect that the participants experienced cognitive overload and/or disengaged from the listening task. The implications of using the dual-task paradigm as a listening effort measure are discussed.
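Relating percent-correct speech intelligibility to SNR, as in the 30% to 50% anchor points above, is conventionally done by fitting a psychometric function to speech recognition scores. The sketch below uses a logistic form; the midpoint (`snr50`) and slope values are illustrative placeholders, not parameters from the study.

```python
import math

def psychometric(snr_db, snr50=-5.0, slope=0.4):
    """Logistic psychometric function: proportion of speech understood vs. SNR (dB).
    snr50 is the SNR at 50% correct; slope controls steepness (per dB)."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - snr50)))

def snr_at(p, snr50=-5.0, slope=0.4):
    """Inverse mapping: the SNR (dB) at which proportion correct equals p."""
    return snr50 + math.log(p / (1.0 - p)) / slope

# e.g., the SNR region where intelligibility falls between 30% and 50%:
snr_30 = snr_at(0.30)   # below snr50
snr_50 = snr_at(0.50)   # equals snr50 by definition
```

With such a fit in hand, the SNRs at which RT peaks can be read off directly from the intelligibility axis rather than reported in raw dB.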
Objectives: The purpose of the current study was to investigate the laboratory efficacy and real-world effectiveness of advanced directional microphones (DM) and digital noise reduction (NR) algorithms (i.e., premium DM/NR features) relative to basic-level DM/NR features of contemporary hearing aids (HAs). The study also examined the effect of premium HAs relative to basic HAs and the effect of DM/NR features relative to no features. Design: Fifty-four older adults with mild-to-moderate hearing loss completed a single-blinded crossover trial. Two HA models, one a less-expensive, basic-level device (basic HA) and the other a more-expensive, advanced-level device (premium HA), were used. The DM/NR features of the basic HAs (i.e., basic features) were adaptive DMs and gain-reduction NR with fewer channels. In contrast, the DM/NR features of the premium HAs (i.e., premium features) included adaptive DMs and gain-reduction NR with more channels, bilateral beamformers, speech-seeking DMs, pinna-simulation directivity, reverberation reduction, impulse noise reduction, wind noise reduction, and spatial noise reduction. The trial consisted of four conditions, which were factorial combinations of HA model (premium vs. basic) and DM/NR feature status (on vs. off). In order to blind participants regarding the HA technology, no technology details were disclosed and minimal training on how to use the features was provided. In each condition, participants wore bilateral HAs for five weeks. Outcomes regarding speech understanding, listening effort, sound quality, localization, and HA satisfaction were measured using laboratory tests, retrospective self-reports (i.e., standardized questionnaires), and in-situ self-reports (i.e., self-reports completed in the real world in real time). A smartphone-based ecological momentary assessment system was used to collect in-situ self-reports.
The present study indicated that visual cues and diffuse noise were exceedingly common in real-world speech listening situations, while environments with negative SNRs were relatively rare. The characteristics of speech level, noise level, and SNR, together with the PLS information reported by the present study, can be useful for researchers aiming to design ecologically valid assessment procedures to estimate real-world speech communicative functions for older adults with hearing loss.
Background Ecological momentary assessment (EMA) is a methodology involving repeated assessments/surveys to collect data describing respondents’ current or very recent experiences and related contexts in their natural environments. The use of EMA in audiology research is growing. Purpose This study examined the construct validity (i.e., the degree to which a measurement reflects what it is intended to measure) of EMA in terms of measuring speech understanding and related listening context. Experiment 1 investigated the extent to which individuals can accurately report their speech recognition performance and characterize the listening context in controlled environments. Experiment 2 investigated whether the data aggregated across multiple EMA surveys conducted in uncontrolled, real-world environments would reveal a valid pattern that was consistent with the established relationships between speech understanding, hearing aid use, listening context, and lifestyle. Research Design This is an observational study. Study Sample Twelve and twenty-seven adults with hearing impairment participated in Experiments 1 and 2, respectively. Data Collection and Analysis In the laboratory testing of Experiment 1, participants estimated their speech recognition performance in settings wherein the signal-to-noise ratio was fixed or constantly varied across sentences. In the field testing the participants reported the listening context (e.g., noisiness level) of several semicontrolled real-world conversations. Their reports were compared to (1) the context described by normal-hearing observers and (2) the background noise level measured using a sound level meter. In Experiment 2, participants repeatedly reported the degree of speech understanding, hearing aid use, and listening context using paper-and-pencil journals in their natural environments for 1 week. They also carried noise dosimeters to measure the sound level. 
The associations between (1) speech understanding, hearing aid use, and listening context, (2) dosimeter sound level and self-reported noisiness level, and (3) dosimeter data and lifestyle quantified using the journals were examined. Results For Experiment 1, the reported and measured speech recognition scores were highly correlated across all test conditions (r = 0.94 to 0.97). The field testing results revealed that most listening context properties reported by the participants were highly consistent with those described by the observers (74–95% consistency), except for noisiness rating (58%). Nevertheless, higher noisiness rating was associated with higher background noise level. For Experiment 2, the EMA results revealed several associations: better speech understanding was associated with the use of hearing aids, front-located speech, and lower dosimeter sound level; higher noisiness rating was associated with higher dosimeter sound level; listeners with more diverse lifestyles tended to have higher dosimeter sound levels. Conclusions Adults with hearing impairment were able to report their listeni...
Purpose The aim of this study was to compare the benefit of self-adjusted personal sound amplification products (PSAPs) to audiologist-fitted hearing aids based on speech recognition, listening effort, and sound quality in ecologically relevant test conditions to estimate real-world effectiveness. Method Twenty-five older adults with bilateral mild-to-moderate hearing loss completed the single-blinded, crossover study. Participants underwent aided testing using 3 PSAPs and a traditional hearing aid, as well as unaided testing. PSAPs were adjusted based on participant preference, whereas the hearing aid was configured using best-practice verification protocols. Audibility provided by the devices was quantified using the Speech Intelligibility Index (American National Standards Institute, 2012). Outcome measures assessing speech recognition, listening effort, and sound quality were administered in ecologically relevant laboratory conditions designed to represent real-world speech listening situations. Results All devices significantly improved Speech Intelligibility Index compared to unaided listening, with the hearing aid providing more audibility than all PSAPs. Results further revealed that, in general, the hearing aid improved speech recognition performance and reduced listening effort significantly more than all PSAPs. Few differences in sound quality were observed between devices. All PSAPs improved speech recognition and listening effort compared to unaided testing. Conclusions Hearing aids fitted using best-practice verification protocols were capable of providing more aided audibility, better speech recognition performance, and lower listening effort compared to the PSAPs tested in the current study. Differences in sound quality between the devices were minimal. 
However, because all PSAPs tested in the study significantly improved participants' speech recognition performance and reduced listening effort compared to unaided listening, PSAPs could serve as a budget-friendly option for those who cannot afford traditional amplification.
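The Speech Intelligibility Index used above to quantify aided audibility is, in simplified form, an importance-weighted sum of band audibilities. The sketch below is a minimal illustration under simplifying assumptions: the band weights are illustrative placeholders (not the ANSI S3.5 values), and audibility is approximated as a linear function of band SNR between -15 and +15 dB.

```python
def band_audibility(snr_db, floor=-15.0, ceiling=15.0):
    """Map a band SNR (dB) to audibility in [0, 1], linear between floor and ceiling."""
    return min(1.0, max(0.0, (snr_db - floor) / (ceiling - floor)))

def simplified_sii(band_snrs_db, importance):
    """Importance-weighted sum of band audibilities; weights should sum to 1."""
    assert abs(sum(importance) - 1.0) < 1e-6, "importance weights must sum to 1"
    return sum(w * band_audibility(s) for w, s in zip(importance, band_snrs_db))

# Illustrative octave-band weights, 250 Hz to 8 kHz (placeholders, not ANSI values):
weights = [0.10, 0.15, 0.25, 0.25, 0.15, 0.10]

well_aided = simplified_sii([20, 18, 15, 12, 10, 8], weights)   # near 1
adverse = simplified_sii([0, -2, -5, -5, -8, -10], weights)     # much lower
```

A higher value means more of the importance-weighted speech spectrum is audible, which is why aided conditions with greater audibility yield a higher index than unaided listening.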
Compared with linear processing, wide dynamic range compression (WDRC) creates a noisier sound image and makes listeners less willing to accept noise. However, this negative effect on noise acceptance can be offset by digital noise reduction (DNR), regardless of microphone mode. The hearing aid output SNR derived using the phase-inversion technique can predict the aided acceptable noise level (ANL) across different combinations of signal-processing schemes. These results suggest a close relationship between aided ANL, signal-processing scheme, and hearing aid output SNR.
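The phase-inversion technique referenced above can be illustrated with synthetic signals: the processed mixture is recorded twice, once with the noise polarity inverted at the input, and the sum and difference of the two recordings recover the speech and noise components, from which the output SNR is computed. The sketch below uses placeholder sine and noise signals and assumes the processing is linear and time-invariant; with nonlinear features such as WDRC the separation is only approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 16000, 1.0
t = np.arange(int(fs * dur)) / fs

# Hypothetical stand-ins for speech and noise at the hearing aid output.
speech = np.sin(2 * np.pi * 500 * t)        # "speech" placeholder
noise = 0.5 * rng.standard_normal(t.size)   # "noise" placeholder

# Two recordings: identical speech, noise polarity inverted in the second pass.
rec_a = speech + noise   # output with original noise
rec_b = speech - noise   # output with inverted noise

# Sum/difference separates the components (exact only for linear processing).
speech_est = 0.5 * (rec_a + rec_b)
noise_est = 0.5 * (rec_a - rec_b)

snr_db = 10 * np.log10(np.mean(speech_est**2) / np.mean(noise_est**2))
```

In practice the two recordings are made at the output of the aid while the noise at the input has its phase flipped; the estimated output SNR is then what is related to the aided ANL.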
Objectives The dual-task paradigm has been widely used to measure listening effort. The primary objectives of the study were to (1) investigate the effect of hearing aid amplification and a hearing aid directional technology on listening effort measured by a more complex, real-world dual-task paradigm, and (2) compare the results obtained with this paradigm to those from a simpler laboratory-style dual-task paradigm. Design The listening effort of adults with hearing impairment was measured using two dual-task paradigms, wherein participants performed a speech recognition task simultaneously with either a driving task in a simulator or a visual reaction-time task in a sound-treated booth. The speech materials and road noises for the speech recognition task were recorded in a van traveling on the highway in three hearing aid conditions: unaided, aided with omnidirectional processing (OMNI), and aided with directional processing (DIR). The change in the driving task or the visual reaction-time task performance across the conditions quantified the change in listening effort. Results Compared to the driving-only condition, driving performance declined significantly with the addition of the speech recognition task. Although the speech recognition score was higher in the OMNI and DIR conditions than in the unaided condition, driving performance was similar across these three conditions, suggesting that listening effort was not affected by amplification and directional processing. Results from the simple dual-task paradigm showed a similar trend: hearing aid technologies improved speech recognition performance, but did not affect performance in the visual reaction-time task (i.e., reduce listening effort). The correlation between listening effort measured using the driving paradigm and the visual reaction-time task paradigm was significant.
The finding that our older participants' (56 to 85 years old) better speech recognition performance did not result in reduced listening effort was inconsistent with literature evaluating younger (approximately 20 years old) adults with normal hearing. A follow-up study was therefore conducted, in which the visual reaction-time dual-task experiment using the same speech materials and road noises was repeated with younger adults with normal hearing. In contrast to the findings with older participants, the results indicated that the directional technology significantly improved performance in both the speech recognition and visual reaction-time tasks. Conclusions Adding a speech listening task to driving undermined driving performance. Hearing aid technologies significantly improved speech recognition while driving, but did not significantly reduce listening effort. Listening effort measured by dual-task experiments using a simulated real-world driving task and a conventional laboratory-style task was generally consistent. For a given listening environment, the benefit of hearing aid technologies on listening effort measured from younger adults with normal hearing m...
The results suggested the possibility of directly comparing ANL measures carried out in different countries using different languages. However, it remains unclear if the ISTS can serve as an international ANL stimulus.