In the framework of the European HearCom project, promising signal enhancement algorithms were developed and evaluated for future use in hearing instruments. To assess the algorithms' performance, five of the algorithms were selected and implemented on a common real-time hardware/software platform. Four test centers in Belgium, the Netherlands, Germany, and Switzerland perceptually evaluated the algorithms. Listening tests were performed with large numbers of normal-hearing and hearing-impaired subjects. Three perceptual measures were used: speech reception threshold (SRT), listening effort scaling, and preference rating. Tests were carried out in two types of rooms. Speech was presented in multitalker babble arriving from one or three loudspeakers. In a pseudo-diffuse noise scenario, only one algorithm, the spatially preprocessed speech-distortion-weighted multi-channel Wiener filter, provided an SRT improvement relative to the unprocessed condition. Despite the general lack of improvement in SRT, some algorithms were preferred over the unprocessed condition at all tested signal-to-noise ratios (SNRs). These effects were found across different subject groups and test sites. The listening effort scores were less consistent across test sites. For the algorithms that did not affect speech intelligibility, a reduction in listening effort was observed at 0 dB SNR.
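The SRT in such tests is typically tracked adaptively: the SNR is lowered after a correct response and raised after an incorrect one, so the track converges on the level of roughly 50% sentence intelligibility. The following is a minimal sketch of that idea only; the step size, track length, starting SNR, and scoring rule are assumptions for illustration, not the HearCom test protocol.

```python
import random

def measure_srt(present_sentence, n_trials=20, start_snr=0.0, step_db=2.0):
    """Adaptively track the SNR at which ~50% of sentences are repeated
    correctly. `present_sentence(snr)` must return True if the subject
    repeated the sentence correctly at that SNR (hypothetical callback)."""
    snr = start_snr
    track = []
    for _ in range(n_trials):
        correct = present_sentence(snr)
        track.append(snr)
        # Correct response -> make the task harder (lower SNR), and vice versa.
        snr += -step_db if correct else step_db
    # Estimate the SRT as the mean SNR over the second half of the track.
    return sum(track[n_trials // 2:]) / (n_trials - n_trials // 2)

# Usage with a purely synthetic listener whose probability of a correct
# response rises with SNR (not real subject data):
toy_listener = lambda snr: random.random() < 1 / (1 + 10 ** (-(snr + 5) / 4))
print(f"Estimated SRT: {measure_srt(toy_listener):.1f} dB SNR")
```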
An increased listening effort represents a major problem in humans with hearing impairment. Neurodiagnostic methods for an objective listening effort estimation might support hearing instrument fitting procedures. However, the cognitive neurodynamics of listening effort is far from being understood, and its neural correlates have not been identified yet. In this paper we analyze the cognitive neurodynamics of listening effort using methods of forward neurophysical modeling and time-scale electroencephalographic neurodiagnostics. In particular, we present a forward neurophysical model for auditory late responses (ALRs) as large-scale listening effort correlates. Here, endogenously driven top-down projections related to listening effort are mapped to corticothalamic feedback pathways that were previously analyzed for the neurodynamics of selective attention. We show that this model agrees well with the time-scale phase stability analysis of experimental electroencephalographic data from auditory discrimination paradigms. It is concluded that the proposed neurophysical and neuropsychological framework is appropriate for the analysis of listening effort and might help to develop objective electroencephalographic methods for its estimation in the future.
Modern hearing aid fitting could be revolutionized by the availability of objective methods for estimating listening effort. However, experimental and theoretical research dealing with this subject is still in its infancy. In this paper we present first results towards a neuropsychological and neurophysical model for the objective estimation of listening effort from electroencephalographic data. Our model is based on intended, endogenously driven top-down projections, represented by corticothalamic feedback dynamics for auditory stream selection, and their large-scale correlates in auditory evoked late responses. The predictions of the presented model are compared to experimental data obtained during different auditory tasks that required a graduated effort for their solution. The experimental data verified the model predictions. It is concluded that the proposed neuropsychological and neurophysical modeling of stream selection provides an appropriate framework for listening effort estimation. The presented preliminary results of an ongoing study are encouraging; however, further focused research is necessary in order to estimate to what extent the presented model and future extensions might support modern hearing aid fitting in practice.
An increased listening effort represents a major problem in humans with hearing impairment. Neurodiagnostic methods for an objective listening effort estimation could revolutionize auditory rehabilitation. However, the cognitive neurodynamics of listening effort is not understood, and research on its neural correlates is still in its infancy. In this paper we present a phase clustering analysis of large-scale listening effort correlates in auditory late responses (ALRs). For this we apply the complex wavelet transform as well as tight Gabor frame (TGF) operators. We show (a) that phase clustering on the unit circle can separate ALR data from auditory paradigms that require a graduated effort for their solution, and (b) that the application of TGFs for an inverse artificial phase stabilization at the alpha/theta border enlarges the endogenously driven listening effort correlates in the reconstructed time-domain waveforms. It is concluded that listening effort correlates can be extracted from ALR sequences using an instantaneous phase clustering analysis, at least by means of the applied experimental pure-tone paradigm.
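As a rough illustration of the across-trial phase clustering idea, the sketch below extracts instantaneous phases near the alpha/theta border with a complex Morlet wavelet and quantifies their clustering on the unit circle via the mean resultant vector length. The analysis frequency (~7 Hz), wavelet parameters, and synthetic data are assumptions, and a plain Morlet convolution stands in for the paper's tight Gabor frame operators.

```python
import numpy as np

def phase_clustering(epochs, fs, freq=7.0, n_cycles=5):
    """Across-trial phase clustering of ALR epochs at one frequency.

    epochs : (n_trials, n_samples) array of single-trial EEG sweeps
    fs     : sampling rate in Hz
    freq   : analysis frequency; ~7 Hz targets the alpha/theta border
             (the exact frequency here is an assumption)

    Returns the resultant vector length in [0, 1] per time sample:
    1 = all trials share the same instantaneous phase, 0 = uniform phases.
    """
    # Complex Morlet wavelet at the analysis frequency.
    sigma = n_cycles / (2 * np.pi * freq)
    t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))

    # Convolve each trial with the wavelet and keep the instantaneous phase.
    analytic = np.array([np.convolve(tr, wavelet, mode="same") for tr in epochs])
    phases = np.angle(analytic)

    # Mean resultant vector length across trials (unit-circle clustering).
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Toy usage: 40 synthetic trials containing a weakly phase-locked 7 Hz
# component in noise (placeholder data, not recorded ALRs).
fs, n = 500, 500
rng = np.random.default_rng(0)
t = np.arange(n) / fs
trials = np.sin(2 * np.pi * 7 * t) + rng.normal(0, 2, (40, n))
print(phase_clustering(trials, fs).max())
```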
In recent years, after a period of disillusion in the field of neural processing and adaptive algorithms, neural networks have been reconsidered for solving complex technical tasks. A central problem of neural network training is the presentation of input/output data with an information content appropriate to represent a given problem. The training of a neural structure will definitely lead to poor results if the relation between input and output signals shows no functional dependence but purely stochastic behaviour. This paper is concerned with the identification of the most relevant input-output data pairs for neural networks, using the concept of mutual information. A general, quantitative method is demonstrated for identifying the most relevant points from the transient measured data of a combustion engine. In this context, mutual information is employed for the problem of determining the 50 per cent energy conversion point solely from the combustion chamber pressure during one combustion cycle.
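As an illustration of this ranking step, the sketch below scores each sample of a (here synthetic) combustion chamber pressure trace by its mutual information with the 50 per cent energy conversion point, using scikit-learn's mutual_info_regression as a stand-in estimator rather than the paper's own method; the data shapes and values are placeholders.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Toy data standing in for measured combustion data: each row is the
# in-cylinder pressure trace of one cycle (sampled at 100 crank-angle
# positions), y is the 50% energy conversion point of that cycle.
# Both arrays are synthetic placeholders, not engine measurements.
rng = np.random.default_rng(1)
n_cycles, n_samples = 500, 100
pressure = rng.normal(size=(n_cycles, n_samples))
y = pressure[:, 60] * 2.0 + rng.normal(scale=0.5, size=n_cycles)  # fake target

# Mutual information of every pressure sample with the target; samples
# with high MI are the most informative candidate network inputs.
mi = mutual_info_regression(pressure, y)
best = np.argsort(mi)[::-1][:5]
print("Most informative sample indices:", best)
```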
Digital hearing aids of today allow the application of advanced signal processing strategies. In recent years a number of promising signal processing approaches have been designed and developed. However, most of these developments have been evaluated only in a limited way. Within the framework of the HearCom EU research project, a number of signal enhancement techniques have been further developed and evaluated based on a representative set of real-life recordings and physical performance measures. Different auditory profiles, representing common categories of hearing aid users, have been taken into account. A selection of five of these signal enhancement techniques (single-channel noise suppression, blind source separation, dereverberation, multi-microphone adaptive processing, feedback reduction) has been implemented on a single common hardware and software test platform, the Master Hearing Aid (MHA). These signal processing strategies have been evaluated perceptually, based on speech reception thresholds, listening effort, and preference rating, at five different test sites for a number of speech-in-noise listening scenarios. Fifty normal-hearing subjects and 100 hearing aid users, assigned to two auditory profiles, took part in this study.
The term "auditory scene analysis" generally refers to a categorization of a given acoustic situation based on the acoustic signal only, where the results determine the subsequent processing of the acoustic signal within some auditory context. According to this definition, several approaches can be differentiated in the field of hearing instrument development. They differ in the computational complexity of the particular analysis methods applied, as well as in the subsequent action. Some of these approaches have been realized in commercially available hearing instruments, others lie still ahead. A simple example of the former category is noise reduction algorithms that address different classes of noises, examples of the latter are MPEG4-like virtual arrangements of media objects. In the presentation, different approaches will be discussed in terms of potential benefit and technical realization, as well as their limitations. For approaches already realized in commercially available hearing instruments, the expected benefit will be aligned with results from clinical studies.