Pure-tone audiometry remains the primary measure for characterizing individual hearing loss and the basis for hearing-aid fitting. However, the perceptual consequences of hearing loss are typically associated not only with a loss of sensitivity but also with a loss of clarity that is not captured by the audiogram. A detailed characterization of a hearing loss may be complex and needs to be simplified to efficiently explore the specific compensation needs of the individual listener. Here, it is hypothesized that any listener's hearing profile can be characterized along two dimensions of distortion: Type I and Type II. While Type I can be linked to factors affecting audibility, Type II reflects non-audibility-related distortions. To test this hypothesis, the individual performance data from two previous studies were reanalyzed using an unsupervised-learning technique to identify extreme patterns in the data, thus forming the basis for different auditory profiles. Next, a decision tree was derived to classify the listeners into one of the profiles. The analysis provides evidence for the existence of four profiles in the data. The most significant predictors for profile identification were related to binaural processing, auditory nonlinearity, and speech-in-noise perception. This approach could be valuable for analyzing other data sets to select the most relevant tests for auditory profiling and propose more efficient hearing-deficit compensation strategies.
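The two-dimensional distortion space with four profiles can be illustrated as a minimal decision-tree-style classifier. This is a hedged sketch only: the function name, the normalized 0-to-1 distortion scores, the single cutoff, and the profile labels A to D are hypothetical placeholders, not the actual decision tree or thresholds derived in the study.

```python
# Illustrative sketch only: the cutoff value, score normalization, and
# profile labels below are hypothetical, not taken from the published study.

def classify_profile(distortion_type_i: float,
                     distortion_type_ii: float,
                     cutoff: float = 0.5) -> str:
    """Assign a listener to one of four hypothetical auditory profiles
    from two normalized (0-1) distortion scores, mimicking a decision
    tree over a two-dimensional distortion space."""
    high_i = distortion_type_i >= cutoff    # audibility-related distortion
    high_ii = distortion_type_ii >= cutoff  # non-audibility-related distortion
    if not high_i and not high_ii:
        return "Profile A"  # low degrees of both distortion types
    if high_i and not high_ii:
        return "Profile B"  # mainly Type-I (audibility-related) deficits
    if not high_i and high_ii:
        return "Profile C"  # mainly Type-II (non-audibility) deficits
    return "Profile D"      # high degrees of both distortion types
```

In practice the published decision tree branches on measured predictors (e.g., binaural processing, nonlinearity, and speech-in-noise scores) rather than on pre-computed distortion scores; the sketch only conveys the four-quadrant logic.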
The sources and consequences of a sensorineural hearing loss are diverse. While several approaches have aimed at disentangling the physiological and perceptual consequences of different etiologies, hearing deficit characterization and rehabilitation have been dominated by the results from pure-tone audiometry. Here, we present a novel approach based on data-driven profiling of perceptual auditory deficits that attempts to represent auditory phenomena that are usually hidden by, or entangled with, audibility loss. We hypothesize that the hearing deficits of a given listener, both at hearing threshold and at suprathreshold sound levels, result from two independent types of “auditory distortions.” In this two-dimensional space, four distinct “auditory profiles” can be identified. To test this hypothesis, we gathered a data set consisting of a heterogeneous group of listeners who were evaluated using measures of speech intelligibility, loudness perception, binaural processing abilities, and spectrotemporal resolution. The subsequent analysis revealed that distortion type-I was associated with elevated hearing thresholds at high frequencies and reduced temporal masking release and was significantly correlated with elevated speech reception thresholds in noise. Distortion type-II was associated with low-frequency hearing loss and abnormally steep loudness functions. The auditory profiles represent four robust subpopulations of hearing-impaired listeners that exhibit different degrees of perceptual distortions. The four auditory profiles may provide a valuable basis for improved hearing rehabilitation, for example, through profile-based hearing-aid fitting.
Data-driven profiling can uncover complex hidden structures in a dataset and has been used as a diagnostic tool in various fields. In audiology, the clinical characterization of hearing deficits for hearing-aid fitting is typically based on the pure-tone audiogram only. Implicitly, this relies on the assumption that the audiogram can predict a listener's supra-threshold hearing abilities. Sanchez-Lopez et al. [Trends in Hearing, vol. 22 (2018)] hypothesized that the hearing deficits of a given listener, both at hearing threshold and at supra-threshold sound levels, result from two independent types of "auditory distortions". The authors performed a data-driven analysis of two large datasets with results from numerous tests, which led to the identification of four distinct "auditory profiles". However, the definition of the two types of distortion was challenged by differences between the two datasets in terms of the selected tests and the type of listeners included. Here, a new dataset was generated with the aim of overcoming those limitations. A heterogeneous group of listeners (N = 75) was tested using measures of speech intelligibility, loudness perception, binaural processing abilities, and spectro-temporal resolution. The subsequent data analysis allowed a refinement of the auditory profiles proposed by Sanchez-Lopez et al. (2018). In addition, a robust iterative data-driven method is proposed here to reduce the influence of individual data points on the definition of the auditory profiles. The updated auditory profiles may provide a useful basis for improved hearing rehabilitation, e.g., through profile-based hearing-aid fitting.
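The idea of a robust iterative method that reduces the influence of individual data points can be sketched as repeated subsampling of the listener pool, re-estimating the extreme patterns on each subsample, and averaging. This is a hedged illustration under stated assumptions: the corner-finding rule (nearest listener to each corner of a normalized two-dimensional distortion space) and all parameter values stand in for the actual unsupervised method used in the study.

```python
import numpy as np

# Hedged sketch of an iterative subsampling scheme in the spirit of the
# "robust data-driven method" described above. The nearest-to-corner rule
# is a hypothetical stand-in for the study's actual extreme-pattern method.

def robust_extremes(scores: np.ndarray, n_iter: int = 200,
                    subsample: float = 0.8, seed: int = 0) -> np.ndarray:
    """scores: (n_listeners, 2) array of normalized distortion scores in [0, 1].
    Returns a (4, 2) array of extreme patterns, one per corner of the
    distortion space, averaged over random subsamples of the listeners."""
    rng = np.random.default_rng(seed)
    corners = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
    n = scores.shape[0]
    k = max(4, int(subsample * n))  # listeners retained per iteration
    acc = np.zeros((4, 2))
    for _ in range(n_iter):
        idx = rng.choice(n, size=k, replace=False)  # random subsample
        sub = scores[idx]
        for c in range(4):
            # pick the subsampled listener closest to this corner
            d = np.linalg.norm(sub - corners[c], axis=1)
            acc[c] += sub[np.argmin(d)]
    return acc / n_iter  # averaging damps the pull of any single listener
```

Because each extreme pattern is an average over many subsamples, no single listener can dominate its position, which is the point of the iterative scheme.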