Background: Hearing loss is becoming a serious social and health problem. Its prevalence in the elderly is epidemic, and the risk of developing hearing loss is also growing among younger people. If left untreated, hearing loss can contribute to the development of neurodegenerative diseases, including dementia. Despite recent advances in hearing aid (HA) and cochlear implant (CI) technologies, hearing-impaired users still encounter significant practical and social challenges, with or without aids. In particular, they all struggle to understand speech in challenging acoustic environments, especially in the presence of a competing speaker. Objectives: In the current proof-of-concept study we tested whether multisensory stimulation, pairing audition with a minimal-size touch device, would improve the intelligibility of speech in noise. Methods: To this aim we developed an audio-to-tactile sensory substitution device (SSD) transforming low-frequency speech signals into tactile vibrations delivered to two fingertips. Based on the inverse effectiveness law (multisensory enhancement is strongest when the signal-to-noise ratio within each individual sense is lowest), we embedded non-native-language stimuli in speech-like noise and paired them with a low-frequency input conveyed through touch. Results: We found an immediate and robust improvement in speech recognition (i.e., in the signal-to-noise ratio) in the multisensory condition without any training, at the group level as well as in every participant. The group-level improvement of 6 dB is substantial, considering that an increase of 10 dB represents a doubling of perceived loudness. Conclusions: These results are especially relevant when compared with previous SSD studies, which showed behavioral effects only after demanding cognitive training. We discuss the implications of our results for the development of SSDs and of specific rehabilitation programs for the hearing impaired, with or without HAs or CIs.
We also discuss the potential application of such a setup for sense augmentation, for example when learning a new language.
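The core signal path of an audio-to-tactile SSD as described above (extract the low-frequency content of speech, then use it to drive fingertip vibration actuators) can be sketched in a few lines. This is an illustrative reconstruction only, not the authors' implementation: the single-pole filter design, the 300 Hz cutoff, and the amplitude mapping are all assumptions.

```python
import math

def lowpass(samples, fs, cutoff_hz):
    """Single-pole IIR low-pass filter: keeps the low-frequency
    content of a speech signal (illustrative; the study's actual
    filter design and cutoff are not specified in the abstract)."""
    # Smoothing coefficient derived from the desired cutoff frequency.
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)   # y[n] = (1-alpha)*y[n-1] + alpha*x[n]
        out.append(y)
    return out

def to_vibration_amplitude(samples, fs, cutoff_hz=300.0):
    """Map the low-passed speech signal to a 0..1 drive level
    for a fingertip vibration actuator (hypothetical mapping)."""
    low = lowpass(samples, fs, cutoff_hz)
    peak = max(abs(v) for v in low) or 1.0
    return [abs(v) / peak for v in low]
```

With a 300 Hz cutoff, a 100 Hz component of the input passes nearly unattenuated while a 2 kHz component is strongly suppressed, so the actuator drive tracks the speech fundamental and low harmonics rather than the full spectrum.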
The aim of the study was to evaluate mental distress and health-related quality of life in patients with bilateral partial deafness (high-frequency sensorineural hearing loss) before cochlear implantation, with respect to their audiological performance and the time of onset of the hearing impairment. Thirty-one patients and 31 normal-hearing individuals were administered the Beck Depression Inventory (BDI), the State-Trait Anxiety Inventory (STAI) and the World Health Organization Quality of Life-BREF questionnaire (WHOQOL-BREF). Patients also completed the Nijmegen Cochlear Implant Questionnaire (NCIQ), a tool for evaluating quality of life related to hearing loss. Patients showed increased depressive and anxiety symptoms, as well as decreased health-related quality of life (psychological health, physical health), in comparison with their healthy counterparts (t tests, p < 0.05). Furthermore, a general linear model showed that patients with a prelingual onset of hearing loss rated their social interactions and activity (NCIQ) higher than individuals with postlingual partial deafness (p < 0.05). The study failed to show any effect of collateral tinnitus. Patients not using hearing aids had better audiological performance and, correspondingly, better self-rated sound perception and speech production as measured with the NCIQ. There was no effect of hearing aid use on mental distress. Additional statistically significant correlations in patients included those between a steeper-slope hearing loss configuration (mean of the pure-tone thresholds at 1 and 2 kHz minus the threshold at 0.5 kHz) and better audiometric speech detection, between audiometric thresholds and subjectively rated sound perception (NCIQ), and between left-ear audiometric word recognition scores and the subjectively perceived ability to recognize advanced sounds (NCIQ).
In addition, a longer duration of postlingual deafness and a younger age at onset were both related to worse speech detection thresholds. The results provide evidence that successful rehabilitation in patients with partial deafness may have to go beyond standard speech therapy. Enhancing the regular diagnostic assessment with additional psychological tools is highly recommended. Further investigation is required into the role of functional residual hearing, hearing aid use and tinnitus in relation to future outcomes of cochlear implantation.
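The hearing-loss slope measure used in the correlations above can be written out explicitly. The function below is a hypothetical illustration of that arithmetic (thresholds in dB HL; a larger value means a steeper high-frequency drop in the audiogram):

```python
def slope_hl(threshold_500, threshold_1k, threshold_2k):
    """Audiogram slope as defined in the study: the mean of the
    pure-tone thresholds at 1 and 2 kHz (dB HL) minus the
    threshold at 0.5 kHz. Function name is illustrative."""
    return (threshold_1k + threshold_2k) / 2.0 - threshold_500
```

For example, a typical ski-slope audiogram with thresholds of 20 dB HL at 0.5 kHz, 60 dB HL at 1 kHz and 80 dB HL at 2 kHz yields a slope of 50 dB, whereas a flat audiogram yields 0.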
Understanding speech in background noise is challenging, and wearing face masks, as imposed during the COVID-19 pandemic, makes it even harder. We developed a multisensory setup including a sensory substitution device (SSD) that can deliver speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers in understanding distorted speech in noise. After a short session (30–45 min) of repeating sentences, with or without concurrent matching vibrations, we found a comparable mean group improvement of 14–16 dB in Speech Reception Threshold (SRT) in two test conditions: when participants repeated sentences from hearing alone, and when matching vibrations on the fingertips were also present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of perceived loudness. The number of sentence repetitions needed to complete either type of training was comparable. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (i.e., harder) than for the auditory training (23.9 ± 11.8), which indicates a potential facilitating effect of the added vibrations. In addition, both before and after training, most of the participants (70–80%) showed better speech-in-noise understanding (by 4–6 dB on average) when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also best in both groups (SRT ~ 2 dB). The least significant effect of both training types was found in the third test condition, i.e., when participants repeated sentences accompanied by non-matching tactile vibrations; performance in this condition was also poorest after training. The results indicate that both types of training may remove some of the difficulty of sound perception, enabling better use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings for basic science. In particular, we show that even in adulthood, i.e., long after the classical "critical periods" of development have passed, a new pairing between a certain computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as for healthy individuals in suboptimal acoustic situations.
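The "10 dB corresponds to a doubling of perceived loudness" rule of thumb used above to contextualize the SRT gains is the sone-scale approximation, under which perceived loudness grows as 2^(ΔL/10). The small helper below (an illustration, not part of the study) makes the reported effect sizes concrete:

```python
def loudness_ratio(delta_db):
    """Approximate perceived-loudness ratio for a level change of
    delta_db decibels, using the sone-scale rule of thumb that
    +10 dB doubles perceived loudness: ratio = 2 ** (delta_db / 10)."""
    return 2.0 ** (delta_db / 10.0)

# The ~6 dB multisensory benefit found without training corresponds
# to roughly a 1.5x change on this perceptual scale, and the 14-16 dB
# training improvement to roughly a 2.6-3x change.
```

This is only a first-order approximation (the underlying loudness model is level- and frequency-dependent), but it conveys why the reported dB differences are perceptually meaningful.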