Objectives. Previous work has suggested that individual characteristics, including amount of hearing loss, age, and working memory ability, may affect response to hearing aid signal processing. The present study extends previous work, which used metrics to quantify cumulative signal modification under simulated conditions, to real hearing aids worn in everyday listening environments. Specifically, the goal was to determine whether individual factors such as working memory, age, and degree of hearing loss help explain how listeners respond to the signal modifications caused by signal processing in real hearing aids, worn in the listener's everyday environments over a period of time.

Design. Participants were older adults (age range 54-90 years) with symmetrical mild-to-moderate sensorineural hearing loss. We contrasted two distinct hearing aid fittings: one designated as mild signal processing and one as strong signal processing. Forty-nine older adults were enrolled in the study, and thirty-five participants had valid outcome data for both hearing aid fittings. The two fittings differed in their wide dynamic range compression (WDRC) and frequency compression settings. The order of fittings was randomly assigned for each participant. Each fitting was worn in the listener's everyday environments for approximately five weeks prior to outcome measurement. The trial was double blind, with neither the participant nor the tester aware of the specific fitting at the time of outcome testing. Baseline measures included a full audiometric evaluation as well as measures of working memory and of spectral and temporal resolution. The outcome measure was aided speech recognition in noise.

Results. The two hearing aid fittings resulted in different amounts of signal modification, with significantly less modification for the mild signal processing fitting. The effect of signal processing on speech intelligibility depended on an individual's age, working memory capacity, and degree of hearing loss. Older adults demonstrated progressively poorer speech recognition at high levels of signal modification. Working memory interacted with signal processing: individuals with lower working memory demonstrated low speech intelligibility in noise under both processing conditions, whereas individuals with higher working memory demonstrated better speech intelligibility in noise with the mild signal processing fitting. Amount of hearing loss interacted with signal processing, but the effects were very small. Individual spectral and temporal resolution did not contribute significantly to the variance in speech intelligibility scores.

Conclusions. When the consequences of a specific set of hearing aid signal processing characteristics were quantified in terms of overall signal modification, there was a relationship between participant characteristics and recognition of speech at different levels of signal modification. Because the hearing aid fittings used were constrained to specific fitting pa...
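The abstract does not specify how cumulative signal modification was quantified. As a rough illustration only, the sketch below computes an envelope-based modification measure in the spirit of the Envelope Difference Index, comparing an unprocessed reference with the hearing aid output; the function names and the 50 Hz envelope cutoff are assumptions, not details from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def amplitude_envelope(x, fs, cutoff=50.0):
    # Full-wave rectify, then low-pass filter to get a smooth amplitude envelope.
    b, a = butter(4, cutoff / (fs / 2.0))
    return filtfilt(b, a, np.abs(x))

def modification_index(reference, processed, fs):
    # Equalize RMS first, so the index reflects envelope shape, not overall gain.
    reference = reference / np.sqrt(np.mean(reference ** 2))
    processed = processed / np.sqrt(np.mean(processed ** 2))
    e1 = amplitude_envelope(reference, fs)
    e2 = amplitude_envelope(processed, fs)
    # 0 = identical envelopes; values toward 1 = heavier signal modification.
    return np.sum(np.abs(e1 - e2)) / np.sum(e1 + e2)
```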
Recent evidence has shown that nonlinguistic sounds co-occurring with spoken words may be retained in memory and affect later retrieval of the words. This sound-specificity effect shares many characteristics with the classic voice-specificity effect. In this study, we argue that the sound-specificity effect is conditional upon the context in which the word and sound coexist. Specifically, we argue that, besides co-occurrence, integrality between words and sounds is a crucial factor in the emergence of the effect. In two recognition-memory experiments, we compared the emergence of voice- and sound-specificity effects. In Experiment 1, we examined two conditions in which integrality is high: the classic voice-specificity effect (Exp. 1a) was compared with a condition in which the intensity envelope of a background sound was modulated to follow the intensity envelope of the accompanying spoken word (Exp. 1b). Results revealed a robust voice-specificity effect and, critically, a comparable sound-specificity effect: a change in the paired sound from exposure to test led to a decrease in word-recognition performance. In the second experiment, we sought to disentangle the contribution of integrality from a mere co-occurrence context effect by removing the intensity modulation. The absence of integrality led to the disappearance of the sound-specificity effect. Taken together, the results suggest that the assimilation of background sounds into memory cannot be reduced to a simple context effect. Rather, it is conditioned by the extent to which words and sounds are perceived as integral rather than as distinct auditory objects.
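As a concrete illustration of the Exp. 1b manipulation, a minimal sketch (hypothetical names, assuming NumPy and SciPy) that imposes the intensity envelope of a spoken word onto a background sound, so the two signals become co-modulated:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def impose_envelope(word, sound, fs, cutoff=30.0):
    # Trim both signals to a common length.
    n = min(len(word), len(sound))
    word, sound = word[:n], sound[:n]
    # Extract a smooth intensity envelope from the word (rectify + low-pass).
    b, a = butter(2, cutoff / (fs / 2.0))
    env = np.clip(filtfilt(b, a, np.abs(word)), 0.0, None)
    env /= env.max() + 1e-12  # normalize so modulation depth spans [0, 1]
    # The background sound now rises and falls with the word's intensity.
    return sound * env
```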
Language is one of the most important aspects of human cognition; it shapes the way we think, act, and communicate with each other. Each language has its own history, background, and form, and represents many important cultural aspects of the nation speaking it. Languages differ, and so do cultures. In this paper we analyze cultural differences between East and West in a multilingual context from a complex-networks point of view. There has been considerable work on cultural differences by psychologists and sociologists, and studies on complex networks that make use of WordNet also exist, but no previous work has used WordNets from different Eastern and Western languages as complex lexical networks to uncover differences or similarities between the cultures using those languages. Our work aims to fill this gap.
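As an illustration of treating a WordNet as a complex lexical network, a minimal sketch assuming NLTK's WordNet corpus and NetworkX (the choice of relations is ours, not the paper's); the same procedure applies to WordNets in other languages:

```python
import networkx as nx
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def wordnet_graph():
    # Nodes are synsets; edges follow hypernym and part-meronym relations.
    G = nx.Graph()
    for s in wn.all_synsets():
        for t in s.hypernyms() + s.part_meronyms():
            G.add_edge(s.name(), t.name())
    return G

G = wordnet_graph()
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("average degree:", 2 * G.number_of_edges() / G.number_of_nodes())
```

Comparing such statistics (degree distributions, clustering, path lengths) across languages is one way to look for structural differences between the corresponding lexical networks.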
Foreign-accented speech recognition is typically tested with linguistically simple materials, which offer a limited window into realistic speech processing. The present study examined the relationship between linguistic structure and talker intelligibility in several sentence-in-noise recognition experiments. Listeners transcribed simple/short and more complex/longer sentences embedded in noise. The sentences were spoken by three talkers of varying intelligibility: one native English speaker and two non-native English speakers, one of high and one of low intelligibility. The effect of linguistic structure on sentence recognition accuracy was modulated by talker intelligibility: accuracy was disadvantaged by increasing complexity only for the native and high-intelligibility foreign-accented talkers, whereas no such effect was found for the low-intelligibility foreign-accented talker. This pattern emerged across conditions: at low and high signal-to-noise ratios, under mixed and blocked stimulus presentation, and in the absence of a major cue to prosodic structure (the natural pitch contour of the sentences). Moreover, the pattern generalized to a different set of three talkers matched in intelligibility to the original talkers. Taken together, these results suggest that listeners employ qualitatively different speech-processing strategies for low- versus high-intelligibility foreign-accented talkers, with sentence-level linguistic factors emerging only for speech above a threshold of intelligibility. Findings are discussed in the context of alternative accounts.
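Embedding sentences in noise at a fixed signal-to-noise ratio is a standard step in such experiments; a minimal sketch (assuming NumPy; the function name is ours):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # Tile or trim the noise to the sentence length, then scale it so that
    # 10 * log10(P_speech / P_noise) equals the requested SNR in dB.
    noise = np.resize(noise, speech.shape)
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise
```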
Previous research indicates that listeners encode both linguistic and indexical specifications of the speech signal in memory. Recent evidence suggests that non-linguistic sounds co-occurring with spoken words are also incorporated into lexical memory. We argue that this “sound-specificity effect” might be due not so much to a word-sound association as to the different acoustic glimpses of the words that the associated sounds create. In several recognition-memory experiments, we paired spoken words with one of two car-honk sounds and varied the level of energetic masking from exposure to test. We did not observe a drop in recognition accuracy for previously heard words when the paired sound changed, as long as energetic masking was controlled. However, when we manipulated the temporal overlap between words and honks to create an energetic masking contrast, accuracy dropped. These findings suggest that listeners encode irrelevant non-speech information in memory, but only in certain contexts; calling for an expansion of the mental lexicon to include non-speech auditory information might be premature. Current work is investigating the effect in non-native listeners of English, and whether maskers that are more integral to the words, and hence more difficult to segregate, lead to a more robust effect.
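A minimal sketch of the temporal-overlap manipulation (hypothetical names, assuming NumPy): placing the honk at different offsets relative to word onset changes how much of the word it energetically masks.

```python
import numpy as np

def pair_word_and_honk(word, honk, fs, offset_s):
    # Place the honk offset_s seconds (>= 0) after word onset. Larger offsets
    # reduce temporal overlap, and hence energetic masking of the word.
    start = int(round(offset_s * fs))
    n = max(len(word), start + len(honk))
    mix = np.zeros(n)
    mix[:len(word)] += word
    mix[start:start + len(honk)] += honk
    # Fraction of the word overlapped by the honk: a rough masking proxy.
    overlap = max(0, min(len(word), start + len(honk)) - start) / len(word)
    return mix, overlap
```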