The acoustically most similar stimuli (direct and early) elicit electrophysiological responses that do not differ significantly, whereas the acoustically most different stimulus pairs (direct and late, early and late) elicit responses that differ significantly across all response measures. These findings provide insight into the effects of the different components of the room impulse responses (RIRs) on auditory processing of speech.
This study introduces an improved method to investigate the effects of reverberation using the speech-evoked auditory brainstem response (ABR), one that more realistically captures the influence of self- and overlap-masking induced by room reverberation. The speech-evoked ABR was measured under three acoustic scenarios: anechoic, mild reverberation dominated by early reflections, and severe reverberation dominated by late reverberation. Responses were significantly weaker and had longer latencies under severe reverberation than under the anechoic and mild-reverberation conditions. Although larger responses and shorter latencies were observed with mild reverberation than in the anechoic condition, possibly due to early reflections, these differences reached significance in only one of six ABR response measures.
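The three acoustic scenarios above can be approximated by convolving an anechoic stimulus with different portions of a room impulse response. A minimal sketch, assuming a hypothetical RIR and a placeholder stimulus (both invented here for illustration, not the study's actual materials): convolving with only the direct-plus-early portion approximates mild reverberation, while convolving with the full RIR, including the late tail, approximates severe reverberation.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 16000  # sample rate in Hz (assumed)
rng = np.random.default_rng(0)

# Hypothetical RIR: a direct path, one early reflection, and a
# decaying noise-like late tail (purely illustrative values).
rir = np.zeros(fs // 2)
rir[0] = 1.0        # direct sound
rir[400] = 0.5      # an early reflection at ~25 ms
t = np.arange(800, len(rir))
rir[800:] = rng.standard_normal(len(t)) * np.exp(-t / (0.15 * fs))

# Split the RIR at a 50 ms boundary: direct+early vs. late part.
split = int(0.05 * fs)
rir_early, rir_late = rir.copy(), rir.copy()
rir_early[split:] = 0.0
rir_late[:split] = 0.0

speech = rng.standard_normal(fs)  # placeholder for an anechoic stimulus

anechoic = speech
mild = fftconvolve(speech, rir_early)  # early reflections dominate
severe = fftconvolve(speech, rir)      # full RIR including late tail
```

Because convolution is linear, the severe-reverberation stimulus equals the sum of the early-only and late-only convolutions, which makes the decomposition into RIR components explicit.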
Recent advances in machine learning have led to a surge of interest in classifying the auditory brainstem response (ABR). A search of the PubMed, Google Scholar, SpringerLink, ScienceDirect, and Scopus databases identified twelve studies that explored machine learning as a complementary, objective method to (a) help clinicians diagnose hearing impairment by discriminating between healthy and pathological ABR waveforms, (b) provide a neural marker for potential applications in hearing aid tuning, and (c) provide a biometric marker for discriminating between subjects. A direct comparison between the studies in this review is not possible, as they used different test subjects, group sizes, and stimuli, and evaluated the ABR differently. Instead, the results of these studies are presented, and their limitations as well as their potential applications are discussed. Overall, the findings suggest that ABR classification using machine learning is a promising tool for assessing patients with hearing loss, optimizing technologies for tuning hearing aids, and discriminating between subjects.
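A minimal sketch of the kind of classifier such studies describe, assuming entirely synthetic stand-in features (hypothetical wave V latency and amplitude values invented here; real studies extract features from measured ABR waveforms):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic two-feature data: columns loosely mimic wave V
# latency (ms) and amplitude (uV); class means are illustrative.
n = 100
healthy = rng.normal(loc=[5.6, 0.40], scale=[0.2, 0.05], size=(n, 2))
pathological = rng.normal(loc=[6.2, 0.25], scale=[0.3, 0.05], size=(n, 2))
X = np.vstack([healthy, pathological])
y = np.array([0] * n + [1] * n)  # 0 = healthy, 1 = pathological

# Standardize features, then fit an RBF support vector machine;
# 5-fold cross-validation estimates classification accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

The pipeline-plus-cross-validation pattern keeps the feature scaling inside each fold, which avoids leaking test-set statistics into training, a common pitfall with the small group sizes typical of ABR studies.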
This study investigates the inter-modality influence on the brainstem using a mental task (an arithmetic exercise). Frequency Following Responses were recorded in quiet and in noise, across four stimulus conditions (No Task, Easy, Medium, and Difficult). In the No Task condition, subjects were instructed to direct their attention to the presented speech vowel while performing no mental task. In the Easy, Medium, and Difficult conditions, subjects were instructed to direct their attention to the mental task while ignoring the simultaneously presented speech vowel /a/. The results suggest that top-down influences such as selective attention and working memory have no significant effect at the level of the brainstem in either listening background (quiet or noise).