Human auditory steady-state responses (ASSRs) were recorded using stimulus rates of 78-95 Hz in normal young subjects, in elderly subjects with relatively normal hearing, and in elderly subjects with sensorineural hearing impairment. Amplitude-intensity functions calculated relative to actual sensory thresholds (sensation level or SL) showed that amplitudes increased as stimulus intensity increased. In the hearing-impaired subjects this increase was more rapid at intensities just above threshold ("electrophysiological recruitment") than at higher intensities, where the increase was similar to that seen in normal subjects. The thresholds in dB SL for recognizing an ASSR, and the intersubject variability of these thresholds, decreased with increasing recording time and were lower in the hearing-impaired than in the normal subjects. After 9.8 minutes of recording, the average ASSR thresholds (and standard deviations) were 12.6 +/- 8.7 dB SL in the normal young subjects, 12.4 +/- 11.9 dB SL in the normal elderly subjects, and 3.6 +/- 13.5 dB SL in the hearing-impaired subjects.
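Sensation level is simply the stimulus level expressed relative to each subject's own behavioral threshold, which is why the same physical intensity can correspond to very different dB SL values across subjects. A minimal sketch of the conversion (the function name and the example levels are illustrative, not from the study):

```python
def to_sensation_level(level_db_spl: float, behavioral_threshold_db_spl: float) -> float:
    """Convert an absolute stimulus level (dB SPL) to sensation level
    (dB SL), i.e. dB above the subject's own behavioral threshold."""
    return level_db_spl - behavioral_threshold_db_spl

# Hypothetical example: a 50 dB SPL tone presented to a subject whose
# behavioral threshold is 35 dB SPL sits at 15 dB SL.
print(to_sensation_level(50.0, 35.0))
```

Expressing amplitude-intensity functions in dB SL rather than dB SPL is what makes the "electrophysiological recruitment" comparison meaningful: growth is measured from each subject's perceptual threshold, not from an absolute reference.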
The objective of this study was to localize the intracerebral generators of auditory steady-state responses. The stimulus was a continuous 1000-Hz tone presented to the right or left ear at 70 dB SPL. The tone was sinusoidally amplitude-modulated to a depth of 100% at 12, 39, or 88 Hz. Responses recorded from 47 electrodes on the head were transformed into the frequency domain. Brain electrical source analysis treated the real and imaginary components of the response in the frequency domain as independent samples. The latency of the source activity was estimated from the phase of the source waveform. The main source model contained a midline brainstem generator with two components (one vertical and one lateral) and cortical sources in the left and right supratemporal plane, each containing tangential and radial components. At 88 Hz, the largest activity occurred in the brainstem and subsequent cortical activity was minor. At 39 Hz, the initial brainstem component remained and significant activity also occurred in the cortical sources, with the tangential activity being larger than the radial. The 12-Hz responses were small, but suggested combined activation of both brainstem and cortical sources. Estimated latencies decreased for all source waveforms as modulation frequency increased and were shorter for the brainstem than for the cortical sources. These results suggest that the whole auditory nervous system is activated by modulated tones, with the cortex being more sensitive to slower modulation frequencies.
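The core signal-processing step described here (transforming a response into the frequency domain, taking the real and imaginary parts of the bin at the modulation frequency, and converting its phase into an apparent latency) can be illustrated with synthetic data. This is only a sketch of the frequency-domain/phase idea, not the study's source analysis; the sampling rate, epoch length, noise level, and 25-ms latency are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000.0    # sampling rate in Hz (illustrative value)
fm = 39.0      # modulation frequency of interest (Hz)
t = np.arange(0, 1.0, 1 / fs)   # one 1-s epoch: 39 whole cycles of fm

# Simulated steady-state response: a 39-Hz component delayed by a
# hypothetical 25-ms latency, buried in Gaussian noise.
latency_true = 0.025
x = np.sin(2 * np.pi * fm * (t - latency_true)) + 0.5 * rng.standard_normal(t.size)

# Transform into the frequency domain; the real and imaginary parts of
# the bin at fm carry the response amplitude and phase.
spec = np.fft.rfft(x) / (t.size / 2)   # scale so a unit sinusoid -> 1
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = int(np.argmin(np.abs(freqs - fm)))
amp = np.abs(spec[k])
theta = np.angle(spec[k])              # phase relative to cosine

# For sin(2*pi*fm*(t - L)) the bin phase is -2*pi*fm*L - pi/2, so the
# latency (recoverable only modulo one modulation cycle, 1/fm) is:
latency_est = ((-theta - np.pi / 2) % (2 * np.pi)) / (2 * np.pi * fm)
print(f"amplitude ~ {amp:.2f}, latency ~ {latency_est * 1e3:.1f} ms")
```

Note the modulo: phase only determines latency up to whole modulation cycles, which is why latency estimates from steady-state phase require assumptions about the plausible physiological range.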
Humans are better at recognizing human faces than faces of other species. However, it is unclear whether this species sensitivity can be seen at early perceptual stages of face processing and whether it involves species sensitivity for important facial features like the eyes. These questions were addressed by comparing the modulations of the N170 ERP component to faces, eyes and eyeless faces of humans, apes, cats and dogs, presented upright and inverted. Although all faces and isolated eyes yielded larger responses than the control object category (houses), the N170 was shorter in latency and smaller in amplitude for human than for animal faces, and larger for human than for animal eyes. Most importantly, while the classic inversion effect was found for human faces, animal faces yielded no inversion effect or an opposite inversion effect, as seen for objects, suggesting that a different neural process is involved for human faces than for faces of other species. Thus, in addition to its general face and eye categorical sensitivity, the N170 appears particularly sensitive to the human species for both faces and eyes. The results are discussed in the context of a recent model of the N170 response involving face- and eye-sensitive neurons (Itier et al., 2007) in which the eyes play a central role in face perception. The data support the intuitive idea that eyes are what make animal heads look face-like and that proficiency for the human species involves visual expertise for the human eyes.
Multiple auditory steady-state responses were recorded using tonal stimuli that were amplitude-modulated (AM), frequency-modulated (FM) or modulated simultaneously in both amplitude and frequency (mixed modulation or MM). When MM stimuli combined 100% AM and 25% FM (12.5% above and below the carrier frequency) and the maximum frequency occurred simultaneously with the maximum amplitude, the MM response was one third larger than the simple AM response. This enhancement occurred at intensities between 30 and 50 dB SPL and at carrier frequencies between 500 and 4000 Hz. The AM and FM components of a MM stimulus generate independent responses that add together to give the MM response. Since AM responses generally occur with a slightly longer phase delay than FM responses, the largest MM response is recorded when the maximum frequency of the MM stimulus occurs just after the maximum amplitude.
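A mixed-modulation stimulus of the kind described can be synthesized by applying a 100% sinusoidal amplitude envelope and integrating a sinusoidally varying instantaneous frequency (carrier ±12.5%) so that the frequency maximum coincides with the amplitude maximum. The sketch below is an illustration of that construction under assumed parameter values (sampling rate, carrier, and modulation rate are not taken from the study):

```python
import numpy as np

fs = 32000.0   # sampling rate in Hz (illustrative)
fc = 1000.0    # carrier frequency (Hz)
fm = 85.0      # modulation rate (Hz)
m_am = 1.0     # 100% amplitude modulation depth
m_fm = 0.125   # 12.5% above and below the carrier (25% total FM)
t = np.arange(0, 1.0, 1 / fs)

# Amplitude envelope: swings between 0 and 2 at 100% depth.
envelope = 1.0 + m_am * np.sin(2 * np.pi * fm * t)

# Phase is the integral of the instantaneous frequency
# fc * (1 + m_fm * sin(2*pi*fm*t)); using the same sine for both
# modulators makes the frequency maximum coincide with the
# amplitude maximum, as in the enhanced-response condition.
phase = 2 * np.pi * fc * t - (fc * m_fm / fm) * np.cos(2 * np.pi * fm * t)

mm_stimulus = envelope * np.sin(phase)
```

Shifting the phase of one modulator relative to the other (e.g. delaying the frequency maximum slightly) is how the optimal AM-FM alignment described in the last sentence would be explored.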
Noise is usually detrimental to auditory perception. However, recent psychophysical studies have shown that low levels of broadband noise may improve signal detection. Here, we measured auditory evoked fields (AEFs) while participants listened passively to low-pitched and high-pitched tones (Experiment 1) or complex sounds that included a tuned or a mistuned component that yielded the perception of concurrent sound objects (Experiment 2). In both experiments, stimuli were embedded in low or intermediate levels of Gaussian noise or presented without background noise. For each participant, the AEFs were modeled with a pair of dipoles in the superior temporal plane, and the effects of noise were examined on the resulting source waveforms. In both experiments, the N1m was larger when the stimuli were embedded in low background noise than in the no-noise control condition. Complex sounds with a mistuned component generated an object-related negativity that was larger in the low-noise condition. The results show that low-level background noise facilitates AEFs associated with sound onset and can be beneficial for sorting out concurrent sound objects. We suggest that noise-induced increases in transient evoked responses may be mediated via efferent feedback connections between the auditory cortex and lower auditory centers.