Human auditory steady-state responses (ASSRs) were recorded using stimulus rates of 78–95 Hz in normal young subjects, in elderly subjects with relatively normal hearing, and in elderly subjects with sensorineural hearing impairment. Amplitude–intensity functions calculated relative to actual sensory thresholds (sensation level, or SL) showed that amplitudes increased as stimulus intensity increased. In the hearing-impaired subjects this increase was more rapid at intensities just above threshold ("electrophysiological recruitment") than at higher intensities, where the increase was similar to that seen in normal subjects. The thresholds in dB SL for recognizing an ASSR, and the intersubject variability of these thresholds, decreased with increasing recording time and were lower in the hearing-impaired than in the normal subjects. After 9.8 minutes of recording, the average ASSR thresholds (± standard deviation) were 12.6 ± 8.7 dB SL in the normal young subjects, 12.4 ± 11.9 dB SL in the normal elderly subjects, and 3.6 ± 13.5 dB SL in the hearing-impaired subjects.
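The abstract does not specify the detection criterion used to "recognize" an ASSR, but a common approach in this literature compares spectral power at the modulation rate against the power in neighboring noise bins; longer recordings raise this ratio because the coherent response grows faster than the noise floor, which is consistent with the reported drop in thresholds over recording time. The sketch below illustrates that idea on synthetic data; the function name, bin counts, and signal parameters are illustrative assumptions, not values from the study.

```python
import numpy as np

def assr_f_ratio(eeg, fs, mod_freq, n_noise_bins=120):
    """Ratio of power at the modulation frequency to the mean power
    in neighboring 'noise' bins (a common ASSR detection statistic;
    not necessarily the criterion used in the study)."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    sig_bin = int(np.argmin(np.abs(freqs - mod_freq)))
    lo = max(sig_bin - n_noise_bins // 2, 1)  # skip the DC bin
    hi = sig_bin + n_noise_bins // 2 + 1
    noise_bins = [b for b in range(lo, hi) if b != sig_bin]
    noise_power = np.mean(spectrum[noise_bins])
    return spectrum[sig_bin] / noise_power

# Synthetic example: 40 s of unit-variance noise with a small
# 88-Hz steady-state response buried in it.
fs = 1000.0
t = np.arange(0, 40.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = 0.05 * np.sin(2 * np.pi * 88.0 * t) + rng.normal(0.0, 1.0, t.size)
f_ratio = assr_f_ratio(eeg, fs, 88.0)  # well above 1: response detected
```

Because the response adds coherently across the recording while the noise does not, doubling the recording time roughly doubles this ratio, mirroring the improvement in ASSR thresholds with longer recording reported above.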
The objective of this study was to localize the intracerebral generators of auditory steady-state responses. The stimulus was a continuous 1000-Hz tone presented to the right or left ear at 70 dB SPL. The tone was sinusoidally amplitude-modulated to a depth of 100% at 12, 39, or 88 Hz. Responses recorded from 47 electrodes on the head were transformed into the frequency domain. Brain electrical source analysis treated the real and imaginary components of the response in the frequency domain as independent samples. The latency of the source activity was estimated from the phase of the source waveform. The main source model contained a midline brainstem generator with two components (one vertical and one lateral) and cortical sources in the left and right supratemporal plane, each containing tangential and radial components. At 88 Hz, the largest activity occurred in the brainstem and subsequent cortical activity was minor. At 39 Hz, the initial brainstem component remained and significant activity also occurred in the cortical sources, with the tangential activity being larger than the radial. The 12-Hz responses were small, but suggested combined activation of both brainstem and cortical sources. Estimated latencies decreased for all source waveforms as modulation frequency increased and were shorter for the brainstem than for the cortical sources. These results suggest that the whole auditory nervous system is activated by modulated tones, with the cortex being more sensitive to slower modulation frequencies.
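The study estimated latency from the phase of the source waveforms. A standard way to do this for steady-state responses is the "apparent latency" calculation: the phase of the response at the modulation frequency is measured at several nearby modulation rates, and latency is taken as the negative slope of unwrapped phase versus frequency, divided by 2π. The sketch below demonstrates this on synthetic signals with a known 25-ms delay; the function names, frequencies, and delay are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def response_phase(signal, fs, mod_freq):
    """Phase of the steady-state response, taken from the FFT bin
    closest to the modulation frequency."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.angle(spectrum[np.argmin(np.abs(freqs - mod_freq))])

def apparent_latency(mod_freqs, phases):
    """Apparent latency from the phase-vs-frequency slope:
    tau = -dphi / (2 * pi * df)."""
    slope = np.polyfit(mod_freqs, np.unwrap(phases), 1)[0]
    return -slope / (2 * np.pi)

# Synthetic responses delayed by a fixed 25-ms latency.
fs, tau = 1000.0, 0.025
t = np.arange(0, 4.0, 1.0 / fs)
mod_freqs = np.array([36.0, 39.0, 42.0])
phases = [response_phase(np.sin(2 * np.pi * f * (t - tau)), fs, f)
          for f in mod_freqs]
tau_hat = apparent_latency(mod_freqs, phases)  # recovers ~0.025 s
```

Under this measure, a fixed transmission delay produces phase lag that grows linearly with modulation frequency, which is why latencies can be compared across the 12-, 39-, and 88-Hz conditions even though phase itself wraps around.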
Humans are better at recognizing human faces than faces of other species. However, it is unclear whether this species sensitivity can be seen at early perceptual stages of face processing and whether it involves species sensitivity for important facial features like the eyes. These questions were addressed by comparing the modulations of the N170 ERP component to faces, eyes, and eyeless faces of humans, apes, cats, and dogs, presented upright and inverted. Although all faces and isolated eyes yielded larger responses than the control object category (houses), the N170 was shorter in latency and smaller in amplitude to human than to animal faces, and larger to human than to animal eyes. Most importantly, while the classic inversion effect was found for human faces, animal faces yielded no inversion effect or an opposite inversion effect, as seen for objects, suggesting that a different neural process is involved for human faces compared to faces of other species. Thus, in addition to its general face and eye categorical sensitivity, the N170 appears particularly sensitive to the human species for both faces and eyes. The results are discussed in the context of a recent model of the N170 response involving face- and eye-sensitive neurons (Itier et al., 2007) in which the eyes play a central role in face perception. The data support the intuitive idea that eyes are what make animal head fronts look face-like and that proficiency for the human species involves visual expertise for the human eyes.