Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility of “reading the mind,” i.e., interpreting internally generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain [electrocorticography (ECoG)] to discriminate the vowels and consonants embedded in spoken and in imagined words, and we identified the cortical areas that held the most information for discriminating vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.
Some neurons in auditory cortex respond to recent stimulus history by adapting their response functions to track stimulus statistics directly, as might be expected. In contrast, some neurons respond to loud sounds by adjusting their response functions away from high intensities and consequently remain sensitive to softer sounds. In marmoset monkey auditory cortex, the latter type of adaptation appears to exist only in neurons tuned to stimulus intensity.
The mammalian cerebral cortex consists of multiple areas specialized for processing information for many different sensory modalities. Although the basic structure is similar for each cortical area, specialized neural connections likely mediate unique information processing requirements. Relative to primary visual (V1) and somatosensory (S1) cortices, little is known about the intrinsic connectivity of primary auditory cortex (A1). To better understand the flow of information from the thalamus to and through rat A1, we made use of a rapid, high-throughput screening method exploiting laser-induced uncaging of glutamate to construct excitatory input maps of individual neurons. We found that excitatory inputs to layer 2/3 pyramidal neurons were similar to those in V1 and S1; these cells received strong excitation primarily from layers 2-4. Both anatomical and physiological observations, however, indicate that inputs and outputs of layer 4 excitatory neurons in A1 contrast with those in V1 and S1. Layer 2/3 pyramids in A1 have substantial axonal arbors in layer 4, and photostimulation demonstrates that these pyramids can connect to layer 4 excitatory neurons. Furthermore, most or all of these layer 4 excitatory neurons project out of the local cortical circuit. Unlike S1 and V1, where feedback to layer 4 is mediated exclusively by indirect local circuits involving layer 2/3 projections to deep layers and deep feedback to layer 4, layer 4 of A1 integrates thalamic and strong layer 4 recurrent excitatory input with relatively direct feedback from layer 2/3 and provides direct cortical output.
High-gamma band (>60 Hz) power changes in cortical electrophysiology are a reliable indicator of focal, event-related cortical activity. In spite of discoveries of oscillatory subthreshold and synchronous suprathreshold activity at the cellular level, there is an increasingly popular view that high-gamma band amplitude changes recorded from cellular ensembles are the result of asynchronous firing activity that yields wideband and uniform power increases. Others have demonstrated independence of power changes in the low- and high-gamma bands, but to date, no studies have shown evidence of any such independence above 60 Hz. Based on non-uniformities in time-frequency analyses of electrocorticographic (ECoG) signals, we hypothesized that induced high-gamma band (60–500 Hz) power changes are more heterogeneous than currently understood. Using single-word repetition tasks in six human subjects, we showed that functional responsiveness of different ECoG high-gamma sub-bands can discriminate cognitive task (e.g., hearing, reading, speaking) and cortical location. Power changes in these sub-bands of the high-gamma range are consistently present within single trials and have statistically different time courses within the trial structure. Moreover, when consolidated across all subjects within three task-relevant anatomic regions (sensorimotor cortex, Broca's area, and superior temporal gyrus), these behavior- and location-dependent power changes evidenced non-uniform trends across the population. Taken together, the independence and non-uniformity of power changes across a broad range of frequencies suggest that a new approach to evaluating high-gamma band cortical activity is necessary. These findings show that in addition to time and location, frequency is another fundamental dimension of high-gamma dynamics.
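The kind of sub-band analysis this abstract describes can be illustrated with a minimal sketch. This is not the authors' pipeline; the sampling rate, band edges, and the Butterworth-filter/Hilbert-envelope choices are illustrative assumptions. The idea is simply that a broadband signal is split into several high-gamma sub-bands whose power envelopes can then be compared independently:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 2000  # sampling rate in Hz (illustrative)

def subband_power(signal, bands, fs):
    """Return per-sub-band instantaneous power envelopes for a 1-D signal.

    bands: list of (low, high) edges in Hz.
    """
    envelopes = {}
    for low, high in bands:
        # 4th-order Butterworth band-pass, applied zero-phase
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, signal)
        # Squared magnitude of the analytic signal = instantaneous power
        envelopes[(low, high)] = np.abs(hilbert(filtered)) ** 2
    return envelopes

# Example: a synthetic signal with energy at 80 Hz and 200 Hz
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 80 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
bands = [(60, 100), (100, 160), (160, 240)]
env = subband_power(x, bands, fs)
```

In this toy example the (60, 100) and (160, 240) Hz envelopes carry most of the power while the intermediate band stays near zero, mirroring the paper's point that sub-bands within the high-gamma range can behave independently.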
Even simple sensory stimuli evoke neural responses that are dynamic and complex. Are these temporally patterned neural activities important for controlling behavioral output? Here, we investigated this issue. Our results reveal that in the insect antennal lobe, owing to circuit interactions, distinct neural ensembles are activated during and immediately following the termination of every odorant presentation. These ON and OFF response patterns remain non-overlapping even when the stimulus intensity or identity is changed. In addition, we find that ON and OFF ensemble neural activities differ in their ability to recruit recurrent inhibition and entrain field-potential oscillations and, more importantly, in their relevance to behaviour (initiating versus resetting conditioned responses). Notably, we find that a strikingly similar strategy is also used for encoding sound onsets and offsets in the marmoset auditory cortex. In sum, our results suggest a general scheme in which recurrent inhibition is associated with stimulus ‘recognition’ and ‘derecognition’.
Objectives: Pure-tone audiometry has been a staple of hearing assessments for decades. Many different procedures have been proposed for measuring thresholds with pure tones by systematically manipulating intensity one frequency at a time until a discrete threshold function is determined. The authors have developed a novel nonparametric approach for estimating a continuous threshold audiogram using Bayesian estimation and machine learning classification. The objective of this study is to assess the accuracy and reliability of this new method relative to a commonly used threshold measurement technique. Design: The authors performed air-conduction pure-tone audiometry on 21 participants between the ages of 18 and 90 years with varying degrees of hearing ability. Two repetitions of automated machine learning audiogram estimation and one repetition of conventional modified Hughson-Westlake ascending-descending audiogram estimation were acquired by an audiologist. The estimated hearing thresholds of the two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, and 8 kHz). Results: The two threshold estimation methods delivered very similar estimates at standard audiogram frequencies. Specifically, the mean absolute difference between estimates was 4.16 ± 3.76 dB HL. The mean absolute difference between repeated measurements of the new machine learning procedure was 4.51 ± 4.45 dB HL. These values compare favorably with those of other threshold audiogram estimation procedures. Furthermore, the machine learning method generated threshold estimates from significantly fewer samples than the modified Hughson-Westlake procedure while returning a continuous threshold estimate as a function of frequency. Conclusions: The new machine learning audiogram estimation technique produces continuous threshold audiogram estimates accurately, reliably, and efficiently, making it a strong candidate for widespread application in clinical and research audiometry.
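As a rough illustration of how Bayesian estimation with a machine-learning classifier can yield a continuous threshold audiogram, the sketch below fits a Gaussian process classifier to simulated (frequency, intensity) detection data and reads off the level at which the detection probability crosses 50% at any frequency. The simulated listener, kernel choice, and probe grid are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical listener whose true threshold rises with frequency (illustrative)
def true_threshold(freq_khz):
    return 10 + 15 * np.log2(freq_khz + 1)  # dB HL

# Sample (frequency, intensity) probe points and binary detection responses
freqs = rng.uniform(0.25, 8, 200)   # kHz
levels = rng.uniform(0, 60, 200)    # dB HL
heard = (levels > true_threshold(freqs)).astype(int)

# Model P(heard | log-frequency, level) with a Gaussian process classifier
X = np.column_stack([np.log2(freqs), levels])
gpc = GaussianProcessClassifier(kernel=1.0 * RBF([1.0, 10.0]))
gpc.fit(X, heard)

# Continuous threshold estimate: lowest level with P(heard) >= 0.5
def estimate_threshold(freq_khz, grid=np.linspace(0, 60, 121)):
    probe = np.column_stack([np.full_like(grid, np.log2(freq_khz)), grid])
    p = gpc.predict_proba(probe)[:, 1]
    return grid[np.argmax(p >= 0.5)]
```

Because the classifier interpolates across frequency, `estimate_threshold` can be queried at any frequency, not just the handful of standard audiogram frequencies, which is the sense in which such a method returns a continuous threshold estimate from comparatively few samples.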
The acoustic features useful for converting auditory information into perceived objects are poorly understood. Although auditory cortex neurons have been described as being narrowly tuned and preferentially responsive to narrowband signals, naturally occurring sounds are generally wideband with unique spectral energy profiles. Through the use of parametric wideband acoustic stimuli, we found that such neurons in awake marmoset monkeys respond vigorously to wideband sounds having complex spectral shapes, preferring stimuli of either high or low spectral contrast. Low contrast-preferring neurons cannot be studied thoroughly with narrowband stimuli and have not been previously described. These findings indicate that spectral contrast reflects an important stimulus decomposition in auditory cortex and may contribute to the recognition of acoustic objects.
Several scientists have proposed different models for cortical processing of speech. Classically, the regions participating in language were thought to be modular, with a linear sequence of activations. More recently, theoretical models have posited a more hierarchical and distributed interaction of anatomic areas for the various stages of speech processing. Traditional imaging techniques can define only the location or time of cortical activation, which impedes the further evaluation and refinement of these models. In this study, we take advantage of recordings from the surface of the brain [electrocorticography (ECoG)], which can accurately detect the location and timing of cortical activations, to study the time course of ECoG high-gamma (HG) modulations during an overt and covert word-repetition task for different cortical areas. For overt word production, our results show substantial perisylvian cortical activations early in the perceptual phase of the task that were maintained through word articulation. This broad activation, however, was attenuated during the expressive phase of covert word repetition. Across the different repetition tasks, cortical sites within the perisylvian region were engaged to different degrees depending on which stimulus was provided (an auditory or a visual cue) and on whether the word was to be spoken or imagined. Taken together, the data support current models of speech that have previously been described with functional imaging. Moreover, this study demonstrates that the broad perisylvian speech network activates early and maintains suprathreshold activation throughout the word-repetition task, modulated by the demands of the different conditions.