How language is encoded by neural activity in the higher-level language areas of humans is still largely unknown. We investigated whether the electrophysiological activity of Broca's area correlates with the sound of the utterances produced. During speech perception, the electric cortical activity of the auditory areas correlates with the sound envelope of the utterances. In our experiment, we compared the electrocorticogram recorded during awake neurosurgical operations in Broca's area and in the dominant temporal lobe with the sound envelope of single words versus sentences read aloud or mentally by the patients. Our results indicate that the electrocorticogram correlates with the sound envelope of the utterances, starting before any sound is produced and even in the absence of speech, when the patient is reading mentally. No correlations were found when the electrocorticogram was recorded in the superior parietal gyrus, an area not directly involved in language generation, or in Broca's area when the participants were executing a repetitive motor task with their dominant hand, which did not include any linguistic content. The distribution of suprathreshold correlations across frequencies of cortical activity varied depending on whether the sound envelope derived from words or sentences. Our results suggest that the activity of language areas is organized by sound when language is generated, before any utterance is produced or heard.

An important aspect of human language is speech production, although language may be generated independently from sound, as when one writes or thinks. However, introspection seems to suggest that our thoughts resound in our brain, much as if we were listening to an internal speech, yielding the impression that sound is inseparable from language. When human subjects listen to utterances, the neural activity in the superior temporal gyrus is modulated to track the envelope of the acoustic stimulus.
The correlation between the power envelope of the speech and the neural activity is maximal at low frequencies (2-7 Hz, theta range), corresponding to syllable rates, and becomes less precise at higher frequencies (15-150 Hz, gamma range), corresponding to phoneme rates (1). Entrainment of neural activity to the speech envelope in auditory regions has allowed recognition of the phonetic features heard during speech perception (2-5), and even the reconstruction of simple words (6). This evidence indicates that during listening, speech representation in the auditory cortex and adjacent areas of the superior temporal lobe reflects acoustic features directly related to linguistically defined phonological entities such as phonemes and syllables. The relationship of specific patterns of sound amplitude and frequency to all similar patterns experienced over phylogeny and individual ontogeny is then responsible for sound perception (7). Moreover, as with subjects listening to natural speech, spatiotemporal features of the acoustic trace representative of the sound of the listened words have also been detec...
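The envelope-tracking analysis described above can be illustrated with a minimal sketch. This is not the authors' pipeline; it is a generic example, using a synthetic amplitude-modulated carrier as a stand-in for speech and a noisy copy of its modulation as a stand-in for neural activity, of the standard approach: extract the sound envelope via the Hilbert transform, low-pass both signals to the theta range (below ~7 Hz, the syllable-rate band where correlation is reported to be maximal), and compute a Pearson correlation. The sampling rate, filter order, and noise level are arbitrary choices for the demonstration.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 1000  # Hz, hypothetical sampling rate
t = np.arange(0, 5, 1 / fs)

# Synthetic "speech": a 200 Hz carrier amplitude-modulated at ~4 Hz,
# roughly the syllable rate in the 2-7 Hz theta range
modulation = 1 + 0.8 * np.sin(2 * np.pi * 4 * t)
speech = modulation * np.sin(2 * np.pi * 200 * t)

# Step 1: sound envelope = magnitude of the analytic signal
envelope = np.abs(hilbert(speech))

# Step 2: low-pass filter to the theta range (< 7 Hz),
# isolating the slow modulations that cortical activity tracks
b, a = butter(4, 7 / (fs / 2), btype="low")
env_theta = filtfilt(b, a, envelope)

# Synthetic "neural" signal: envelope tracking plus additive noise
rng = np.random.default_rng(0)
neural = modulation + 0.5 * rng.standard_normal(len(t))
neural_theta = filtfilt(b, a, neural)

# Step 3: Pearson correlation between the two theta-band traces
r = np.corrcoef(env_theta, neural_theta)[0, 1]
print(f"envelope-neural correlation: r = {r:.2f}")
```

Because the low-pass filtering suppresses the broadband noise, the correlation recovered here is high; on real electrocorticographic data the same steps yield much smaller coefficients, and significance is typically assessed against surrogate (e.g., time-shifted) data.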