Auditory cortex on the exposed supratemporal plane in four anesthetized rhesus monkeys was mapped electrophysiologically with both pure-tone (PT) and broad-band complex sounds. The mapping confirmed the existence of at least three tonotopic areas. Primary auditory cortex, AI, was then aspirated, and the remainder of the cortex on the supratemporal plane was remapped. PT-responses in the caudomedial area, CM, were abolished in all animals but one, in which they were restricted to the high-frequency range. Some CM sites were still responsive to complex stimuli. In contrast to the effects on CM, no significant changes were detectable in the rostral area, R. After mapping cortex in four additional monkeys, injections were made with different tracers into matched best-frequency regions of AI, R, and CM. Injections in AI and R led to retrograde labeling of neurons in all three subdivisions of the medial geniculate (MG) nucleus (MGv, MGd, and MGm), as well as nuclei outside MG, whereas CM injections led to only sparse labeling of neurons in a restricted zone of the lateral MGd and, possibly, MGm, in addition to labeling in non-MG sites. The combined results suggest that MGv sends direct projections in parallel to areas AI and R, which drive PT-responses in both areas. PT-responses in area CM, however, appear to be driven by input relayed serially from AI. The direct input to CM from MGd and other thalamic nuclei may thus be capable of mediating responses only to broad-band sounds.
Interest in the processing of optic flow has increased recently in both the neurophysiological and the psychophysical communities. We have designed a neural network model of the visual motion pathway in higher mammals that detects the direction of heading from optic flow. The model is a neural implementation of the subspace algorithm introduced by Heeger and Jepson (1990). We have tested the network in simulations that are closely related to psychophysical and neurophysiological experiments and show that our results are consistent with recent data from both fields. The network reproduces some key properties of human ego-motion perception. At the same time, it produces neurons that are selective for different components of ego-motion flow fields, such as expansions and rotations. These properties are reminiscent of a subclass of neurons in cortical area MSTd, the triple-component neurons. We propose that the output of such neurons could be used to generate a computational map of heading directions in or beyond MST.
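The subspace algorithm that this abstract's network implements can be sketched numerically. The idea (from Heeger and Jepson, 1990) is that, for a candidate translation direction T, every flow field consistent with T (for any scene depths and any camera rotation) lies in a linear subspace; the best heading is the candidate whose subspace leaves the smallest residual when the measured flow is projected onto it. The sketch below is a minimal NumPy illustration under standard assumptions (pinhole camera with focal length 1, the Longuet-Higgins–Prazdny instantaneous flow model); the function and variable names are our own, not from the paper, and the exhaustive search over a small candidate list stands in for the neural-map readout the model proposes.

```python
import numpy as np

def _AB(x, y):
    """Translational direction matrix A and rotational flow matrix B
    at image point (x, y), focal length 1 (instantaneous flow model)."""
    A = np.array([[-1.0, 0.0, x],
                  [0.0, -1.0, y]])
    B = np.array([[x * y, -(1.0 + x * x), y],
                  [1.0 + y * y, -x * y, -x]])
    return A, B

def subspace_residual(T, pts, v):
    """Squared norm of the part of the stacked flow vector v that lies
    outside the subspace of flows consistent with heading T
    (for arbitrary per-point inverse depths and camera rotation)."""
    N = len(pts)
    C = np.zeros((2 * N, N + 3))
    for i, (x, y) in enumerate(pts):
        A, B = _AB(x, y)
        C[2 * i:2 * i + 2, i] = A @ T   # column for point i's inverse depth
        C[2 * i:2 * i + 2, N:] = B      # shared rotation columns
    q = np.linalg.lstsq(C, v, rcond=None)[0]
    r = v - C @ q                       # component orthogonal to range(C)
    return float(r @ r)

def estimate_heading(pts, v, candidates):
    """Pick the candidate heading whose subspace best explains the flow."""
    return min(candidates, key=lambda T: subspace_residual(T, pts, v))

# Demo: synthesize noiseless flow for a known heading plus a small rotation,
# then recover the heading from a candidate list that includes the true one.
rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(20, 2))          # image sample points
T_true = np.array([0.2, 0.0, 1.0])
T_true /= np.linalg.norm(T_true)                    # heading is a unit vector
omega = np.array([0.01, -0.02, 0.005])              # small camera rotation
inv_depth = rng.uniform(0.5, 2.0, size=len(pts))    # random scene depths
v = np.zeros(2 * len(pts))
for i, (x, y) in enumerate(pts):
    A, B = _AB(x, y)
    v[2 * i:2 * i + 2] = inv_depth[i] * (A @ T_true) + B @ omega

candidates = [T_true] + [
    np.array([tx, ty, 1.0]) / np.linalg.norm([tx, ty, 1.0])
    for tx in (-0.2, 0.0, 0.4) for ty in (-0.2, 0.0, 0.2)
]
best = estimate_heading(pts, v, candidates)
```

With noiseless flow, the residual at the true heading is zero up to numerical precision, so the search recovers it exactly; the paper's contribution is showing how this minimization can be carried out by a biologically plausible network whose units resemble MSTd neurons, rather than by the explicit least-squares projection used here.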
Human speech production requires the ability to couple motor actions with their auditory consequences. Nonhuman primates might not have speech because they lack this ability. To address this question, we trained macaques to perform an auditory–motor task producing sound sequences via hand presses on a newly designed device (“monkey piano”). Catch trials were interspersed to ascertain the monkeys were listening to the sounds they produced. Functional MRI was then used to map brain activity while the animals listened attentively to the sound sequences they had learned to produce and to two control sequences, which were either completely unfamiliar or familiar through passive exposure only. All sounds activated auditory midbrain and cortex, but listening to the sequences that were learned by self-production additionally activated the putamen and the hand and arm regions of motor cortex. These results indicate that, in principle, monkeys are capable of forming internal models linking sound perception and production in motor regions of the brain, so this ability is not special to speech in humans. However, the coupling of sounds and actions in nonhuman primates (and the availability of an internal model supporting it) seems not to extend to the upper vocal tract, that is, the supralaryngeal articulators, which are key for the production of speech sounds in humans. The origin of speech may have required the evolution of a “command apparatus” similar to the control of the hand, which was crucial for the evolution of tool use.
To understand auditory cortical processing, the effective connectivity between 15 auditory cortical regions and 360 cortical regions was measured in 171 Human Connectome Project participants, and complemented with functional connectivity and diffusion tractography. 1. A hierarchy of auditory cortical processing was identified from Core regions (including A1) to Belt regions LBelt, MBelt, and 52; then to PBelt; and then to HCP A4. 2. A4 has connectivity to anterior temporal lobe TA2, and to HCP A5, which connects to dorsal-bank superior temporal sulcus (STS) regions STGa, STSda, and STSdp. These STS regions also receive visual inputs about moving faces and objects, which are combined with auditory information to help implement multimodal object identification, such as who is speaking, and what is being said. Consistent with this being a “what” ventral auditory stream, these STS regions then have effective connectivity to TPOJ1, STV, PSL, TGv, TGd, and PGi, which are language-related semantic regions connecting to Broca’s area, especially BA45. 3. A4 and A5 also have effective connectivity to MT and MST, which connect to superior parietal regions forming a dorsal auditory “where” stream involved in actions in space. Connections of PBelt, A4, and A5 with BA44 may form a language-related dorsal stream.