Direct recordings in monkeys have demonstrated that neurons in frontal and parietal areas discharge during both execution and perception of actions [1-8]. Because these discharges "reflect" the perceptual aspects of actions of others onto the motor repertoire of the perceiver, these cells have been called mirror neurons. Their overlapping sensory-motor representations have been implicated in observational learning and imitation, two important forms of learning [9]. In humans, indirect measures of neural activity support the existence of sensory-motor mirroring mechanisms in homologous frontal and parietal areas [10, 11], other motor regions [12-15], and also the existence of multisensory mirroring mechanisms in nonmotor regions [16-19]. We recorded extracellular activity from 1177 cells in human medial frontal and temporal cortices while patients executed or observed hand grasping actions and facial emotional expressions. A significant proportion of neurons in the supplementary motor area, and in the hippocampus and its environs, responded to both observation and execution of these actions. A subset of these neurons demonstrated excitation during action-execution and inhibition during action-observation. These findings suggest that multiple systems in humans may be endowed with neural mechanisms of mirroring for both the integration and differentiation of perceptual and motor aspects of actions performed by self and others.
Self-recognition has been demonstrated by a select number of primate species and is often used as an index of self-awareness. Whether a specialized neural mechanism for self-face recognition exists in humans remains unclear. We used event-related fMRI to investigate brain regions selectively activated by images of one's own face. Ten right-handed normal subjects viewed digital morphs between their own face and a gender-matched familiar other, presented in a random sequence. Subjects were instructed to press a button with the right hand if the image looked like their own face, and another button if it looked like a familiar or scrambled face. Contrasting the trials in which images contained more "self" with those containing more familiar "other" revealed signal changes in the right hemisphere (RH), including the inferior parietal lobule, inferior frontal gyrus, and inferior occipital gyrus. The opposite contrast revealed voxels with higher signal intensity for images of "other" than for "self" in the medial prefrontal cortex and precuneus. Additional contrasts against baseline revealed that activity in the "self" minus "other" contrast represents signal increases compared to baseline (null events) in "self" trials, while activity in the "other" minus "self" contrast represents deactivations relative to baseline during "self" trials. Thus, a unique network involving frontoparietal structures described as part of the "mirror neuron system" in the RH underlies self-face recognition, while regions comprising the "default/resting state" network deactivate less for familiar others. We provide a model that reconciles these findings with previously published work to account for the modulations in these two networks previously implicated in social cognition. © 2004 Elsevier Inc. All rights reserved.
What are the neural correlates of insight solutions? To explore this question we asked participants to perform an anagram task while in the fMRI scanner. Previous research indicates that anagrams are unique in that they can yield both insight and search solutions in expert subjects. Using a single-trial fMRI paradigm, we utilized the anagram methodology to explore the neural correlates of insight versus search solutions. We used both reaction time measures and subjective reports to classify each trial as a search or insight solution. Data indicate that verbal insight solutions activate a distributed neural network that includes bilateral activation in the insula, the right prefrontal cortex, and the anterior cingulate. These areas are discussed with their possible role in evaluation and metacognition of insight solutions, as well as attention and monitoring during insight.
Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application.
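The train-on-one-context, test-on-another logic of MVCC can be illustrated with a minimal sketch. This is not any published pipeline: the nearest-centroid classifier, the toy "voxel patterns", and the class/context labels ("faces"/"houses", perception vs. imagery) are all invented here purely to show the structure of a cross-classification analysis. In practice one would use real single-trial fMRI patterns and a standard classifier (e.g., a linear SVM) with proper cross-validation.

```python
# Minimal MVCC sketch: fit a classifier on patterns from one cognitive
# context, then evaluate it on patterns from a different context.
# Above-chance transfer accuracy suggests a representation that
# abstracts across the two contexts. All data below are toy values.

def centroid(patterns):
    """Mean pattern across trials (each trial is a list of voxel values)."""
    n = len(patterns)
    return [sum(vals) / n for vals in zip(*patterns)]

def train(context_data):
    """Fit a nearest-centroid classifier: one centroid per class label."""
    return {label: centroid(trials) for label, trials in context_data.items()}

def classify(model, pattern):
    """Assign the label whose centroid is closest (Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(pattern, c)) ** 0.5
    return min(model, key=lambda label: dist(model[label]))

def cross_classify(train_context, test_context):
    """Train in one cognitive context, test in another; return accuracy."""
    model = train(train_context)
    trials = [(lbl, p) for lbl, ps in test_context.items() for p in ps]
    correct = sum(classify(model, p) == lbl for lbl, p in trials)
    return correct / len(trials)

# Toy 2-voxel patterns for two stimulus classes in two contexts
# (hypothetical "perception" and "imagery" runs).
perception = {"faces":  [[1.0, 0.1], [0.9, 0.2]],
              "houses": [[0.1, 1.0], [0.2, 0.9]]}
imagery    = {"faces":  [[0.8, 0.3]],
              "houses": [[0.3, 0.8]]}

print(cross_classify(perception, imagery))  # → 1.0 on this toy data
```

The key design point is the asymmetry between fitting and evaluation sets: unlike ordinary decoding, the test data come from a context the classifier never saw, so successful transfer cannot be explained by context-specific features.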
The common loon (Gavia immer) is a high‐trophic‐level, long‐lived, obligate piscivore at risk from elevated levels of Hg through biomagnification and bioaccumulation. From 1991 to 1996, feather (n = 455) and blood (n = 381) samples from adult loons were collected between June and September in five regions of North America: Alaska, northwestern United States, Upper Great Lakes, New England, and the Canadian Maritimes. Concentrations of Hg in adults ranged from 2.8 to 36.7 μg/g (fresh weight) in feathers and from 0.12 to 7.80 μg/g (wet weight) in whole blood. Blood Hg concentrations in 3‐ to 6‐week‐old juveniles ranged from 0.03 to 0.78 μg/g (wet weight) (n = 183). To better interpret exposure data, relationships between blood and feather Hg concentrations were examined among age and sex classes. Blood and feather Hg concentrations from the same individuals were significantly correlated and varied geographically (r² ranged from 0.03 to 0.48). Blood and feather Hg correlated most strongly in areas with the highest blood Hg levels, indicating a possible carryover of breeding season Hg that is depurated during the winter remigial molt. Mean blood and feather Hg concentrations in males were significantly higher than concentrations in females for each region. The mean blood Hg concentration in adults was 10 times higher than that in juveniles, and feather Hg concentrations significantly increased over 1‐ to 4‐year periods in recaptured individuals. Geographic stratification indicates a significant increasing regional trend in adult and juvenile blood Hg concentrations from west to east. This gradient resembles U.S. Environmental Protection Agency‐modeled predictions of total anthropogenic Hg deposition across the United States and was most evident across, rather than within, regions.
Within‐region blood Hg concentrations in adults and juveniles across nine sites of one region, the Upper Great Lakes, were less influenced by variations in geographic Hg deposition than by hydrology and lake chemistry. Loons breeding on low‐pH lakes in the Upper Great Lakes and in all lake types of northeastern North America are most at risk from Hg.
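The blood–feather comparison above rests on a simple paired-sample statistic: a coefficient of determination (r²) computed per region. A hedged sketch of that calculation is below; the concentration values are invented for illustration only and are not the study's data, though they mimic the reported pattern of strong correlation where blood Hg is high and weak correlation where it is low.

```python
# Per-region r^2 between paired blood (ug/g wet wt) and feather
# (ug/g fresh wt) Hg concentrations from the same individuals.
# All numbers are hypothetical examples, not measured values.

def r_squared(x, y):
    """Pearson coefficient of determination (r^2) for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# (blood, feather) pairs grouped by region; a high-exposure region
# shows a tight relationship, a low-exposure region a weak one.
regions = {
    "high-Hg region": ([0.5, 1.2, 2.4, 3.1], [4.0, 10.5, 13.0, 20.1]),
    "low-Hg region":  ([0.2, 0.3, 0.4, 0.6], [3.0, 6.5, 2.8, 5.0]),
}

for name, (blood, feather) in regions.items():
    print(name, round(r_squared(blood, feather), 2))
```

With these toy values the high-exposure region yields r² ≈ 0.92 and the low-exposure region r² ≈ 0.04, spanning roughly the 0.03–0.48 range of regional r² values reported in the abstract.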
We have previously shown that a right inferior frontal mirror neuron area for grasping responds differently to observed grasping actions embedded in contexts that suggest different intentions, such as drinking and cleaning (Iacoboni, Molnar-Szakacs, Gallese, Buccino, Mazziotta, & Rizzolatti, 2005). Information about intentions, however, may also be conveyed by the grasping action itself: for instance, people typically drink by grasping the handle of a cup with a precision grip. In this fMRI experiment, subjects watched precision grips and whole-hand prehensions embedded in a drinking or an eating context. Indeed, in the right inferior frontal mirror neuron area there was higher activity for observed precision grips in the drinking context. Signal changes in the right inferior frontal mirror neuron area were also significantly correlated with scores on the Empathic Concern subscale of the Interpersonal Reactivity Index, a measure of emotional empathy. These data suggest that human mirror neuron areas use both contextual and grasp-type information to predict the intentions of others. They also suggest that mirror neuron activity is strongly linked to social competence.
There is increasing evidence to suggest that primary sensory cortices can become active in the absence of external stimulation in their respective modalities. This occurs, for example, when stimuli processed via one sensory modality imply features characteristic of a different modality; for instance, visual stimuli that imply touch have been observed to activate the primary somatosensory cortex (SI). In the present study, we addressed the question of whether such cross-modal activations are content specific. To this end, we investigated neural activity in the primary somatosensory cortex of subjects who observed human hands engaged in the haptic exploration of different everyday objects. Using multivariate pattern analysis of functional magnetic resonance imaging data, we were able to predict, based exclusively on the activity pattern in SI, which of several objects a subject saw being explored. Along with previous studies that found similar evidence for other modalities, our results suggest that primary sensory cortices represent information relevant for their modality even when this information enters the brain via a different sensory system.
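The decoding step described above — predicting which object was viewed from the pattern of SI activity — can be sketched with a correlation-based classifier, a style commonly used in multivariate pattern analysis. This is an illustrative stand-in, not the study's pipeline: the object names, the 4-"voxel" template patterns, and the single-pattern prediction are all invented here.

```python
# Toy correlation-based MVPA decoder: a test activity pattern is
# assigned to the object whose mean (template) pattern it correlates
# with most strongly. All patterns below are invented values.

def pearson_r(x, y):
    """Pearson correlation between two equal-length patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def predict_object(templates, test_pattern):
    """Pick the object whose template correlates best with the test pattern."""
    return max(templates, key=lambda obj: pearson_r(templates[obj], test_pattern))

# Hypothetical mean SI patterns (4 "voxels") per observed object.
templates = {"cup": [1.0, 0.2, 0.8, 0.1],
             "key": [0.1, 0.9, 0.2, 1.0]}

print(predict_object(templates, [0.9, 0.3, 0.7, 0.2]))  # → cup
```

In a real analysis the templates would come from training runs and each held-out run would be decoded in turn, so that above-chance accuracy licenses the content-specificity claim made in the abstract.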
There is evidence that the right hemisphere is involved in processing self-related stimuli. Previous brain imaging research has found a network of right-lateralized brain regions that preferentially respond to seeing one's own face rather than a familiar other's. Given that the self is an abstract multimodal concept, we tested whether these brain regions would also discriminate the sound of one's own voice from a friend's voice. Participants were shown photographs of their own face and a friend's face, and also listened to recordings of their own voice and a friend's voice during fMRI scanning. Consistent with previous studies, seeing one's own face activated regions in the inferior frontal gyrus (IFG), inferior parietal lobe, and inferior occipital cortex in the right hemisphere. In addition, listening to one's own voice increased activity in the right IFG. These data suggest that the right IFG is concerned with processing self-related stimuli across multiple sensory modalities and that it may contribute to an abstract self-representation.