As humans, we gather a wide range of information about other people from watching them move. A network of parietal, premotor, and occipitotemporal regions within the human brain, termed the action observation network (AON), has been implicated in understanding others' actions by means of an automatic matching process that links observed and performed actions. Current views of the AON assume a matching process biased towards familiar actions; specifically, those performed by conspecifics and present in the observer's motor repertoire. In this study, we test how this network responds to form and motion cues when observing natural human motion compared to rigid robotic-like motion across two independent functional neuroimaging experiments. In Experiment 1, we report the surprising finding that premotor, parietal, and occipitotemporal regions respond more robustly to rigid, robot-like motion than to natural human motion. In Experiment 2, we replicate and extend this finding by demonstrating that the same pattern of results emerges whether the agent is a human or a robot, which suggests that the preferential response to robot-like motion is independent of the agent's form. These data challenge previous ideas about AON function by demonstrating that the core nodes of this network can be flexibly engaged by novel, unfamiliar actions performed by both human and non-human agents. As such, these findings suggest that the AON is sensitive to a broader range of action features beyond those that are simply familiar.
This research validated and extended the Movement Imagery Questionnaire-Revised (MIQ-R; Hall & Martin, 1997). Study 1 (N = 400) examined the MIQ-R's factor structure via multitrait-multimethod confirmatory factor analysis. The questionnaire was then modified in Study 2 (N = 370) to separately assess the ease of imaging external visual imagery and internal visual imagery, as well as kinesthetic imagery (termed the Movement Imagery Questionnaire-3; MIQ-3). Both Studies 1 and 2 found that a correlated-traits correlated-uniqueness model provided the best fit to the data, while displaying gender invariance and no significant differences in latent mean scores across gender. Study 3 (N = 97) demonstrated the MIQ-3's predictive validity, revealing the relationships between imagery ability and observational learning use. Findings highlight the method effects that occur when assessing each type of imagery ability using the same four movements, and demonstrate that better imagers report greater use of observational learning.
Spontaneous mimicry of other people's actions serves an important social function, enhancing affiliation and social interaction. This mimicry can be subtly modulated by different social contexts. We recently found behavioral evidence that direct eye gaze rapidly and specifically enhances mimicry of intransitive hand movements (Wang et al., 2011). Based on past findings linking medial prefrontal cortex (mPFC) to both eye contact and the control of mimicry, we hypothesized that mPFC might be the neural origin of this behavioral effect. The present study aimed to test this hypothesis. During functional magnetic resonance imaging (fMRI) scanning, 20 human participants performed a simple mimicry or no-mimicry task, as previously described (Wang et al., 2011), with direct gaze present on half of the trials. As predicted, fMRI results showed that performing the task activated mirror systems, while direct gaze and inhibition of the natural tendency to mimic both engaged mPFC. Critically, we found an interaction between mimicry and eye contact in mPFC, superior temporal sulcus (STS), and inferior frontal gyrus. We then used dynamic causal modeling to contrast 12 possible models of information processing in this network. Results supported a model in which eye contact controls mimicry by modulating the connection strength from mPFC to STS. This suggests that mPFC is the originator of the gaze-mimicry interaction and that it modulates sensory input to the mirror system. Thus, our results demonstrate how different components of the social brain work together to control mimicry on-line according to the social context.
A hallmark of human social interaction is the ability to consider other people's mental states, such as what they see, believe, or desire. Prior neuroimaging research has predominantly investigated the neural mechanisms involved in computing one's own or another person's perspective and largely ignored the question of perspective selection. That is, which brain regions are engaged in the process of selecting between self and other perspectives? To address this question, the current fMRI study used a behavioral paradigm that required participants to select between competing visual perspectives. We provide two main extensions to current knowledge. First, we demonstrate that brain regions within dorsolateral prefrontal and parietal cortices respond in a viewpoint-independent manner during the selection of task-relevant over task-irrelevant perspectives. More specifically, following the computation of two competing visual perspectives, common regions of frontoparietal cortex are engaged to select one's own viewpoint over another's, as well as to select another's viewpoint over one's own. Second, in the absence of conflict between the content of competing perspectives, we show reduced engagement of frontoparietal cortex when judging another's visual perspective relative to one's own. This latter finding provides the first brain-based evidence for the hypothesis that, in some situations, another person's perspective is automatically and effortlessly computed, and thus, less cognitive control is required to select it over one's own perspective. In doing so, we provide stronger evidence for the claim that we not only automatically compute what other people see but also, in some cases, compute this even before we are explicitly aware of our own perspective.
Research in social neuroscience has primarily focused on carving up cognition into distinct pieces, as a function of mental process, neural network, or social behaviour, while the need for unifying models that span multiple social phenomena has been relatively neglected. Here we present a novel framework that treats social cognition as a case of semantic cognition, which provides a neurobiologically constrained and generalizable framework with clear, testable predictions regarding sociocognitive processing in the context of both health and disease. According to this framework, social cognition relies on two principal systems of representation and control. These systems are neuroanatomically and functionally distinct, but interact to (1) enable development of foundational, conceptual-level knowledge and (2) regulate access to this information in order to generate flexible and context-appropriate social behaviour. The Social Semantics framework sheds new light on the mechanisms of social information processing by maintaining as much explanatory power as prior models of social cognition, whilst remaining simpler, by virtue of relying on fewer components that are "tuned" towards social interactions.
Humans automatically imitate other people's actions during social interactions, building rapport and social closeness in the process. Although the behavioral consequences and neural correlates of imitation have been studied extensively, little is known about the neural mechanisms that control imitative tendencies. For example, the degree to which an agent is perceived as human-like influences automatic imitation, but it is not known how perception of animacy influences brain circuits that control imitation. In the current fMRI study, we examined how the perception and belief of animacy influence the control of automatic imitation. Using an imitation-inhibition paradigm that involves suppressing the tendency to imitate an observed action, we manipulated both bottom-up (visual input) and top-down (belief) cues to animacy. Results show divergent patterns of behavioral and neural responses. Behavioral analyses show that automatic imitation is equivalent when one or both cues to animacy are present but is reduced when both are absent. By contrast, right TPJ showed sensitivity to the presence of both animacy cues. Thus, we demonstrate that right TPJ is biologically tuned to control imitative tendencies when the observed agent both looks like and is believed to be human. The results suggest that right TPJ may be involved in a specialized capacity to control automatic imitation of human agents, rather than a universal process of conflict management, which would be more consistent with generalist theories of imitative control. Evidence for specialized neural circuitry that "controls" imitation offers new insight into developmental disorders that involve atypical processing of social information, such as autism spectrum disorders.
Although robots are becoming an ever-growing presence in society, we do not hold the same expectations for robots as we do for humans, nor do we treat them the same. As such, the ability to recognize cues to human animacy is fundamental for guiding social interactions. We review literature demonstrating that cortical networks associated with person perception, action observation, and mentalizing are sensitive to human animacy information. In addition, we show that most prior research has explored stimulus properties of artificial agents (humanness of appearance or motion), with less investigation into knowledge cues (whether an agent is believed to have human or artificial origins). Therefore, currently little is known about the relationship between stimulus and knowledge cues to human animacy in terms of cognitive and brain mechanisms. Using fMRI, an elaborate belief manipulation, and human and robot avatars, we found that knowledge cues to human animacy modulate engagement of person perception and mentalizing networks, while stimulus cues to human animacy had less impact on social brain networks. These findings demonstrate that self–other similarities are not only grounded in physical features but are also shaped by prior knowledge. More broadly, as artificial agents fulfil increasingly social roles, a challenge for roboticists will be to manage the impact of preconceived beliefs while optimizing human-like design.
Automatic imitation is a cornerstone of nonverbal communication that fosters rapport between interaction partners. Recent research has suggested that stable dimensions of personality are antecedents to automatic imitation, but the empirical evidence linking imitation with personality traits is restricted to a few studies with modest sample sizes. Additionally, atypical imitation has been documented in autism spectrum disorders and schizophrenia, but the mechanisms underpinning these behavioural profiles remain unclear. Using a larger sample than prior studies (N = 243), the current study tested whether performance on a computer-based automatic imitation task could be predicted by personality traits associated with social behaviour (extraversion and agreeableness) and with disorders of social cognition (autistic-like and schizotypal traits). Further personality traits (narcissism and empathy) were assessed in a subsample of participants (N = 57). Multiple regression analyses showed that personality measures did not predict automatic imitation. In addition, using a similar analytical approach to prior studies, no differences in imitation performance emerged when only the highest and lowest 20 participants on each trait variable were compared. These data weaken support for the view that stable personality traits are antecedents to automatic imitation and that neural mechanisms thought to support automatic imitation, such as the mirror neuron system, are dysfunctional in autism spectrum disorders or schizophrenia. In sum, the impact that personality variables have on automatic imitation is less universal than initial reports suggest.