Creativity, a multifaceted construct, can be studied in various ways, for example, investigating phases of the creative process, quality of the creative product, or the impact of expertise. Previous neuroimaging studies have assessed these individually. Believing that each of these interacting features must be examined simultaneously to develop a comprehensive understanding of creative behavior, we examined poetry composition, assessing process, product, and expertise in a single experiment. Distinct activation patterns were associated with generation and revision, two major phases of the creative process. Medial prefrontal cortex (MPFC) was active during both phases, yet responses in dorsolateral prefrontal and parietal executive systems (DLPFC/IPS) were phase‐dependent, indicating that while motivation remains unchanged, cognitive control is attenuated during generation and re‐engaged during revision. Experts showed significantly stronger deactivation of DLPFC/IPS during generation, suggesting that they may more effectively suspend cognitive control. Importantly, however, similar overall patterns were observed in both groups, indicating that the same cognitive resources are available to experts and novices alike. Quality of poetry, assessed by an independent panel, was associated with divergent connectivity patterns in experts and novices, centered upon MPFC (for technical facility) and DLPFC/IPS (for innovation), suggesting a mechanism by which experts produce higher quality poetry. Crucially, each of these three key features can be understood in the context of a single neurocognitive model characterized by dynamic interactions between medial prefrontal areas regulating motivation, dorsolateral prefrontal and parietal areas regulating cognitive control, and the association of these regions with language, sensorimotor, limbic, and subcortical areas distributed throughout the brain. Hum Brain Mapp 36:3351–3372, 2015. © 2015 The Authors.
Human Brain Mapping published by Wiley Periodicals, Inc.
Perspective-taking refers to the ability to recognize another person's point of view. Crucial to the development of interpersonal relationships and prosocial behavior, perspective-taking is closely linked to human empathy, and like empathy, perspective-taking is commonly subdivided into cognitive and affective components. While the two components of empathy have been frequently compared, the differences between cognitive and affective perspective-taking have been under-investigated in the cognitive neuroscience literature to date. Here, we define cognitive perspective-taking as the ability to infer an agent's thoughts or beliefs, and affective perspective-taking as the ability to infer an agent's feelings or emotions. In this paper, we review data from functional imaging studies in healthy adults as well as behavioral and structural imaging studies in patients with behavioral variant frontotemporal dementia in order to determine if there are distinct neural correlates for cognitive and affective perspective-taking. Data suggest that there are both shared and non-shared cognitive and anatomic substrates. For example, while both types of perspective-taking engage regions such as the temporoparietal junction, precuneus, and temporal poles, only affective perspective-taking engages regions within the limbic system and basal ganglia. Differences are also observed in prefrontal cortex: while affective perspective-taking engages ventromedial prefrontal cortex, cognitive perspective-taking engages dorsomedial prefrontal cortex and dorsolateral prefrontal cortex (DLPFC). To corroborate these findings, we also examine if cognitive and affective perspective-taking share the same relationship with executive functions. 
While it is clear that affective perspective-taking requires emotional substrates that are less prominent in cognitive perspective-taking, it remains unknown to what extent executive functions (including working memory, mental set switching, and inhibitory control) may contribute to each process. Overall results indicate that cognitive perspective-taking is dependent on executive functioning (particularly mental set switching), while affective perspective-taking is less so. We conclude with a critique of the current literature, with a focus on the different outcome measures used across studies and misconceptions due to imprecise terminology, as well as recommendations for future research.
Scopolamine (hyoscine) is a muscarinic acetylcholine receptor antagonist that has traditionally been used to treat motion sickness in humans. However, studies investigating depressed and bipolar populations have found that scopolamine is also effective at reducing depression and anxiety symptoms. The potential anxiety-reducing (anxiolytic) effects of scopolamine could have great clinical implications for humans; however, rats and mice administered scopolamine showed increased anxiety in standard behavioural tests. This is in direct contrast to findings in humans, and it complicates studies that aim to elucidate the specific mechanisms of scopolamine action. The aim of this study was to assess the suitability of zebrafish as a model system for testing anxiety-modulating compounds, using scopolamine. As in humans, scopolamine acted as an anxiolytic in individual behavioural tests (novel approach test and novel tank diving test). The anxiolytic effect of scopolamine was dose dependent and biphasic, reaching maximum effect at 800 µM. Scopolamine (800 µM) also had an anxiolytic effect in a group behavioural test, as it significantly decreased the fish's tendency to shoal. These results establish zebrafish as a model organism for studying the anxiolytic effects of scopolamine, its mechanisms of action and side effects.
Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information contents were congruent, and for the other half, they were incongruent. For all subjects, stimuli in which the gestures and actions were incongruent with the speech produced more errors and longer response times than did stimuli that were congruent, but this effect was less prominent for speech-action stimuli than for speech-gesture stimuli. However, subjects focusing on visual targets were more accurate when processing actions than gestures. These results suggest that although actions may be easier to process than gestures, gestures may be more tightly tied to the processing of accompanying speech.
Background/Study Context: In a variety of collaborative circumstances, participants must adopt the perspective of a partner and establish a shared mental representation that helps mediate common understanding. This process is referred to as social coordination. Here, the authors investigate the effect of aging on social coordination and consider separately the component processes related to perspective-taking and working memory.
Methods: Twelve young adults and 14 older adults completed an experimental, language-based coordination task. Subjects were asked to describe a scene with sufficient detail so that a conversational partner could identify a target object in the context of other, competing objects that shared a variable number of features. Trials varied in the information available to the partner (perspective-taking demand) and in the number of competing objects present in the scene (working memory demand). Responses were scored according to adjective use.
Results: Social coordination performance decreased with age. Whereas young adults performed close to ceiling, older adults responded precisely in only 49.70% of trials. In analyses examining perspective-taking conditions with no competitors, older adults were consistently impaired relative to young adults; in analyses examining the number of competitors during the simplest perspective-taking condition, both older and younger adults became more impaired as the number of competitors increased.
Conclusion: The experimental data suggest that social coordination declines with age, which may affect communicative efficacy. Older adults’ tendency to provide insufficient responses suggests a limitation in perspective-taking, and the pattern of decline in common ground performance with increasing competitors suggests that this is independent of working memory decline. In sum, our results suggest that social coordination deficits in aging may be multifactorial.
For social interactions to be successful, individuals must establish shared mental representations that allow them to reach a common understanding and “get on the same page”. We refer to this process as social coordination. While examples of social coordination are ubiquitous in daily life, relatively little is known about the neuroanatomic basis of this complex behavior. This is particularly true in a language context, as previous studies have relied on overly complex paradigms to study it. Although traditional views of language processing and the recent interactive-alignment account of conversation focus on peri-Sylvian regions, our model of social coordination predicts prefrontal involvement. To test this hypothesis, we examine the neural basis of social coordination during conversational exchanges in non-aphasic patients with behavioral variant frontotemporal degeneration (bvFTD). bvFTD patients show impairments in executive function and social comportment due to disease in frontal and anterior temporal regions. To investigate social coordination in bvFTD, we developed a novel language-based task that assesses patients’ ability to convey an object’s description to a conversational partner. Experimental conditions manipulated the amount of information shared by the participant and the conversational partner, and the associated working memory demands. Our results indicate that, although patients did not have difficulty identifying the features of the objects, they did produce descriptions that included insufficient or inappropriate adjectives and thus struggled to communicate effectively. Impaired performance was related to gray matter atrophy particularly in medial prefrontal and orbitofrontal cortices. Our findings suggest an important role for non-language brain areas that belong to a large-scale neurocognitive network for social coordination.
While we may fail to recognize it, we use gestures constantly to convey and extract meaning. The variety of gestures we use on a daily basis also goes somewhat unnoticed. Some gestures are idiosyncratic, while others are more conventionalized. Some require the co-presence of speech to be interpretable, while others can stand alone. Although researchers have begun to focus on the characteristics of different gesture types, the field still lacks a consistent nomenclature system. Types of gestures overlap, subgroups are combined, and definitions vary slightly, all depending on who is doing the labeling. Of course, this makes it difficult to formally conceptualize the nature of gestural communication and to compare findings across studies conducted by different research groups. Figure 1 below illustrates the wide range of gestures that have been individually defined.

Efforts have been made to develop a more systematic method for categorizing gesture types. The simplest of these schemes may be the one McNeill [2] termed "Kendon's continuum." According to this scheme, hand movements progress in the following linear sequence:

gesticulations → speech-framed gestures → pantomimes → emblems → sign languages

Moving from left to right along the continuum, the necessity for concurrent speech disappears and the presence of language-like properties increases. At the left extreme of the spectrum, gesticulations are defined as spontaneous and idiosyncratic movements of the hands and arms that rarely occur independent of speech (in fact, these gestures are temporally synchronized with the speech they accompany ninety percent of the time). Within this category, McNeill distinguishes between iconics, metaphorics, deictics, and beats.
He explains that each gesture type performs a different function within discourse: iconic gestures refer to concrete events or features of a scene, metaphoric gestures to abstract concepts or relationships, deictic gestures to locations and orientations, and beat gestures to thematic highlights (see [2] for more information). The majority of research, including the next sections of this chapter, focuses on these subcategories of gesticulations. See Figure 1 below for definitions and examples of speech-framed gestures, pantomimes, emblems, and sign languages. Regardless of type, gesture production can be defined in three stages: preparation, stroke, and retraction. The stroke of the gesture contains the content of the message. Gestures are generally performed in front of the body; McNeill writes that "the gesture space can be visualized as a shallow disk in front of the speaker, the bottom half flattened when the speaker is seated" ([2], p. 86).

3. Competing theories

While there is a general consensus that gestures are used to communicate, the exact nature of the relationship between gesture and speech is still a matter of some controversy. David McNeill [2] was the first to propose that, at their core, gesture and speech reflect the same cognitive process: only the modality of expression differs. Others,...
The primary function of language is to communicate—that is, to make individuals reach a state of mutual understanding about a particular thought or idea. Accordingly, daily communication is truly a task of social coordination. Indeed, successful interactions require individuals to (1) track and adopt a partner’s perspective and (2) continuously shift between the numerous elements relevant to the exchange. Here, we use a referential communication task to study the contributions of perspective taking and executive function to effective communication in nonaphasic human patients with behavioral variant frontotemporal dementia (bvFTD). Similar to previous work, the task was to identify a target object, embedded among an array of competitors, for an interlocutor. Results indicate that bvFTD patients are impaired relative to control subjects in selecting the optimal, precise response. Neuropsychological testing related this performance to mental set shifting, but not to working memory or inhibition. Follow-up analyses indicated that some bvFTD patients perform equally well as control subjects, while a second, clinically matched patient group performs significantly worse. Importantly, the neuropsychological profiles of these subgroups differed only in set shifting. Finally, structural MRI analyses related patient impairment to gray matter disease in orbitofrontal, medial prefrontal, and dorsolateral prefrontal cortex, all regions previously implicated in social cognition and overlapping those related to set shifting. Complementary white matter analyses implicated uncinate fasciculus, which carries projections between orbitofrontal and temporal cortices. Together, these findings demonstrate that impaired referential communication in bvFTD is cognitively related to set shifting, and anatomically related to a social-executive network including prefrontal cortices and uncinate fasciculus.