fMRI (functional magnetic resonance imaging) studies on humans have shown a cortical area, the fusiform face area, that is specialized for face processing. An important question is how faces are represented within this area. This study provides direct evidence for a representation in which individual faces are encoded by their direction (facial identity) and distance (distinctiveness) from a prototypical (mean) face. When facial geometry (head shape, hair line, internal feature size and placement) was varied, the fMRI signal increased with increasing distance from the mean face. Furthermore, adaptation of the fMRI signal showed that the same neural population responds to faces falling along single identity axes within this space.
In most cases, aphasia is caused by strokes involving the left hemisphere, with more extensive damage typically being associated with more severe aphasia. The classical model of aphasia commonly adhered to in the Western world is the Wernicke-Lichtheim model. The model has been in existence for over a century, and classification of aphasic symptomatology continues to rely on it. However, far more detailed models of speech and language localization in the brain have been formulated. In this regard, the dual stream model of cortical brain organization proposed by Hickok and Poeppel is particularly influential. Their model describes two processing routes, a dorsal stream and a ventral stream, that roughly support speech production and speech comprehension, respectively, in normal subjects. Despite the strong influence of the dual stream model in current neuropsychological research, there has been relatively limited focus on explaining aphasic symptoms in the context of this model. Given that the dual stream model represents a more nuanced picture of cortical speech and language organization, cortical damage that causes aphasic impairment should map clearly onto the dual processing streams. Here, we present a follow-up study to our previous work that used lesion data to reveal the anatomical boundaries of the dorsal and ventral streams supporting speech and language processing. Specifically, by emphasizing clinical measures, we examine the effect of cortical damage and disconnection involving the dorsal and ventral streams on aphasic impairment. The results reveal that measures of motor speech impairment mostly involve damage to the dorsal stream, whereas measures of impaired speech comprehension are more strongly associated with ventral stream involvement. Equally important, many clinical tests that target behaviours such as naming, speech repetition, or grammatical processing rely on interactions between the two streams. 
This latter finding explains why patients with seemingly disparate lesion locations often experience similar impairments on given subtests. Namely, these individuals' cortical damage, although dissimilar, affects a broad cortical network that plays a role in carrying out a given speech or language task. The current data suggest this is a more accurate characterization than attributing specific language deficits to specific lesion locations.
Language processing relies on a widespread network of brain regions. Univariate post-stroke lesion-behavior mapping is a particularly potent method to study brain-language relationships. However, it is a concern that this method may overlook structural disconnections to seemingly spared regions and may fail to adjudicate between regions that subserve different processes but share the same vascular perfusion bed. For these reasons, more refined structural brain mapping techniques may improve the accuracy of detecting brain networks supporting language. In this study, we applied a predictive multivariate framework to investigate the relationship between language deficits in human participants with chronic aphasia and the topological distribution of structural brain damage, defined as post-stroke necrosis or cortical disconnection. We analyzed lesion maps as well as structural connectome measures of whole-brain neural network integrity to predict clinically applicable language scores from the Western Aphasia Battery (WAB). Out-of-sample prediction accuracy was comparable for both types of analyses, which revealed spatially distinct, albeit overlapping, networks of cortical regions implicated in specific aspects of speech functioning. Importantly, all WAB scores could be predicted at better-than-chance level from the connections between gray-matter regions spared by the lesion. Connectome-based analysis highlighted the role of connectivity of the temporoparietal junction as a multimodal area crucial for language tasks. Our results support that connectome-based approaches are an important complement to necrotic lesion-based approaches and should be used in combination with lesion mapping to fully elucidate whether structurally damaged or structurally disconnected regions relate to aphasic impairment and its recovery.
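The predictive framework described above can be illustrated with a minimal sketch: vectorize each patient's structural connectome, then obtain out-of-sample predictions of a continuous language score through cross-validation. All data below are synthetic placeholders; the region count, the ridge-regression model, and the score construction are assumptions, not the study's actual pipeline.

```python
# Hedged sketch: predicting a continuous language score (e.g. a WAB subscore)
# from a vectorized structural connectome. Synthetic data stand in for real
# patient matrices; dimensions and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
n_patients, n_regions = 60, 40

# Symmetric connectivity matrices, one per patient (e.g. streamline counts).
conn = rng.random((n_patients, n_regions, n_regions))
conn = (conn + conn.transpose(0, 2, 1)) / 2

# Vectorize the upper triangle so each pairwise connection is one feature.
iu = np.triu_indices(n_regions, k=1)
X = conn[:, iu[0], iu[1]]

# Synthetic "language score" driven by a sparse subset of connections plus noise.
w = np.zeros(X.shape[1])
w[:25] = rng.normal(size=25)
y = X @ w + rng.normal(scale=0.5, size=n_patients)

# Out-of-sample predictions via k-fold cross-validation, mirroring the
# out-of-sample evaluation emphasized in the study design.
pred = cross_val_predict(Ridge(alpha=1.0), X, y,
                         cv=KFold(5, shuffle=True, random_state=0))
r = np.corrcoef(pred, y)[0, 1]  # prediction-outcome correlation
```

The key design point is that accuracy is always assessed on held-out patients, so the correlation `r` estimates generalization rather than in-sample fit.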
Several dual route models of human speech processing have been proposed suggesting a large-scale anatomical division between cortical regions that support motor-phonological aspects vs. lexical-semantic aspects of speech processing. However, to date, there is no complete agreement on which areas subserve each route or on the nature of the interactions across routes that enable human speech processing. Relying on an extensive behavioral and neuroimaging assessment of a large sample of stroke survivors, we took a data-driven approach, using principal components analysis of lesion-symptom mapping to identify brain regions crucial for performance on clusters of behavioral tasks without a priori separation into task types. Distinct anatomical boundaries were revealed between a dorsal frontoparietal stream and a ventral temporal-frontal stream associated with separate components. Collapsing over the tasks primarily supported by these streams, we characterize the dorsal stream as a form-to-articulation pathway and the ventral stream as a form-to-meaning pathway. This characterization reflects both the overlap between tasks supported by the two streams and the observation that phonological production tasks are preferentially supported by the dorsal stream and lexical-semantic comprehension tasks by the ventral stream. As such, our findings show a division between two processing routes that underlie human speech processing and provide an empirical foundation for studying potential computational differences that distinguish between the two routes.

Keywords: aphasia | speech production | speech comprehension | voxel-based lesion-symptom mapping | speech processing

Understanding how and where in the brain speech processing occurs has been the focus of concerted scientific endeavor for over one and a half centuries.
The most influential model of the neuroanatomical basis of speech processing was proposed by Wernicke (1) and later refined by Lichtheim (2): the Wernicke-Lichtheim (W-L) model. The W-L model includes two separate routes from a posterior auditory comprehension center to an anterior motor speech production center: a direct route that enables speech repetition and an indirect route via ideation that mediates comprehension and propositional speech. More recently, dual route processing has been recognized as a central principle in the functional organization of the brain. Ungerleider and Mishkin (3) proposed that visual perception in primates is supported by a ventral "what" stream (involving an occipital-temporal lobe route) and a dorsal "where" stream [or later, a "how" stream mediated by an occipital-parietal route (4)]. Similarly, in the auditory domain (5), Rauschecker and Tian (6) proposed a "dual stream" model to account for the identification of what was being heard and from where the sound originated (5, 6). This model, mostly derived from nonhuman primate data, distinguishes between an anterior/ventral route ("what" stream) involving connections from the left posterior superio...
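The data-driven pipeline described in the abstract above (principal components analysis over a behavioral battery, followed by region-wise lesion association) might be sketched as follows. All inputs here are synthetic, and the region-wise mean-difference statistic is a simplification of formal lesion-symptom mapping, used only to illustrate the two-step structure.

```python
# Hedged sketch of a PCA-based lesion-symptom pipeline: reduce task scores to
# components without a priori task grouping, then relate each component to
# damage in each brain region. Synthetic data; dimensions are illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_patients, n_tasks, n_regions = 80, 12, 100

scores = rng.normal(size=(n_patients, n_tasks))      # behavioral battery
lesion = rng.random((n_patients, n_regions)) < 0.15  # binary damage per region

# Step 1: extract a few components from the task battery. In the study, these
# components separated phonological-production from lexical-semantic tasks.
pca = PCA(n_components=2)
components = pca.fit_transform(scores)

# Step 2: for each region, compare component scores of damaged vs. spared
# patients (a simple mean difference stands in for the study's formal mapping).
effect = np.zeros((n_regions, 2))
for r in range(n_regions):
    dmg = lesion[:, r]
    if dmg.any() and (~dmg).any():
        effect[r] = components[dmg].mean(axis=0) - components[~dmg].mean(axis=0)
```

Regions with a strong effect on one component but not the other would delineate the anatomical boundary between the two processing streams.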
Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: What is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. The features that seemed most related to perceptions of naturalness were the density of contrast changes, the density of straight lines, the average color saturation, and the average hue diversity in the scene. We then trained a machine-learning algorithm to predict whether a scene was perceived as natural or not based on these low-level visual features, and we could do so with 81% accuracy. As such, we were able to reliably predict subjective perceptions of naturalness from objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature.
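A minimal sketch of this feature-then-classify approach is given below. It computes three of the four features named above (contrast-change density, mean saturation, hue diversity) and trains a classifier on toy images; the straight-line feature, which would require something like a Hough transform, is omitted, and the images, thresholds, and classifier choice are assumptions rather than the study's actual pipeline.

```python
# Hedged sketch: low-level visual features and a naturalness-style classifier.
# Toy synthetic "scenes" stand in for real photographs.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.linear_model import LogisticRegression

def features(img):
    """img: H x W x 3 float array in [0, 1]; returns 3 scene-level features."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    edge_density = (mag > 0.1).mean()   # density of contrast changes
    hsv = rgb_to_hsv(img)
    saturation = hsv[..., 1].mean()     # average color saturation
    hist, _ = np.histogram(hsv[..., 0], bins=16, range=(0, 1))
    p = hist / hist.sum()
    hue_diversity = -(p[p > 0] * np.log(p[p > 0])).sum()  # hue entropy
    return np.array([edge_density, saturation, hue_diversity])

rng = np.random.default_rng(2)
imgs, labels = [], []
for i in range(40):
    if i % 2 == 0:
        # Smooth achromatic gradient: low contrast density, zero saturation.
        img = np.tile(np.linspace(0, 1, 32)[:, None, None], (1, 32, 3))
        labels.append(0)
    else:
        # Colorful noise: high contrast density, high saturation and hue spread.
        img = rng.random((32, 32, 3))
        labels.append(1)
    imgs.append(img)

X = np.array([features(im) for im in imgs])
clf = LogisticRegression().fit(X, labels)
acc = clf.score(X, labels)
```

On real photographs the classes overlap far more, which is why the study's reported accuracy (81%) sits well below the ceiling this toy example reaches.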
Previous research has shown that viewing images of nature scenes can have a beneficial effect on memory, attention, and mood. In this study, we aimed to determine whether preference for natural versus man-made scenes is driven by bottom–up processing of the low-level visual features of nature. We used participants’ ratings of perceived naturalness as well as esthetic preference for 307 images with varied natural and urban content. We then quantified 10 low-level image features for each image (a combination of spatial and color properties). These features were used to predict esthetic preference in the images, as well as to decompose perceived naturalness into its predictable (modeled by the low-level visual features) and non-modeled aspects. Interactions of these separate aspects of naturalness with the time it took to make a preference judgment showed that naturalness based on low-level features related more to preference when the judgment was faster (bottom–up). On the other hand, perceived naturalness that was not modeled by low-level features related more to preference when the judgment was slower. A quadratic discriminant classification analysis showed how relevant each aspect of naturalness (modeled and non-modeled) was to predicting preference ratings, as well as the image features on their own. Finally, we compared the effect of color-related and structure-related modeled naturalness, and the remaining unmodeled naturalness, in predicting esthetic preference. In summary, bottom–up (color and spatial) properties of natural images captured by our features and the non-modeled naturalness are important to esthetic judgments of natural and man-made scenes, with each predicting unique variance.
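The decomposition described above can be sketched concretely: regress perceived naturalness on the low-level features, treat the fitted values as the "modeled" aspect and the residuals as the "non-modeled" aspect, then feed both into a quadratic discriminant classifier of preference. All data below are synthetic placeholders; only the image and feature counts (307 and 10) come from the study.

```python
# Hedged sketch: decomposing naturalness into modeled and non-modeled parts,
# then classifying binary preference with QDA. Synthetic data throughout.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_images, n_features = 307, 10  # matches the study's image and feature counts

feats = rng.normal(size=(n_images, n_features))  # low-level image features
naturalness = feats @ rng.normal(size=n_features) + rng.normal(size=n_images)

# Decompose naturalness: fitted values = modeled aspect, residuals = non-modeled.
lm = LinearRegression().fit(feats, naturalness)
modeled = lm.predict(feats)
non_modeled = naturalness - modeled

# Placeholder binary preference driven by a noisy mix of both aspects.
pref = (0.6 * modeled + 0.4 * non_modeled + rng.normal(size=n_images)) > 0

Z = np.column_stack([modeled, non_modeled])
qda = QuadraticDiscriminantAnalysis().fit(Z, pref)
acc = qda.score(Z, pref)
```

The residual term is what lets each aspect predict unique variance: by construction, `modeled` and `non_modeled` are uncorrelated in the training data.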
Chronic aphasia is a common consequence of a left-hemisphere stroke. Since the early insights by Broca and Wernicke, studying the relationship between the loci of cortical damage and patterns of language impairment has been one of the concerns of aphasiology. We utilized multivariate classification in a cross-validation framework to predict the type of chronic aphasia from the spatial pattern of brain damage. Our sample consisted of 98 patients with five types of aphasia (Broca’s, Wernicke’s, global, conduction, and anomic), classified based on scores on the Western Aphasia Battery. Binary lesion maps were obtained from structural MRI scans (obtained at least 6 months poststroke, and within 2 days of behavioural assessment); after spatial normalization, the lesions were parcellated into a disjoint set of brain areas. The proportion of damage to the brain areas was used to classify patients’ aphasia type. To create this parcellation, we relied on five brain atlases; our classifier (support vector machine) could differentiate between different kinds of aphasia using any of the five parcellations. In our sample, the best classification accuracy was obtained when using a novel parcellation that combined two previously published brain atlases, with the first atlas providing the segmentation of grey matter, and the second atlas used to segment the white matter. For each aphasia type, we computed the relative importance of different brain areas for distinguishing it from other aphasia types; our findings were consistent with previously published reports of lesion locations implicated in different types of aphasia. Overall, our results revealed that automated multivariate classification could distinguish between aphasia types based on damage to atlas-defined brain areas.
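The classification step described above might be sketched as follows: represent each patient by the proportion of damage in each atlas-defined parcel, then cross-validate a support vector machine over aphasia-type labels. Patient data here are synthetic, the parcel count is arbitrary, and the balanced label assignment is a placeholder, so cross-validated accuracy hovers at chance rather than at the study's reported level.

```python
# Hedged sketch: multivariate classification of aphasia type from proportional
# damage to atlas-defined parcels, evaluated with stratified cross-validation.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
types = ["Broca", "Wernicke", "global", "conduction", "anomic"]
n_patients, n_parcels = 98, 50  # 98 patients as in the study; parcels arbitrary

# Feature matrix: fraction of each parcel destroyed by the lesion, in [0, 1].
damage = rng.random((n_patients, n_parcels))

# Placeholder labels, assigned in a balanced round-robin so every class has
# enough members for 5-fold stratified cross-validation.
y = np.arange(n_patients) % len(types)

clf = SVC(kernel="linear")
acc = cross_val_score(clf, damage, y,
                      cv=StratifiedKFold(5, shuffle=True, random_state=0)).mean()
```

With a linear kernel, the fitted weight magnitudes per parcel give one simple way to rank regions by their importance for separating each aphasia type from the rest.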