Abstract. Over the past decade, functional Magnetic Resonance Imaging (fMRI) has emerged as a powerful new instrument to collect vast quantities of data about activity in the human brain. A typical fMRI experiment can produce a three-dimensional image related to the human subject's brain activity every half second, at a spatial resolution of a few millimeters. As in other modern empirical sciences, this new instrumentation has led to a flood of new data, and a corresponding need for new data analysis methods. We describe recent research applying machine learning methods to the problem of classifying the cognitive state of a human subject based on fMRI data observed over a single time interval. In particular, we present case studies in which we have successfully trained classifiers to distinguish cognitive states such as (1) whether the human subject is looking at a picture or a sentence, (2) whether the subject is reading an ambiguous or non-ambiguous sentence, and (3) whether the word the subject is viewing is a word describing food, people, buildings, etc. This learning problem provides an interesting case study of classifier learning from extremely high-dimensional (~10^5 features), extremely sparse (tens of training examples), noisy data. This paper summarizes the results obtained in these three case studies, as well as lessons learned about how to successfully apply machine learning methods to train classifiers in such settings.
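The learning setting described above (roughly 10^5 voxel features, only tens of training examples) is typically handled by aggressive feature selection followed by a simple, high-bias classifier. The sketch below is a minimal illustration of that recipe on synthetic data, not the authors' actual pipeline; the data dimensions, mean-shift signal, and Gaussian Naive Bayes choice are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI data: many voxel features, tens of examples.
n_train, n_test, n_feat, n_info = 40, 40, 10_000, 50

def make_data(n):
    y = rng.integers(0, 2, size=n)                 # two cognitive states
    X = rng.normal(size=(n, n_feat))               # background voxel noise
    X[:, :n_info] += 2.0 * y[:, None]              # informative voxels carry a mean shift
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

# Feature selection: keep the voxels with the largest between-class mean difference,
# computed on the training set only.
diff = np.abs(X_tr[y_tr == 1].mean(0) - X_tr[y_tr == 0].mean(0))
keep = np.argsort(diff)[-20:]

# Gaussian Naive Bayes on the selected voxels (uniform class priors assumed).
def fit(X, y):
    return {c: (X[y == c].mean(0), X[y == c].std(0) + 1e-6) for c in (0, 1)}

def predict(params, X):
    scores = []
    for c in (0, 1):
        mu, sd = params[c]
        # Per-class Gaussian log-likelihood summed over the kept voxels.
        scores.append(-0.5 * (((X - mu) / sd) ** 2 + 2 * np.log(sd)).sum(1))
    return (scores[1] > scores[0]).astype(int)

params = fit(X_tr[:, keep], y_tr)
acc = (predict(params, X_te[:, keep]) == y_te).mean()
print(f"held-out accuracy: {acc:.2f}")
```

With far fewer examples than features, selecting features inside the training set (never on held-out data) and using a strongly regularized or naive-Bayes-style model is what keeps such a classifier from simply memorizing the training runs.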
This study triangulates executive planning and visuo-spatial reasoning in the context of the Tower of London (TOL) task by using a variety of methodological approaches. These approaches include functional magnetic resonance imaging (fMRI), functional connectivity analysis, individual difference analysis, and computational modeling. A graded fMRI paradigm compared the brain activation during the solution of problems with varying path lengths: easy (1 and 2 moves), moderate (3 and 4 moves) and difficult (5 and 6 moves). There were three central findings regarding the prefrontal cortex: (1) while both the left and right prefrontal cortices were equally involved during the solution of moderate and difficult problems, the activation on the right was differentially attenuated during the solution of the easy problems; (2) the activation observed in the right prefrontal cortex was highly correlated with individual differences in working memory (measured independently by the reading span task); and (3) different patterns of functional connectivity were observed in the left and right prefrontal cortices. Results obtained from the superior parietal region also revealed left/right differences; only the left superior parietal region revealed an effect of difficulty. These fMRI results converged upon two hypotheses: (1) the right prefrontal area may be more involved in the generation of a plan, whereas the left prefrontal area may be more involved in plan execution; and (2) the right superior parietal region is more involved in attention processes while the left homologue is more of a visuo-spatial workspace. A 4CAPS computational model of the cognitive processes and brain activation in the TOL task integrated these hypothesized mechanisms, and provided a reasonably good fit to the observed behavioral and brain activation data. 
The multiple research approaches presented here converge on a deepening understanding of the combination of perceptual and conceptual processes in this type of visual problem solving.
This study examined brain activation while participants read or listened to high-imagery sentences like "The number eight when rotated 90 degrees looks like a pair of spectacles" or low-imagery sentences, and judged them as true or false. The sentence imagery manipulation affected the activation in regions (particularly, the intraparietal sulcus) that activate in other mental imagery tasks, such as mental rotation. Both the auditory and visual presentation experiments indicated activation of the intraparietal sulcus area in the high-imagery condition, suggesting a common neural substrate for language-evoked imagery that is independent of the input modality. In addition to exhibiting greater activation levels during the processing of high-imagery sentences, the left intraparietal sulcus also showed greater functional connectivity in this condition with other cortical regions, particularly language processing regions, regardless of the input modality. The comprehension of abstract, nonimaginal information in low-imagery sentences was accompanied by additional activation in regions in the left superior and middle temporal areas associated with the retrieval and processing of semantic and world knowledge. In addition to exhibiting greater activation levels during the processing of low-imagery sentences, this left temporal region also revealed greater functional connectivity in this condition with other left hemisphere language processing regions and with prefrontal regions, regardless of the input modality. The findings indicate that sentence comprehension can activate additional cortical regions that process information that is not specifically linguistic but varies with the information content of the sentence (such as visual or abstract information). In particular, the left intraparietal sulcus area appears to be centrally involved in processing the visual imagery that a sentence can evoke, while activating in synchrony with some core language processing regions.
Abstract. Although there has been great interest in the neuroanatomical basis of reading, little attention has been focused on auditory language processing. The purpose of this study was to examine the differential neuroanatomical response to the auditory processing of real words and pseudowords. Eight healthy right-handed participants performed two phoneme monitoring tasks (one with real word stimuli and one with pseudowords) during a functional magnetic resonance imaging (fMRI) scan with a 4.1 T system. Both tasks activated the inferior frontal gyrus (IFG), the posterior superior temporal gyrus (pSTG), and the inferior parietal lobe (IPL). Pseudoword processing elicited significantly more activation within the posterior cortical regions compared with real word processing. Previous reading studies have suggested that this increase is due to an increased demand on the lexical access system. The left inferior frontal gyrus, on the other hand, did not reveal a significant difference in the amount of activation as a function of stimulus type. The lack of a differential response in IFG for auditory processing supports its hypothesized involvement in grapheme-to-phoneme conversion processes. These results are consistent with those from previous neuroimaging reading studies and emphasize the utility of examining both input modalities (e.g., visual or auditory) to compose a more complete picture of the language network.