Prior work decoding linguistic meaning from imaging data has been largely limited to concrete nouns, using similar stimuli for training and testing, from a relatively small number of semantic categories. Here we present a new approach for building a brain decoding system in which words and sentences are represented as vectors in a semantic space constructed from massive text corpora. By efficiently sampling this space to select training stimuli shown to subjects, we maximize the ability to generalize to new meanings from limited imaging data. To validate this approach, we train the system on imaging data of individual concepts, and show it can decode semantic vector representations from imaging data of sentences about a wide variety of both concrete and abstract topics from two separate datasets. These decoded representations are sufficiently detailed to distinguish even semantically similar sentences, and to capture the similarity structure of meaning relationships between sentences.
Objects can be characterized according to a vast number of possible criteria (e.g. animacy, shape, color, function), but some dimensions are more useful than others for making sense of the objects around us. To identify these “core dimensions” of object representations, we developed a data-driven computational model of similarity judgments for real-world images of 1,854 objects. The model captured most explainable variance in similarity judgments and produced 49 highly reproducible and meaningful object dimensions that reflect various conceptual and perceptual properties of those objects. These dimensions predicted external categorization behavior and reflected typicality judgments of those categories. Further, humans can accurately rate objects along these dimensions, highlighting their interpretability and opening up a way to generate similarity estimates from object dimensions alone. Collectively, these results demonstrate that human similarity judgments can be captured by a fairly low-dimensional, interpretable embedding that generalizes to external behavior.
In this paper we carry out an extensive comparison of many off-the-shelf distributed semantic vector representations of words, for the purpose of making predictions about behavioural results or human annotations of data. In doing this comparison we also provide a guide for how vector similarity computations can be used to make such predictions, and introduce many available resources, both in terms of datasets and of vector representations. Finally, we discuss the shortcomings of this approach and future research directions that might address them.
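The basic recipe for such predictions can be sketched as follows. This is a generic illustration under assumed data (random vectors and made-up ratings), not the paper's code: cosine similarity between word vectors is computed for each word pair, then correlated with human relatedness ratings.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Illustrative vectors and ratings; in practice these would come from
# a pretrained embedding and a behavioural dataset.
rng = np.random.default_rng(1)
vectors = {w: rng.normal(size=300) for w in ["cat", "dog", "car", "truck"]}
pairs = [("cat", "dog"), ("car", "truck"), ("cat", "truck")]
human = [0.9, 0.85, 0.1]  # made-up relatedness ratings

predicted = [cosine(vectors[a], vectors[b]) for a, b in pairs]

# Correlate predicted similarities with human ratings.
r = np.corrcoef(predicted, human)[0, 1]
```

In published evaluations a rank correlation (e.g. Spearman's rho) is often preferred, since only the ordering of pairs is assumed to be comparable across subjects.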
Information mapping using pattern classifiers has become increasingly popular in recent years, although without a clear consensus on which classifier(s) ought to be used or how results should be tested. This paper addresses each of these questions, both analytically and through comparative analyses on five empirical datasets. We also describe how information maps in multiclass situations can provide information concerning the content of neural representations. Finally, we introduce a publicly available software toolbox designed specifically for information mapping.
Brain functional connectivity (FC) changes have been measured across seconds using fMRI, in both rest and task scenarios. Moreover, it is well accepted that task engagement alters FC, and that dynamic estimates of FC during and before task events can help predict their nature and performance. Yet, when it comes to dynamic FC (dFC) during rest, there is no consensus about its origin or significance. Some argue that rest dFC reflects fluctuations in ongoing cognition, or is a manifestation of intrinsic brain maintenance mechanisms, which could have predictive clinical value. Conversely, others have concluded that rest dFC is mostly the result of sampling variability, head motion, or fluctuating sleep states. Here, we present novel analyses suggesting that rest dFC is influenced by short periods of spontaneous cognitive-task-like processes, and that the cognitive nature of such mental processes can be inferred blindly from the data. As such, several different behaviorally relevant whole-brain FC configurations may occur during a single rest scan, even when subjects are continuously awake and display minimal motion. In addition, using low-dimensional embeddings as visualization aids, we show how FC states (commonly used to summarize and interpret resting dFC) can accurately and robustly reveal periods of externally imposed tasks; however, they may be less effective in capturing periods of distinct cognition during rest.
Highlights:
- Diverse long-range inputs engage distinct GABAergic neurons in S1
- S2 inputs recruit PV neurons, leading to feedforward inhibition of pyramidal cells in S1
- M1 inputs recruit VIP neurons, leading to disinhibition of pyramidal cells in S1
- SST neurons receive relatively weak long-range inputs regardless of input area