The human brain is specialized for face processing, yet we sometimes perceive illusory faces in objects. It is unknown whether these natural errors of face detection originate from a rapid process based on visual features or from a slower, cognitive re-interpretation. Here we use a multifaceted approach to understand both the spatial distribution and temporal dynamics of illusory face representation in the brain, combining functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) data with model-based analysis. We find that the representation of illusory faces is confined to occipitotemporal face-selective visual cortex. The temporal dynamics reveal a striking evolution in how illusory faces are represented relative to human faces and matched objects. Illusory faces are initially represented more similarly to real faces than matched objects are, but within ~250 ms the representation transforms, and they become equivalent to ordinary objects. This is consistent with the initial recruitment of a broadly tuned face-detection mechanism that privileges sensitivity over selectivity.
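As an illustration of the model-based approach described above, the sketch below shows a time-resolved representational similarity analysis (RSA) in Python. This is a minimal sketch, not the authors' pipeline: `meg_data`, `labels`, and the model RDMs are hypothetical placeholders, with MEG epochs assumed to be shaped (n_trials, n_sensors, n_times).

```python
# A minimal, illustrative sketch of time-resolved RSA. All variable names
# (`meg_data`, `labels`, `model_rdm`) are assumptions, not published code.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def condition_means(patterns, labels):
    """Average trials within each condition -> (n_conditions, n_sensors)."""
    conds = np.unique(labels)
    return np.stack([patterns[labels == c].mean(axis=0) for c in conds])

def timecourse_rsa(meg_data, labels, model_rdm):
    """Correlate the neural RDM with a model RDM at every time point.

    meg_data : (n_trials, n_sensors, n_times) epoched MEG data
    model_rdm : condensed dissimilarity vector over the same conditions
    """
    n_times = meg_data.shape[-1]
    rho = np.empty(n_times)
    for t in range(n_times):
        means = condition_means(meg_data[..., t], labels)
        neural_rdm = pdist(means, metric="correlation")  # pairwise dissimilarity
        rho[t], _ = spearmanr(neural_rdm, model_rdm)
    return rho

# For example, a "face model" RDM that groups real and illusory faces together,
# compared against an "object model" RDM, would reveal when illusory faces
# shift from a face-like to an object-like representation.
```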
Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing the testing of countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than was previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and advancing cognitive neuroscience.
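One way the multimodality of such data can be exploited is MEG-fMRI fusion, in which a time-resolved MEG representational dissimilarity matrix (RDM) is correlated with a static fMRI RDM over the shared object concepts. The sketch below is an assumption-laden illustration of that idea, not part of the THINGS-data release; it assumes both RDMs are already computed in condensed form over the same concepts.

```python
# A minimal sketch of MEG-fMRI "fusion" over shared object concepts.
# `meg_rdms` and `fmri_rdm` are hypothetical, precomputed inputs.
import numpy as np
from scipy.stats import spearmanr

def fusion_timecourse(meg_rdms, fmri_rdm):
    """Correlate an fMRI RDM with the MEG RDM at each time point.

    meg_rdms : (n_times, n_pairs) condensed dissimilarity vectors
    fmri_rdm : (n_pairs,) condensed dissimilarity vector for one region
    """
    return np.array([spearmanr(rdm_t, fmri_rdm)[0] for rdm_t in meg_rdms])

# The resulting timecourse indicates when the MEG signal expresses the
# representational geometry measured in that fMRI region.
```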
Numerical format describes the way a magnitude is conveyed, for example, as a digit ("3") or a Roman numeral ("III"). In the field of numerical cognition, there is an ongoing debate about whether magnitude representation is independent of numerical format. Here, we examine the time course of magnitude processing across different symbolic formats. We presented participants with a series of digits and dice patterns corresponding to the magnitudes 1 to 6 while they performed a 1-back task on magnitude. Magnetoencephalography records brain activity with high temporal resolution, and multivariate pattern analysis applied to these data allows us to draw conclusions about the brain activation patterns underlying information processing over time. The results show that we can cross-decode magnitude when training a classifier on magnitude presented in one symbolic format and testing it on the other format, suggesting a similar representation of these numerical symbols. In addition, results from a time generalization analysis show that digits were accessed slightly earlier than dice patterns, demonstrating temporal asynchronies in their shared representation of magnitude. Together, our methods distinguish format-specific signals from format-independent representations of magnitude, providing evidence for a shared representation of magnitude accessed via different symbols.
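The cross-decoding logic can be made concrete with a short sketch: train a classifier on magnitude labels from digit trials and test it on dice trials at each time point. This is an illustrative reconstruction under assumed data shapes, not the authors' code; `X`, `magnitudes`, and `is_digit` are placeholder names.

```python
# An illustrative sketch of cross-format decoding. Assumes epoched MEG data
# X of shape (n_trials, n_sensors, n_times), magnitude labels 1-6, and a
# boolean format indicator (digit vs. dice). Names are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def cross_format_decoding(X, magnitudes, is_digit):
    """Train on digit trials, test on dice trials, at every time point."""
    n_times = X.shape[-1]
    acc = np.empty(n_times)
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    for t in range(n_times):
        clf.fit(X[is_digit, :, t], magnitudes[is_digit])
        acc[t] = clf.score(X[~is_digit, :, t], magnitudes[~is_digit])
    return acc  # above-chance accuracy implies a format-independent code
```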
Colour is a defining feature of many objects, playing a crucial role in our ability to rapidly recognise things in the world around us and to make categorical distinctions. For example, colour is a useful cue when distinguishing lemons from limes or blackberries from raspberries. That means our representation of many objects includes key colour-related information. The question addressed here is whether the neural representation activated by knowing that something is red is the same as that activated when we actually see something red, particularly with regard to timing. We addressed this question using neural timeseries (magnetoencephalography, MEG) data to contrast real colour perception and implied object colour activation. We applied multivariate pattern analysis (MVPA) to the brain activation patterns evoked in each case; applying MVPA to MEG data allows us to focus on the temporal dynamics of these processes. Male and female human participants (N=18) viewed isoluminant red and green shapes as well as grey-scale, luminance-matched pictures of fruits and vegetables that are red (e.g., tomato) or green (e.g., kiwifruit) in nature. We show that the brain activation pattern evoked by real colour perception is similar to that evoked by implied colour activation, but that this pattern is instantiated at a later time. These results suggest that a common colour representation can be triggered by activating object representations from memory as well as by perceiving colours.
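The temporal offset reported here is the kind of effect a temporal generalization analysis exposes: a classifier trained on real-colour trials at one time point is tested on implied-colour trials at every other time point. The following is a minimal sketch under assumed data shapes and placeholder names (`X`, `colour`, `is_real`), not the published pipeline.

```python
# A minimal temporal-generalization sketch. Assumes MEG epochs X of shape
# (n_trials, n_sensors, n_times), red/green labels, and a boolean flag
# separating real-colour from implied-colour (grey-scale object) trials.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def generalization_matrix(X, colour, is_real):
    """Train on real colour at t_train, test on implied colour at t_test."""
    n_times = X.shape[-1]
    gat = np.empty((n_times, n_times))
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    for t_train in range(n_times):
        clf.fit(X[is_real, :, t_train], colour[is_real])
        for t_test in range(n_times):
            gat[t_train, t_test] = clf.score(X[~is_real, :, t_test],
                                             colour[~is_real])
    # Above-chance decoding off the diagonal (t_test > t_train) would reflect
    # the delayed instantiation of implied colour described above.
    return gat
```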