2014
DOI: 10.1152/jn.00394.2013

The dynamics of invariant object recognition in the human visual system

Abstract: The human visual system can rapidly recognize objects despite transformations that alter their appearance. The precise timing of when the brain computes neural representations that are invariant to particular transformations, however, has not been mapped in humans. Here we employ magnetoencephalography decoding analysis to measure the dynamics of size- and position-invariant visual information development in the ventral visual stream. With this method we can read out the identity of objects beginning as early …
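
The method at the heart of the abstract is time-resolved MEG decoding with cross-condition generalization: a classifier is trained on sensor patterns from objects shown at one size or position and tested on patterns from another, so above-chance accuracy at a given latency indicates that an identity representation invariant to that transformation has emerged by then. Below is a minimal sketch of this analysis in Python; the data shapes, variable names, and synthetic arrays are illustrative assumptions, not the authors' actual pipeline.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    # Illustrative synthetic data, shape (trials, sensors, timepoints).
    # X_small / X_large: MEG patterns for objects shown at two sizes
    # (hypothetical names); y_*: object-identity labels.
    rng = np.random.default_rng(0)
    n_trials, n_sensors, n_times = 200, 306, 120
    X_small = rng.standard_normal((n_trials, n_sensors, n_times))
    X_large = rng.standard_normal((n_trials, n_sensors, n_times))
    y_small = rng.integers(0, 2, n_trials)
    y_large = rng.integers(0, 2, n_trials)

    # Train at each time point on one size and test on the other:
    # above-chance accuracy indicates size-invariant object
    # information at that latency.
    acc = np.zeros(n_times)
    for t in range(n_times):
        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X_small[:, :, t], y_small)
        acc[t] = clf.score(X_large[:, :, t], y_large)

Sweeping the train/test roles over all size and position conditions would yield the full generalization profile whose timing the paper maps.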

Cited by 253 publications (321 citation statements). References 37 publications (51 reference statements).

Citation statements (ordered by relevance):
“…We found that the time course rose sharply after image onset, reaching significance at 50 ms (45-52 ms) and a peak at 97 ms (94-102 ms). This indicates that single scene images were discriminated early by visual representations, similar to single images with other visual content (Thorpe et al., 1996; Carlson et al., 2013; Cichy et al., 2014; Isik et al., 2014), suggesting a common source in early visual areas (Cichy et al., 2014).…”
Section: Neural Representations of Single Scene Images Emerged Early (mentioning)
confidence: 86%
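
Onset estimates like the 50 ms latency quoted above are commonly obtained by testing the group decoding time course against chance at every time point, for example with a sign-permutation test across subjects. The sketch below illustrates that generic procedure on synthetic data; it is an assumption, not the cited study's exact statistics.

    import numpy as np

    # Hypothetical per-subject decoding time courses, shape
    # (subjects, timepoints), as accuracy minus chance (0 = chance).
    rng = np.random.default_rng(1)
    n_subj, n_times, n_perm = 15, 120, 1000
    acc = 0.02 * rng.standard_normal((n_subj, n_times))
    acc[:, 40:] += 0.05  # toy above-chance effect after "stimulus onset"

    # Sign-permutation null: randomly flip each subject's sign and
    # recompute the group mean to build the chance distribution.
    observed = acc.mean(axis=0)
    null = np.empty((n_perm, n_times))
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))
        null[i] = (acc * signs).mean(axis=0)
    p_vals = (null >= observed).mean(axis=0)

    # Onset latency: first significant time point (None if none is).
    sig = p_vals < 0.05
    onset_idx = int(np.argmax(sig)) if sig.any() else None

Confidence ranges like the quoted 45-52 ms are then typically derived by repeating the estimate on bootstrap resamples of the subjects.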
“…Using multivariate pattern classification (Carlson et al., 2013; Cichy et al., 2014; Isik et al., 2014) and representational similarity analysis (Kriegeskorte, 2008; Kriegeskorte and Kievit, 2013; Cichy et al., 2014) on millisecond-resolved magnetoencephalography (MEG) data, we identified a marker of scene size around 250 ms, preceded by and distinct from an early signal for lower-level visual analysis of scene images at ~100 ms. Furthermore, we demonstrated that the scene size marker was independent of both low-level image features (i.e.…
Section: The Temporal Dynamics of Spatial Layout Processing (mentioning)
confidence: 99%
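
Representational similarity analysis, named in the statement above, compares a neural representational dissimilarity matrix (RDM), computed from MEG patterns at each time point, with a model RDM such as one coding scene size. A minimal single-time-point sketch follows; the condition counts, the binary size model, and all synthetic values are assumptions for illustration.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.stats import spearmanr

    # Hypothetical per-condition MEG patterns at one time point.
    rng = np.random.default_rng(2)
    n_cond, n_sensors = 48, 306
    patterns = rng.standard_normal((n_cond, n_sensors))

    # Neural RDM: pairwise correlation distance between conditions,
    # as a condensed vector of the upper triangle.
    neural_rdm = pdist(patterns, metric="correlation")

    # Model RDM: 0 if two scenes share a size category, 1 otherwise.
    size_labels = rng.integers(0, 2, n_cond)
    model_full = (size_labels[:, None] != size_labels[None, :]).astype(float)
    model_rdm = squareform(model_full, checks=False)

    # Rank correlation between neural and model RDMs; repeating this
    # at every time point gives the time course of size information.
    rho, p = spearmanr(neural_rdm, model_rdm)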
“…We recorded comprehensive brain activity with magnetoencephalography (MEG) in four adult human participants while they viewed face images from a large, carefully controlled set (91 face identities, with two facial expressions per identity; Fig. 1), with a sufficiently large number of trials for each face identity (104-112 trials per face identity, 9,464-10,192 trials per participant) to be able to evaluate the representation of individual face identities in each participant (26). We used MEG because it has excellent temporal resolution and sufficient spatial resolution for decoding of fine visual information from spatial patterns of neural activity (26, 27).…”
Section: Significance (mentioning)
confidence: 99%
“…1), with a sufficiently large number of trials for each face identity (104-112 trials per face identity, 9,464-10,192 trials per participant) to be able to evaluate the representation of individual face identities in each participant (26). We used MEG because it has excellent temporal resolution and sufficient spatial resolution for decoding of fine visual information from spatial patterns of neural activity (26, 27). In each participant, we used an independent functional localizer task in MEG to identify face-selective regions in right lateral occipital cortex and right fusiform gyrus.…”
Section: Significance (mentioning)
confidence: 99%
“…These feature differences are reflected in behavioral measures of perceptual similarity, such that within- and between-category perceptual similarity can be used to accurately predict the time it takes observers to categorize an object as animate or inanimate (Mohan and Arun, 2012). These perceptual differences likely contribute to MEG animacy decoding, considering that object shape and perceptual similarity can be reliably decoded from MEG and EEG patterns (Isik et al., 2014; Coggan et al., 2016; Wardle et al., 2016). Furthermore, MEG animacy decoding strength is closely related, at the exemplar level, to categorization reaction time (Ritchie et al., 2015), likely reflecting the exemplar's perceptual typicality of the category it belongs to (Mohan and Arun, 2012).…”
Section: Introduction (mentioning)
confidence: 99%
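
The exemplar-level link between decoding strength and categorization speed (Ritchie et al., 2015) is usually assessed by correlating each exemplar's distance from the classifier's decision boundary with its mean reaction time. The sketch below uses synthetic values with the negative relation built in purely for illustration; it is not the cited analysis itself.

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical exemplar-level quantities for 40 objects:
    # decode_strength: mean classifier decision value per exemplar
    # (distance from the animate/inanimate boundary);
    # rt: mean categorization reaction time per exemplar, in seconds.
    rng = np.random.default_rng(3)
    decode_strength = rng.standard_normal(40)
    rt = 0.5 - 0.05 * decode_strength + 0.02 * rng.standard_normal(40)

    # A negative rank correlation means strongly decoded exemplars
    # are categorized faster, the distance-to-bound prediction.
    rho, p = spearmanr(decode_strength, rt)
    print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")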