2005
DOI: 10.1126/science.1117593

Fast Readout of Object Identity from Macaque Inferior Temporal Cortex

Abstract: Understanding the brain computations leading to object recognition requires quantitative characterization of the information represented in inferior temporal (IT) cortex. We used a biologically plausible, classifier-based readout technique to investigate the neural coding of selectivity and invariance at the IT population level. The activity of small neuronal populations (∼100 randomly selected cells) over very short time intervals (as small as 12.5 milliseconds) contained unexpectedly accurate and robust information…
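The classifier-based readout the abstract describes can be illustrated with a toy simulation. The sketch below is not the authors' actual pipeline; it assumes a hypothetical pseudo-population of Poisson-spiking, object-tuned neurons and decodes identity from spike counts in a single short bin with a linear classifier (scikit-learn's LinearSVC), mirroring the population size and bin width quoted in the abstract.

```python
# Minimal sketch of classifier-based population readout, in the spirit of
# Hung et al. (2005). All tuning parameters here are illustrative inventions:
# we simulate IT-like neurons and decode object identity from spike counts
# in one short time bin using a linear classifier.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_neurons = 100      # ~100 randomly selected cells, as in the abstract
n_objects = 8        # number of object identities to decode
n_trials = 30        # repetitions per object
bin_ms = 12.5        # spike-count window length (milliseconds)

# Hypothetical tuning: each neuron has a preferred firing rate per object.
tuning = rng.gamma(shape=2.0, scale=10.0, size=(n_objects, n_neurons))

# Poisson spike counts in one short bin per trial.
labels = np.repeat(np.arange(n_objects), n_trials)
rates = tuning[labels] * (bin_ms / 1000.0)   # expected counts per bin
X = rng.poisson(rates).astype(float)         # trials x neurons

# Linear readout: one-vs-rest SVM on the population response vector.
clf = LinearSVC(C=1.0, max_iter=10000)
acc = cross_val_score(clf, X, labels, cv=5).mean()
print(f"decoding accuracy from one {bin_ms} ms bin: {acc:.2f}")
```

The linear classifier is the point of the exercise: it tests whether identity is explicitly available to a downstream neuron computing a weighted sum of its inputs, rather than merely present in some nonlinearly entangled form.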


Cited by 754 publications (793 citation statements)
References 32 publications (40 reference statements)
“…Neurons here respond to stimuli from large parts of the visual field, and they code for complex shapes similar to the face parts represented by the Assembly Layer (Tanaka 1996, 2003). The fact that information about object position and scale can be read out from IT neurons (Hung et al. 2005), which disagrees with the assumptions made by pure pooling models, points to the possibility of our control units residing there as well. Of course, in the cortex the mapping from V1 to IT does not happen directly, but via intermediate stages like V2 and V4.…”
Section: Discussion (contrasting)
confidence: 42%
“…While this idea contradicts the textbook view that the dorsal stream takes care of object position and the ventral stream is only responsible for object identity, there is growing evidence that these roles are not as separate as previously thought. Hung et al. (2005) have found that object position and scale can be read out from neurons in inferotemporal cortex (a high area of the ventral stream), while Konen and Kastner (2008) report representations of object identity in the dorsal stream.…”
Section: Physiological Background of Dynamic Routing (mentioning)
confidence: 99%
“…In humans, multivariate pattern analysis (MVPA) of non‐invasive electrophysiological data has shown potential to achieve a similar level of sensitivity, demonstrating rapid categorization along the ventral stream (Cauchoix, Barragan‐Jason, Serre, & Barbeau, 2014; Isik, Meyers, Leibo, & Poggio, 2014; Ramkumar, Hansen, Pannasch, & Loschky, 2016). Fast decoding of object category was achieved at ∼100 ms from small neuronal populations in primates (Hung & Poggio, 2005) and from invasively recorded responses in human visual cortex (Li & Lu, 2009). Furthermore, recent applications of MVPA to electrophysiological data have resolved face identity processing to early latencies (50–70 ms after stimulus onset; Davidesco et al., 2014; Nemrodov et al., 2016; Vida, Nestor, Plaut, & Behrmann, 2017).…”
Section: Introduction (mentioning)
confidence: 99%
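The time-resolved MVPA this statement refers to is typically implemented by training a classifier independently at each time point of the recorded epoch and tracking when category information first becomes decodable. The following sketch uses simulated data; all array shapes, effect sizes, and the 100 ms effect onset are assumptions for illustration, not values taken from any of the cited studies.

```python
# Hypothetical sketch of time-resolved MVPA decoding: fit a classifier at
# each time point of simulated sensor data and find when accuracy rises.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_trials, n_channels, n_times = 200, 64, 60   # e.g. EEG/MEG-like epochs
times_ms = np.linspace(-100, 490, n_times)    # 10 ms steps around stimulus onset
y = rng.integers(0, 2, n_trials)              # two stimulus categories

# Simulated data: a category-specific pattern emerges after ~100 ms.
X = rng.normal(size=(n_trials, n_channels, n_times))
signal = (times_ms > 100).astype(float)       # 0 before effect onset, 1 after
pattern = rng.normal(size=n_channels)
X += 0.5 * y[:, None, None] * pattern[None, :, None] * signal[None, None, :]

# Decode at each time point separately.
acc = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
above = np.flatnonzero(acc > 0.6)
if above.size:
    print(f"accuracy first exceeds 60% at ~{times_ms[above[0]]:.0f} ms")
```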
“…Many physiological studies assume that animals classify stimuli as we do (Hung et al. 2005; Kreiman et al. 2006) or explicitly train animals on categories chosen by experimenters (Freedman et al. 2001). Finally, many investigations have focused on the visual system, using patterns that are relatively stable over time.…”
Section: Introduction (mentioning)
confidence: 99%