2008
DOI: 10.1038/nature06713

Identifying natural images from human brain activity

Abstract: A challenging goal in neuroscience is to be able to read out, or decode, mental content from brain activity. Recent functional magnetic resonance imaging (fMRI) studies have decoded orientation 1,2, position 3, and object category 4,5 from activity in visual cortex. However, these studies typically used relatively simple stimuli (e.g. gratings) or images drawn from fixed categories (e.g. faces, houses), and decoding was based on prior measurements of brain activity evoked by those same stimuli or categories.…

Cited by 1,164 publications (1,416 citation statements)
References 30 publications
“…Our approach is analogous in some ways to research that focuses on lower-level visual features of picture stimuli to analyze fMRI activation associated with viewing the picture (O'Toole et al., 2005; Hardoon et al., 2007; Kay et al., 2008). A similar generative classifier is used by Kay et al. (2008), where they estimate a receptive-field model for each voxel and classify an activation pattern in terms of its similarity to the predicted brain activity.…”
Section: Classifier Model
confidence: 99%
“…A similar generative classifier is used by Kay et al. (2008), where they estimate a receptive-field model for each voxel and classify an activation pattern in terms of its similarity to the predicted brain activity. Our work differs from these efforts, in that we focus on encodings of more abstract semantic features signified by words and predict brain activity based on these semantic features, rather than on visual features that encode visual properties.…”
Section: Classifier Model
confidence: 99%
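The generative identification scheme these statements describe (fit an encoding model per voxel, then identify the viewed image as the one whose predicted activity pattern best matches the measured one) can be sketched with synthetic data. This is a minimal sketch only: the linear feature weights and random features below are assumptions for illustration, not the Gabor-wavelet receptive-field models actually fit by Kay et al. (2008).

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_images, n_features = 50, 20, 10

# Hypothetical linearized encoding model: each voxel's response is a
# weighted sum of stimulus features (a stand-in for per-voxel
# receptive-field models; weights and features are random here).
weights = rng.normal(size=(n_voxels, n_features))
features = rng.normal(size=(n_images, n_features))

predicted = features @ weights.T                 # (n_images, n_voxels)
# Simulate one noisy measured trial in which image 7 was viewed.
measured = predicted[7] + 0.3 * rng.normal(size=n_voxels)

# Identify the viewed image as the one whose predicted voxel pattern
# correlates best with the measured activity pattern.
corrs = [np.corrcoef(measured, p)[0, 1] for p in predicted]
identified = int(np.argmax(corrs))
print(identified)  # → 7
```

With low measurement noise relative to the predicted signal, the correct image wins by a wide correlation margin; identification accuracy degrades as noise grows or as the image set gets larger.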
“…This analysis rests on the assumption that the distribution of feature-selective neurons (in this case the distribution of orientation-selective columns) is not uniform across a given visual area (Boynton 2005a; Kamitani and Tong 2005; Swisher et al. 2010). Due to this nonuniform distribution of neural selectivity, a given voxel may contain more neurons tuned to one particular orientation, giving rise to a Gaussian-shaped response profile across orientations, which we refer to as the VTF (Serences et al. 2009; see also Kamitani and Tong 2005; Kay et al. 2008; Miyawaki et al. 2008).…”
Section: Analysis of Feature-Selective VTFs in Each ROI
confidence: 99%
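The Gaussian-shaped voxel tuning function (VTF) described above can be illustrated with a toy profile over orientation. The preferred orientation and bandwidth below are illustrative assumptions, not fitted parameters from the cited studies; note that orientation wraps on a 180-degree circle, so angular distance must be computed modulo 180.

```python
import numpy as np

orientations = np.arange(0, 180, 20)   # sampled stimulus orientations (deg)
preferred = 80.0                       # voxel's preferred orientation (assumed)
bandwidth = 30.0                       # tuning width (assumed)

def circ_dist(a, b, period=180.0):
    """Shortest angular distance on a 180-degree orientation circle."""
    d = np.abs(a - b) % period
    return np.minimum(d, period - d)

# Gaussian-shaped VTF: response falls off with angular distance from
# the voxel's preferred orientation.
vtf = np.exp(-0.5 * (circ_dist(orientations, preferred) / bandwidth) ** 2)

print(int(orientations[np.argmax(vtf)]))  # → 80
```

In the cited work such profiles are estimated from measured voxel responses rather than generated; the point of the sketch is only the shape of the profile and the circular treatment of orientation.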
“…A quantitative model was then used to estimate the value of each alternative on a trial-by-trial basis, and fMRI-based voxel tuning functions (or VTFs; Kay et al. 2008; Serences et al. 2009) were used to estimate the influence of value on the shape of population response profiles in early areas of visual cortex. The data suggest that response profiles in early visual cortex are selectively biased in favor of high-value stimulus features, particularly in V1, which contains a high proportion of orientation-tuned cells.…”
Section: Introduction
confidence: 99%
“…Moreover, researchers can already interpret a person's neural activity from functional magnetic resonance imaging scans at a rudimentary level 1: that the individual is thinking of a person, say, rather than a car.…”
confidence: 99%