2019
DOI: 10.1126/science.aav9436
Neural population control via deep image synthesis

Abstract: Particular deep artificial neural networks (ANNs) are today’s most accurate models of the primate brain’s ventral visual stream. Using an ANN-driven image synthesis method, we found that luminous power patterns (i.e., images) can be applied to primate retinae to predictably push the spiking activity of targeted V4 neural sites beyond naturally occurring levels. This method, although not yet perfect, achieves unprecedented independent control of the activity state of entire populations of V4 neural sites, even …
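The "ANN-driven image synthesis" the abstract refers to optimizes a stimulus so that a model neuron's predicted response rises. The paper's actual pipeline uses a deep ANN fitted to V4 recordings; the sketch below is a deliberately tiny, hypothetical stand-in (a single linear model neuron `w`, gradient ascent on a bounded "stimulus" vector) just to illustrate the idea of synthesizing a stimulus that drives a target unit:

```python
import numpy as np

# Toy illustration (NOT the paper's code): gradient-ascend a stimulus x
# to maximize the response of a model neuron with weights w.
rng = np.random.default_rng(0)
w = rng.normal(size=64)      # stands in for the fitted ANN's readout
x = np.zeros(64)             # start from a blank stimulus

def response(stim):
    """Scalar response of the toy model neuron."""
    return float(w @ stim)

for _ in range(100):
    # For a linear unit the gradient of the response w.r.t. x is just w.
    x = np.clip(x + 0.1 * w, -1.0, 1.0)   # keep "pixels" in a bounded range
```

After optimization the synthesized stimulus is essentially the sign pattern of `w`, which is the stimulus that maximizes a linear unit under a box constraint.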


Cited by 331 publications (312 citation statements)
References 34 publications (54 reference statements)
“…Recently, a family of computational models has emerged in the form of deep convolutional neural networks (DNNs) that allow this hierarchical information processing to be simulated. When trained on object recognition, DNNs show interesting commonalities with the primate ventral stream, with a progression of representations that is surprisingly similar to what is seen in monkeys and humans for brief stimulus presentations (Cadieu et al., 2014; Yamins et al., 2014; Güçlü and van Gerven, 2015; Kalfas et al., 2017, 2018; Pospisil et al., 2018; Bashivan et al., 2019), thereby capturing important aspects of object recognition and perceived shape similarity (Yamins et al., 2014; Kubilius et al., 2016; Kalfas et al., 2018). The architecture of these models is composed of a series of convolutional layers that perform local filtering operations, followed by fully connected layers, which gradually transform pixel-level inputs into a high-level representational space where object categories are linearly separable.…”
Section: Introduction (mentioning)
confidence: 77%
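The architecture described in this excerpt (local convolutional filtering followed by a fully connected readout into a linearly separable space) can be sketched minimally as below. All shapes and the single random filter are illustrative assumptions, not any particular published network:

```python
import numpy as np

# Minimal sketch: one local-filtering (convolution + ReLU) stage,
# then a fully connected readout producing class scores.
rng = np.random.default_rng(1)
image = rng.normal(size=(8, 8))      # toy "pixel-level input"
kernel = rng.normal(size=(3, 3))     # one local filter

def conv2d_valid(img, k):
    """Slide a 3x3 filter over the image with no padding, then rectify."""
    h, w = img.shape[0] - 2, img.shape[1] - 2
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return np.maximum(out, 0.0)      # ReLU nonlinearity

features = conv2d_valid(image, kernel).ravel()  # 6x6 map -> 36-dim representation
W = rng.normal(size=(10, features.size))        # fully connected readout
scores = W @ features                           # high-level space for 10 classes
```

Real networks stack many such convolutional stages before the readout; the point here is only the two-stage structure the excerpt names.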
“…In order to establish a more causal relationship between visual areas and behavior, it will be important to combine behavioral performance with neural activity manipulation. Neural network models and the inception-loop methodology will enable characterization of the specific features that drive neurons in these different visual areas (Bashivan et al., 2019; Ponce et al., 2019; Walker et al., 2019).…”
mentioning
confidence: 99%
“…These results show that XDream can efficiently create images that trigger high activations in a target unit without making assumptions about the type of images a unit may prefer and without any knowledge of the target model’s architecture or connectivity, suggesting that XDream may well be applicable to biological neurons. Furthermore, XDream generalizes across layers in a ConvNet, and since different layers roughly correspond to areas along the ventral visual stream [17,32,33], XDream may also generalize to several ventral-stream areas. Consistent with this observation, results from [13] indicated that XDream can find optimized stimuli for V1 as well as inferior temporal cortex (IT) neurons.…”
Section: PLOS Computational Biology (mentioning)
confidence: 99%
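The gradient-free, black-box character of XDream that this excerpt emphasizes (only scalar responses are observed, no access to architecture or gradients) can be illustrated with a toy mutate-and-select loop. The hidden weight vector, step size, and iteration count below are hypothetical stand-ins; XDream itself uses a genetic algorithm over a learned image generator:

```python
import numpy as np

# Toy black-box stimulus optimization: the "neuron" is hidden from the
# optimizer, which sees only scalar activations and improves the best
# stimulus by random mutation and greedy selection (no gradients).
rng = np.random.default_rng(2)
hidden_w = rng.normal(size=100)          # unknown to the optimizer

def activation(stim):
    """Black-box oracle: returns only a scalar response."""
    return float(hidden_w @ stim)

best = np.zeros(100)                     # initial stimulus
best_act = activation(best)
for _ in range(500):
    candidate = np.clip(best + 0.05 * rng.normal(size=100), -1.0, 1.0)
    act = activation(candidate)
    if act > best_act:                   # keep the mutation only if it helps
        best, best_act = candidate, act
```

Because selection never consults `hidden_w` directly, the same loop would work unchanged if `activation` were replaced by recorded firing rates, which is the sense in which such methods may transfer to biological neurons.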