2021
DOI: 10.1371/journal.pcbi.1009028

Learning divisive normalization in primary visual cortex

Abstract: Divisive normalization (DN) is a prominent computational building block in the brain that has been proposed as a canonical cortical operation. Numerous experimental studies have verified its importance for capturing nonlinear neural response properties to simple, artificial stimuli, and computational studies suggest that DN is also an important component for processing natural stimuli. However, we lack quantitative models of DN that are directly informed by measurements of spiking responses in the brain and ap…
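For context, divisive normalization divides each unit's driving input by a pooled signal from a population of neighboring units. A standard textbook formulation (after Heeger, 1992) is sketched below; the symbols are generic and not the specific parameterization learned in the paper:

```latex
% r_i:    normalized response of unit i
% L_i:    linear (rectified) drive of unit i,  n: exponent
% sigma:  semi-saturation constant
% w_ij:   weights of the normalization pool
r_i \;=\; \gamma \, \frac{L_i^{\,n}}{\sigma^{n} + \sum_{j} w_{ij}\, L_j^{\,n}}
```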



Citations: Cited by 26 publications (17 citation statements)
References: 55 publications
“…These two approaches (goal-driven and measurement-driven deep models) have been thoroughly compared in V1 and were found to be superior to linear filter-banks and simple linear–nonlinear models (Cadena et al., 2019). However, more recently, the same team has shown that linear–nonlinear models with general divisive normalization make a significant step towards the performance of state-of-the-art CNNs with interpretable parameters (Burg et al., 2021).…”
Section: Discussion (mentioning)
confidence: 99%
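To make the model class in the statement above concrete, here is a minimal NumPy sketch of a linear–nonlinear stage followed by divisive normalization. The names and array layouts (`filters`, `pool_weights`) are illustrative assumptions, not the architecture of Burg et al. (2021):

```python
import numpy as np

def ln_dn_response(stimulus, filters, pool_weights, sigma=1.0, n=2.0):
    """LN stage followed by divisive normalization (illustrative sketch).

    stimulus:     flattened image, shape (D,)             [assumed layout]
    filters:      K linear receptive fields, shape (K, D)
    pool_weights: (K, K) normalization pool; entry [i, j] sets how
                  strongly unit j suppresses unit i
    """
    drive = np.maximum(filters @ stimulus, 0.0) ** n   # rectified, exponentiated drive
    pool = pool_weights @ drive                        # pooled suppressive signal
    return drive / (sigma ** n + pool)                 # divisive normalization
```

Setting `pool_weights` to the identity reduces this to a purely self-normalizing LN model; learning the pool from data is, roughly, the flexibility the quoted statement calls "general" divisive normalization.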
“…Following the move from conventional CNNs in Cadena et al. (2019) to more realistic divisive normalization models in Burg et al. (2021), we think that future goal-driven derivations of low-level visual psychophysics (e.g., pattern masking or perceptual distortion) should include more realistic architectures too, as opposed to conventional CNNs (although they may be flexible enough to fulfill the goal).…”
Section: Discussion (mentioning)
confidence: 99%
“…We believe that a fruitful avenue for gaining insight into the brain's functional processing is to iteratively combine both approaches: developing new models to push the state-of-the-art predictive performance and at the same time extracting knowledge by simplifying complex models or by analyzing models post-hoc. For example, Burg et al. (2021) simplified the state-of-the-art model by Cadena et al. (2019), showing that divisive normalization accounts for most but not all of its performance; and Ustyuzhaninov et al. (2022) simplified and analyzed the representations learned by a high-performing complex model, revealing a combinatorial code of non-linear computations in mouse V1. Additionally, high-performing predictive models may also benefit computational neuroscientists by serving as digital twins, creating an in silico environment in which hypotheses may be developed and refined before returning to the in vivo system for validation (Bashivan et al., 2019; Franke et al., 2021; Ponce et al., 2019; Walker et al., 2019).…”
Section: Discussion (mentioning)
confidence: 99%
“…The work on predictive models of neural responses to visual inputs has a long history that includes simple linear-nonlinear (LN) models (Heeger, 1992a,b; Jones & Palmer, 1987), energy models (Adelson & Bergen, 1985), more general subunit/LN-LN models (Rust et al., 2005; Schwartz et al., 2006; Touryan et al., 2005; Vintch et al., 2015), and multi-layer neural network models (Lau et al., 2002; Lehky et al., 1992; Prenger et al., 2004; Zipser & Andersen, 1988). The deep learning revolution set new standards in prediction performance by leveraging task-optimized deep convolutional neural networks (CNNs) (Cadena et al., 2019; Cadieu et al., 2014; Yamins et al., 2014) and CNN-based architectures incorporating a shared encoding learned end-to-end for thousands of neurons (Antolík et al., 2016; Bashiri et al., 2021; Batty et al., 2016; Burg et al., 2021; Cadena et al., 2019; Cowley & Pillow, 2020; Ecker et al., 2018; Franke et al., 2021; Kindel et al., 2017; Klindt et al., 2017; Lurz et al., 2020; McIntosh et al., 2016; Sinz et al., 2018; Walker et al., 2019; Zhang et al., 2018).…”
Section: Introduction (mentioning)
confidence: 99%
“…Regarding the nature of γ, the above example shows that considering spatial neighborhoods is convenient for achieving the local contrast equalization across the visual field needed to overcome shadow, scattering, or fog. The recent literature that exploits automatic differentiation shows a range of kernel structures: some do not consider spatial interactions (using either dense [11,16,24] or convolutional [20] combinations of features), while others do, with some restrictions (either uniform weights [29], a ring of locations [18,19], or special symmetries in space [30]). Following biology [3,8,21,28] and the intuition pointed out in the previous section.…”
Section: Models and Experiments (mentioning)
confidence: 99%
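The kernel structures surveyed in this last statement (uniform weights, a ring of locations, spatial symmetries) can be illustrated with a spatial variant in which the normalization pool is a convolution over neighboring locations. A minimal sketch; the kernel choices are illustrative, not taken from the cited models:

```python
import numpy as np
from scipy.ndimage import convolve

def spatial_dn(feature_map, pool_kernel, sigma=1.0, n=2.0):
    """Divisive normalization with a spatial pooling kernel (sketch).

    feature_map: 2-D array of linear filter outputs over space
    pool_kernel: 2-D kernel defining the normalization neighborhood
    """
    drive = np.abs(feature_map) ** n                     # local energy
    pool = convolve(drive, pool_kernel, mode="nearest")  # spatial normalization pool
    return drive / (sigma ** n + pool)

# A uniform window corresponds to the "uniform weights" case [29];
# zeroing the center of a window gives a "ring of locations" pool [18,19].
ring = np.ones((5, 5)) / 24.0
ring[2, 2] = 0.0
```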