2017
DOI: 10.1073/pnas.1711114115

Toward a unified theory of efficient, predictive, and sparse coding

Abstract: A central goal in theoretical neuroscience is to predict the response properties of sensory neurons from first principles. To this end, "efficient coding" posits that sensory neurons encode maximal information about their inputs given internal constraints. There exist, however, many variants of efficient coding (e.g., redundancy reduction, different formulations of predictive coding, robust coding, sparse coding, etc.), differing in their regimes of applicability, in the relevance of signals to be encoded, and…
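
For orientation only, and not as an equation quoted from the paper: the premise that neurons "encode maximal information about their inputs given internal constraints" is conventionally written as a constrained mutual-information maximization, which the variants named above (predictive, robust, sparse coding) can loosely be read as specializing through different choices of encoded signal and constraint.

    % Generic efficient-coding objective (a textbook sketch, not the paper's specific model):
    % choose the encoding p(r|x) of input x into response r that maximizes the information
    % R carries about X, subject to a resource cost C (e.g., metabolic cost or noise level).
    \max_{p(r \mid x)} \; I(X; R)
    \quad \text{subject to} \quad \mathbb{E}\left[ C(R) \right] \le C_{\max},
    \qquad
    I(X; R) = \iint p(x)\, p(r \mid x)\, \log \frac{p(r \mid x)}{p(r)} \, dx \, dr .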

Cited by 146 publications (198 citation statements)
References 44 publications (66 reference statements)
“…More generally, since slowness has been related to predictability 27,28, our results are also consistent with normative approaches to sensory processing that are based on temporal prediction 29. On the other hand, our findings, by showing that exposure to the spatial structure of natural images alone is not enough to yield complex cells, reject computational accounts of invariance based on USL 11,12, while leaving open the possibility that the latter may govern the development of shape tuning 13–16.…”
supporting
confidence: 86%
“…This illustrates that the relationships between neural representations can themselves contain extractable information about the expected structure of the world 26, which we quantified by examining decoding directions along which the neural state best discriminates task variables of interest. How information can be decoded must depend on how it has been encoded 27–29, and others have used this to propose encoding schemes based on single-neuron stimulus tuning/filtering properties that optimize various decoding-based criteria 28–33. Our approach differs in considering neural-state directions to be the basic encoding unit, and in the consequences of how neural noise and statistical correlations between task variables modify the relationship between decoding and encoding directions.…”
Section: Introduction
mentioning
confidence: 99%
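
A minimal sketch of the idea in this excerpt, assuming a linear readout and simulated data (the array names, the simulated encoding direction, and the least-squares fit are illustrative assumptions, not the authors' analysis): a "decoding direction" is a weight vector along which population activity best discriminates a task variable, and with isotropic noise it aligns with the direction along which that variable was encoded.

    # Hypothetical illustration (not the cited study's code): estimate a linear decoding
    # direction w for a binary task variable from simulated population activity, and
    # compare it to the encoding direction v used to generate the data.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_neurons = 400, 50

    # Simulated task variable (e.g., stimulus A vs B) and noisy responses whose mean
    # shifts along a fixed encoding direction v.
    s = rng.integers(0, 2, n_trials)                   # task variable, 0 or 1
    v = rng.normal(size=n_neurons)
    v /= np.linalg.norm(v)                             # encoding direction (unit norm)
    R = np.outer(s - 0.5, v) * 2.0 + rng.normal(size=(n_trials, n_neurons))

    # Decoding direction: least-squares weights mapping centered activity to the task variable.
    w, *_ = np.linalg.lstsq(R - R.mean(0), s - s.mean(), rcond=None)
    w /= np.linalg.norm(w)

    # With isotropic noise the decoding direction aligns with the encoding direction;
    # correlated noise or correlated task variables would rotate it away.
    print("alignment |cos(w, v)|:", abs(w @ v))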
“…In other words, continuous changes that occur from frame to frame in a dynamic visual input should elicit smaller changes in the neural code of deeper areas compared to peripheral areas, resulting in progressively slower dynamics. On the other hand, the temporal stability of a code has been shown to be at odds with its information-theoretic efficiency in encoding the past of the stimulus, as this involves temporal decorrelation (Chalk et al. 2018; Dan et al. 1996). Moreover, adaptive and top-down mechanisms (e.g., prediction error signals) have been identified in visual cortex (Gilbert and Li 2013; Webster 2015) that favor the encoding of surprising or transient inputs over predictable or sustained ones (Issa et al. 2018; Siegle et al. 2019; Stigliani et al. 2019; Vinken et al. 2017).…”
Section: Discussion
mentioning
confidence: 99%
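
A toy numerical sketch of the trade-off described in this excerpt, under assumptions of my own (an AR(1) "naturalistic" input, a moving-average filter standing in for a slow code, and first differences standing in for temporal decorrelation; none of this is from the cited papers): the slow code keeps a high lag-1 autocorrelation, while the decorrelating code drives it toward zero on the same input.

    # Illustrative only: temporal stability (slowness) vs temporal decorrelation.
    import numpy as np

    rng = np.random.default_rng(1)

    # Temporally correlated input: an AR(1) process, a crude stand-in for natural dynamics.
    T, rho = 10_000, 0.95
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.normal()

    def lag1_autocorr(y):
        y = y - y.mean()
        return (y[:-1] @ y[1:]) / (y @ y)

    slow_code = np.convolve(x, np.ones(50) / 50, mode="valid")   # temporal smoothing ("slow" code)
    decorr_code = np.diff(x)                                     # differencing ("decorrelated" code)

    print("input lag-1 autocorr:       ", round(lag1_autocorr(x), 3))
    print("slow code lag-1 autocorr:   ", round(lag1_autocorr(slow_code), 3))
    print("decorrelated code autocorr: ", round(lag1_autocorr(decorr_code), 3))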