2008
DOI: 10.1162/neco.2008.20.1.91
Bayesian Spiking Neurons I: Inference

Abstract: We show that the dynamics of spiking neurons can be interpreted as a form of Bayesian inference in time. Neurons that optimally integrate evidence about events in the external world exhibit properties similar to leaky integrate-and-fire neurons with spike-dependent adaptation and maximally respond to fluctuations of their input. Spikes signal the occurrence of new information: what cannot be predicted from the past activity. As a result, firing statistics are close to Poisson, albeit providing a deterministic r…
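The leaky integrate-and-fire dynamics with spike-dependent adaptation described in the abstract can be sketched as follows. This is an illustrative toy model, not the paper's derivation; all parameter values and the function name are assumptions chosen for demonstration.

```python
import numpy as np

def lif_with_adaptation(inputs, dt=1e-3, tau=0.02, threshold=1.0,
                        tau_adapt=0.1, adapt_jump=0.5):
    """Toy leaky integrate-and-fire neuron with spike-dependent adaptation.

    `inputs` is a sequence of input currents, one per time step of size `dt`.
    Returns the indices of time steps at which the neuron spiked.
    All parameters are illustrative, not taken from the paper.
    """
    v, a = 0.0, 0.0                        # membrane potential, adaptation variable
    spikes = []
    for t, i_t in enumerate(inputs):
        v += dt / tau * (-v + i_t - a)     # leaky integration, minus adaptation current
        a += dt / tau_adapt * (-a)         # adaptation decays between spikes
        if v >= threshold:
            spikes.append(t)
            v = 0.0                        # reset after a spike
            a += adapt_jump                # spike-dependent adaptation kicks in
    return spikes
```

Under constant input, the adaptation variable accumulates with each spike, so interspike intervals grow over time, which is the qualitative signature of spike-frequency adaptation.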

Cited by 252 publications (263 citation statements)
References 32 publications
“…This process theory associates the expected probability of a state with the probability of a neuron (or population) firing and the logarithm of this probability with postsynaptic membrane potential. This fits comfortably with theoretical proposals and empirical work on the accumulation of evidence (Kira, Yang, & Shadlen, 2015) and the neuronal encoding of probabilities (Deneve, 2008), while rendering the softmax function a (sigmoid) activation function that converts membrane potentials to firing rates. The postsynaptic depolarization caused by afferent input can now be interpreted in terms of free energy gradients (i.e., state prediction errors) that are linear mixtures of firing rates in other neurons (or populations).…”
Section: Belief Updating and Belief Propagation (supporting)
confidence: 81%
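The mapping described in the excerpt above, where the logarithm of a state's probability plays the role of membrane potential and a softmax converts potentials back to firing rates, can be sketched numerically. The numbers below are illustrative assumptions, not values from either paper.

```python
import numpy as np

def softmax(u):
    """Softmax over 'membrane potentials' u (log-probabilities up to a constant)."""
    e = np.exp(u - np.max(u))   # subtract the max for numerical stability
    return e / e.sum()

# Illustrative: membrane potentials set to log-probabilities of three states
probs = np.array([0.7, 0.2, 0.1])
potentials = np.log(probs)      # log-probability as membrane potential
rates = softmax(potentials)     # softmax recovers the original probabilities
```

Because softmax inverts the logarithm (up to normalization), `rates` equals the original state probabilities exactly; shifting all potentials by a common constant leaves the result unchanged.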
“…This makes the recognition density an approximate conditional density. This corresponds to Bayesian inference on the causes of sensory signals and provides a principled account of perception, i.e., the Bayesian brain (Helmholtz 1860/1962; Barlow 1969; Ballard et al. 1983; Mumford 1992; Dayan et al. 1995; Rao and Ballard 1998; Lee and Mumford 2003; Knill and Pouget 2004; Kersten et al. 2004; Friston and Stephan 2007; Deneve 2008). Finally, it shows that free-energy is an upper bound on surprise because the divergence cannot be less than zero: Optimizing the recognition density makes the free-energy a tight bound on surprise; when the recognition and conditional densities coincide, free-energy is exactly surprise and perception is veridical.…”
Section: Free-Energy, Action, and Perception (mentioning)
confidence: 99%
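The bound stated in the excerpt above, that free energy upper-bounds surprise and becomes exact when the recognition density matches the true posterior, follows from the decomposition F = KL(q ‖ p(s|o)) − log p(o) with KL ≥ 0. A minimal numerical check, using made-up joint probabilities:

```python
import numpy as np

# Illustrative joint p(s, o) over two hidden states s for a single observed o
p_joint = np.array([0.3, 0.1])    # p(s, o) for s in {0, 1}
p_o = p_joint.sum()               # model evidence p(o) = 0.4
surprise = -np.log(p_o)           # surprise = -log p(o)

def free_energy(q):
    """F = E_q[log q(s) - log p(s, o)] = KL(q || p(s|o)) - log p(o)."""
    return np.sum(q * (np.log(q) - np.log(p_joint)))

q_bad = np.array([0.5, 0.5])      # an arbitrary recognition density
q_opt = p_joint / p_o             # the exact posterior p(s|o)
```

For any `q`, `free_energy(q)` is at least `surprise`, and the two coincide exactly at `q_opt`, where the KL divergence term vanishes.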
“…Much of this research, however, relies on noisy, stochastic spiking neurons that are characterized by a spike density, and Bayesian inference is implicitly carried out by large populations of such neurons [9, 188, 141, 183, 133, 49, 17, 64, 95]. As noted by Deneve [38], coding probabilities with stochastic neurons "has two major drawbacks. First, [...], it adds uncertainty, and therefore noise, to an otherwise deterministic probability computation.…”
Section: Other SNN Research Tracks (mentioning)
confidence: 99%
“…In [38, 39], an alternative approach is developed for binary log-likelihood estimation in an SNN. Such binary log-likelihood estimation in an SNN has some known limitations: it can only perform exact inference in a limited family of generative models, and in a hierarchical model, only the objects highest in the hierarchy truly have a temporal dynamic.…”
Section: Other SNN Research Tracks (mentioning)
confidence: 99%