2009
DOI: 10.1080/09548980902950891

Estimating linear–nonlinear models using Rényi divergences

Abstract: This paper compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear–nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies the change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of R…
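The dimension search the abstract describes can be illustrated with a minimal sketch. Everything here is hypothetical and synthetic, not the paper's estimators: a candidate direction is scored by a crude histogram-based Rényi divergence (order α = 2) between the spike-triggered and prior distributions of the stimulus projected onto that direction, and a direction aligned with the true filter scores higher than a random one.

```python
import numpy as np

rng = np.random.default_rng(0)

def renyi_divergence(p, q, alpha=2.0, eps=1e-12):
    """Renyi divergence D_alpha(P || Q) between two histograms."""
    p = p + eps
    q = q + eps
    p = p / p.sum()
    q = q / q.sum()
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

# Hypothetical LN neuron: spike probability is a sigmoid of one stimulus projection.
dim, n = 20, 50_000
true_filter = rng.standard_normal(dim)
true_filter /= np.linalg.norm(true_filter)
stimuli = rng.standard_normal((n, dim))
drive = stimuli @ true_filter
spikes = rng.random(n) < 1.0 / (1.0 + np.exp(-3.0 * (drive - 1.0)))

def projected_divergence(v, alpha=2.0, bins=30):
    """Score a candidate direction v by the Renyi divergence between the
    spike-triggered and prior distributions of the projection onto v."""
    proj = stimuli @ (v / np.linalg.norm(v))
    edges = np.linspace(proj.min(), proj.max(), bins + 1)
    prior, _ = np.histogram(proj, edges)
    triggered, _ = np.histogram(proj[spikes], edges)
    return renyi_divergence(triggered.astype(float), prior.astype(float), alpha)

sta = stimuli[spikes].mean(axis=0)      # spike-triggered average as a candidate
random_dir = rng.standard_normal(dim)
print(projected_divergence(sta), projected_divergence(random_dir))
```

In a real analysis the divergence would be maximized over directions (e.g. by gradient ascent) rather than evaluated at the spike-triggered average, which is only a convenient starting point for Gaussian stimuli.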

Cited by 28 publications (47 citation statements)
References 37 publications
“…The LLx is in units of bits per spike, and is measured relative to the “null” model, which assumes a mean firing rate with no stimulus-tuned elements, making the LLx comparable to the model’s single-spike information (Brenner et al, 2000; Kouh and Sharpee, 2009) (see Methods). For the neuron considered above, the GN model gives a 94% improvement in the LLx, with LLx(LN) = 1.64 bits/spk and LLx(GN) = 3.18 bits/spk.…”
Section: Results
confidence: 99%
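The baseline-adjusted likelihood these excerpts describe can be sketched as follows. `ll_per_spike` is a hypothetical helper, not the authors' code: it assumes Poisson spiking in fixed time bins and reports the Poisson log-likelihood of a rate prediction minus that of a null model predicting the mean rate, in bits per spike.

```python
import numpy as np

def ll_per_spike(rates, counts, dt):
    """Poisson log-likelihood of binned spike counts under predicted rates
    (spikes/s), minus the log-likelihood of a null model that predicts the
    mean rate in every bin, converted to bits per spike. The log-factorial
    term of the Poisson likelihood cancels in the difference."""
    rates = np.maximum(np.asarray(rates, dtype=float), 1e-12)
    counts = np.asarray(counts, dtype=float)
    null_rate = counts.sum() / (len(counts) * dt)
    ll_model = np.sum(counts * np.log(rates * dt) - rates * dt)
    ll_null = np.sum(counts * np.log(null_rate * dt) - null_rate * dt)
    return (ll_model - ll_null) / (counts.sum() * np.log(2.0))

# A prediction that concentrates rate on the bin containing the spikes scores
# above zero; the flat null model scores exactly zero by construction.
counts = np.array([0.0, 0.0, 5.0, 0.0, 0.0])
print(ll_per_spike([1.0, 1.0, 46.0, 1.0, 1.0], counts, dt=0.1))
print(ll_per_spike([10.0] * 5, counts, dt=0.1))
```

Because the measure is a difference of log-likelihoods per spike, it is bounded above by the single-spike information, consistent with the comparison drawn in the excerpts.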
“…4A). Mutual information is a measure that is proportional to the log likelihood of the LN model (38). It should be noted that these percentages relate to the amount of "explainable variance."…”
Section: Results
confidence: 99%
“…The values of LL reported are adjusted by a baseline LL, defined by the LL of a model that predicted a stimulus-independent mean firing. As a result, the LL is larger to the degree that it achieves a better explanation of the data than this null model, and it is bounded above by the single-spike information (Kouh and Sharpee 2009). It is reported in units of bits per spike.…”
Section: Methods
confidence: 98%