2022
DOI: 10.1101/2022.03.19.484958
Preprint

Unsupervised approach to decomposing neural tuning variability

Abstract: Neural representation is often described by the tuning curves of individual neurons with respect to certain stimulus variables. Despite this tradition, it has become increasingly clear that neural tuning can vary substantially in accordance with a collection of internal and external factors. A challenge we are facing is the lack of appropriate methods to accurately capture trial-to-trial tuning variability directly from the noisy neural responses. Here we introduce an unsupervised statistical approach, Poisson…
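As a rough illustration of the trial-to-trial tuning variability described in the abstract, the following Python sketch simulates Poisson spike counts from a tuning curve whose gain fluctuates across trials. It is hypothetical: the von Mises tuning-curve parameters and the log-normal gain distribution are assumptions for illustration, not the preprint's actual model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical von Mises tuning curve for a single neuron (illustrative parameters).
    def tuning_curve(theta, preferred=np.pi / 2, amplitude=20.0, width=2.0, baseline=2.0):
        return baseline + amplitude * np.exp(width * (np.cos(theta - preferred) - 1.0))

    n_trials = 200
    stimuli = rng.uniform(0.0, 2.0 * np.pi, size=n_trials)

    # Trial-to-trial tuning variability: the gain of the curve fluctuates across trials.
    gains = rng.lognormal(mean=0.0, sigma=0.3, size=n_trials)
    rates = gains * tuning_curve(stimuli)

    # Observed spike counts are Poisson given the trial-specific firing rate.
    counts = rng.poisson(rates)

    # At a fixed stimulus, the count variance exceeds the Poisson prediction
    # (variance = mean) because of the shared gain fluctuations.
    print(counts.mean(), counts.var())

Recovering such gain fluctuations without trial labels, directly from noisy spike counts, is the kind of unsupervised decomposition problem the abstract describes.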

Cited by 2 publications (2 citation statements)
References 102 publications
“…Second, as assumed by our work, neural variability is partitioned into shared and private variability. In general, the first method is used for explaining the variability of individual neurons (A. K. Churchland et al, 2011;Goris, Movshon, and E. P. Simoncelli, 2014;Zhu and Wei, 2023), whereas the second method is used for explaining variability in simultaneously recorded neuronal populations (Rabinowitz et al, 2015;Lin et al, 2015;Arandia-Romero et al, 2016). In our study, we extend the findings in the second framework in four key directions: 1) by comparing three previously proposed forms of modulation (additive, multiplicative, affine) to an unrestricted form (generalized), we provide evidence that affine models offer a parsimonious explanation for how shared variability is modulated by stimulus orientations; 2) we establish a direct link between the statistical models and a neural circuit model, offering a straightforward mechanism for the observed affine shared variability; 3) we identified an alternative form of stimulus-dependence of shared variability (generalized affine), which arises when stimulus strength (contrast) is varied; 4) we broaden the framework to explain variability shared between two connected brain areas, demonstrating that variability shared between V1 and V2 also exhibits an affine pattern across stimulus orientations.…”
Section: Novelty of Our Work
mentioning
confidence: 99%
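To make the forms of modulation compared in the statement above concrete, here is a minimal Python sketch of additive, multiplicative, and affine shared modulation driven by a single latent variable per trial. The population rates, gain strength, and latent distribution are illustrative assumptions, not parameters from the cited papers.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical mean firing rates of a small population at one stimulus (illustrative only).
    mean_rates = np.array([5.0, 12.0, 20.0, 8.0])
    n_trials = 500

    # A single shared latent per trial drives the co-fluctuations.
    latent = rng.normal(0.0, 1.0, size=n_trials)[:, None]

    # Three forms of shared modulation discussed above:
    additive = mean_rates + 1.5 * latent                         # common offset
    multiplicative = mean_rates * (1.0 + 0.2 * latent)           # common gain
    affine = mean_rates * (1.0 + 0.2 * latent) + 1.5 * latent    # gain and offset together

    # Conditionally Poisson spike counts given the (rectified) trial-specific rates.
    counts = rng.poisson(np.clip(affine, 0.0, None))
    print(counts.shape)  # (n_trials, n_neurons)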
“…Also, varying the time window did not qualitatively change our results. We used a simple multivariate Poisson log-normal (MPLN) model (Supplementary Note 3, see also refs. [53–56]) to estimate the trial-by-trial variability of population firing rates.…”
mentioning
confidence: 99%
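For context, a multivariate Poisson log-normal model of the kind mentioned above treats the log firing rates of a population as jointly Gaussian across neurons, with spike counts conditionally Poisson given those rates. The sketch below samples from such a model; the dimensions, mean rates, and covariance structure are illustrative assumptions, not the values used in the citing paper.

    import numpy as np

    rng = np.random.default_rng(2)

    # Multivariate Poisson log-normal (MPLN): log rates are jointly Gaussian,
    # spike counts are conditionally Poisson. All parameters below are illustrative.
    n_neurons, n_trials = 5, 1000
    mu = np.log(np.array([3.0, 8.0, 15.0, 6.0, 10.0]))    # hypothetical mean log rates

    # Hypothetical covariance: a shared (rank-one) component plus private (diagonal) variance.
    shared = 0.2 * np.ones((n_neurons, 1))
    cov = shared @ shared.T + 0.05 * np.eye(n_neurons)

    log_rates = rng.multivariate_normal(mu, cov, size=n_trials)   # (n_trials, n_neurons)
    counts = rng.poisson(np.exp(log_rates))

    # The trial-by-trial covariance of counts reflects both private Poisson noise
    # and the correlated log-rate fluctuations (shared variability).
    print(np.cov(counts, rowvar=False).round(2))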