2014
DOI: 10.3389/fncom.2014.00058

Population coding in mouse visual cortex: response reliability and dissociability of stimulus tuning and noise correlation

Abstract: The primary visual cortex is an excellent model system for investigating how neuronal populations encode information, because of well-documented relationships between stimulus characteristics and neuronal activation patterns. We used two-photon calcium imaging data to relate the performance of different methods for studying population coding (population vectors, template matching, and Bayesian decoding algorithms) to their underlying assumptions. We show that the variability of neuronal responses may hamper th…
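The abstract compares population-vector, template-matching, and Bayesian decoders. As a rough illustration of the last of these, here is a minimal Poisson naive-Bayes decoding sketch; the function name, array layout, and the independent-Poisson assumption are mine and not the paper's exact implementation.

```python
import numpy as np

def bayesian_decode(train_resp, train_labels, test_resp):
    """Poisson naive-Bayes decoder: pick the stimulus that maximises the
    log-likelihood of the observed population response, assuming
    independent Poisson-like responses per cell (a common simplification).

    train_resp  : (n_train_trials, n_cells) non-negative responses
    train_labels: (n_train_trials,) stimulus label per training trial
    test_resp   : (n_test_trials, n_cells) trials to decode
    """
    train_labels = np.asarray(train_labels)
    stimuli = np.unique(train_labels)
    # mean response of each cell to each stimulus; small floor avoids log(0)
    rates = np.vstack([train_resp[train_labels == s].mean(axis=0)
                       for s in stimuli]) + 1e-6          # (n_stim, n_cells)
    # Poisson log-likelihood up to terms that do not depend on the stimulus
    loglik = test_resp @ np.log(rates).T - rates.sum(axis=1)
    return stimuli[np.argmax(loglik, axis=1)]
```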

Cited by 75 publications (83 citation statements) | References 67 publications
“…Mean Fano factor across all cells in LGN was 1.11–1.99, depending on the stimulus, and 0.66–0.91 in V1 (Figure 1E, Fig. S3), slightly lower than has previously been reported for Ca2+ imaging in awake mice (1.39, ref. 46).…”
Section: Trial-to-trial Variability in Awake Mice
confidence: 99%
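The quoted comparison rests on the Fano factor, i.e. the across-trial variance of a cell's response divided by its mean. A minimal numpy sketch of that computation follows; the function name and the (n_trials, n_cells) array layout are illustrative assumptions, not taken from either paper.

```python
import numpy as np

def fano_factors(responses):
    """Per-cell Fano factor (variance / mean) of trial-to-trial responses.

    responses : array, shape (n_trials, n_cells)
        Response of each cell on repeats of the same stimulus
        (e.g. spike counts or deconvolved event counts).
    """
    mean = responses.mean(axis=0)
    var = responses.var(axis=0, ddof=1)   # unbiased across-trial variance
    return var / mean                     # silent cells (mean 0) give nan

# Illustrative use: 40 repeats of one stimulus, 100 cells
rng = np.random.default_rng(0)
counts = rng.poisson(lam=3.0, size=(40, 100))   # Poisson data -> Fano ~ 1
print(fano_factors(counts).mean())
```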
“…Furthermore, at the cellular scale, we failed to find any systematic difference between binocular and monocular orientation selectivity in either naive or experienced animals (Supplementary Figure 3B). Because three separate orientation representations emerge in the developing visual cortex prior to visual experience, we investigated whether one of the representations was consistently further developed in encoding orientation-specific information at the cellular scale. Here we employed a template-matching decoder to predict the stimulus orientation presented on each trial by comparing trial-evoked population activity patterns to the best-matching trial-averaged response pattern (Figure 4L) (Montijn et al., 2014). In naive animals, we could decode population responses from binocular and monocular stimulation at a rate higher than chance, indicating that all three representations principally contribute to stimulus decoding (Figure 4M; decoding performance in naive animals: Binocular 42.9±5.8%, Contralateral 49.5±5.0%, Ipsilateral 40.2±2.7% (Mean±SEM) vs. chance 12.5%).…”
Section: Binocular Stimulation Yields a Third Orientation Representation
confidence: 99%
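The decoder described in the quote assigns each trial the orientation whose trial-averaged population template best matches the trial's activity pattern. Below is a small Python sketch of that idea; using Pearson correlation as the match score, and the function and variable names, are my assumptions rather than the exact procedure of Montijn et al. (2014).

```python
import numpy as np

def template_matching_decode(train_resp, train_labels, test_resp):
    """Decode stimulus identity by matching each test population vector
    to the closest trial-averaged template (highest correlation).

    train_resp  : (n_train_trials, n_cells) responses used to build templates
    train_labels: (n_train_trials,) stimulus label per training trial
    test_resp   : (n_test_trials, n_cells) trials to decode
    """
    train_labels = np.asarray(train_labels)
    stimuli = np.unique(train_labels)
    templates = np.vstack([train_resp[train_labels == s].mean(axis=0)
                           for s in stimuli])             # (n_stim, n_cells)
    # Pearson correlation between every test trial and every template
    t = templates - templates.mean(axis=1, keepdims=True)
    x = test_resp - test_resp.mean(axis=1, keepdims=True)
    corr = (x @ t.T) / (np.linalg.norm(x, axis=1, keepdims=True)
                        * np.linalg.norm(t, axis=1))
    return stimuli[np.argmax(corr, axis=1)]
```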
“…To decode the stimulus orientation from the population activity vectors recorded during each trial, we used a normalized template-matching algorithm (Montijn et al., 2014). To base the decoding on the relative responsiveness of each cell, the trial-evoked ΔF/F0 responses of each cell were first individually z-scored: z_{θ,n,i} = (R_{θ,n,i} − μ_i) / σ_i, where i indexes the neuron (of N total), θ is the stimulus orientation, n is the trial number, z_{θ,n,i} is the z-scored response, R_{θ,n,i} is the actual ΔF/F0 response, μ_i is the mean ΔF/F0 value over the monocular or binocular recording period, and σ_i is the standard deviation from the mean ΔF/F0 value.…”
Section: Two-photon Imaging
confidence: 99%
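The z-scoring step quoted above can be written compactly in code. In this sketch the per-cell μ and σ are either supplied (as in the quoted method, where they are computed over the monocular or binocular recording period) or, as a fallback assumption, estimated from the trials themselves; the names are illustrative.

```python
import numpy as np

def zscore_per_cell(dff, mean=None, std=None):
    """Z-score each cell's ΔF/F0 responses before template matching.

    dff : (n_trials, n_cells) trial-evoked ΔF/F0 responses
    mean, std : optional per-cell statistics computed over the full
        recording period; if omitted, they are estimated from `dff`.
    """
    mu = dff.mean(axis=0) if mean is None else mean
    sigma = dff.std(axis=0, ddof=1) if std is None else std
    return (dff - mu) / sigma

# The z-scored trials can then be passed to a template-matching decoder,
# e.g. the template_matching_decode sketch above:
#   template_matching_decode(zscore_per_cell(train), labels,
#                            zscore_per_cell(test))
```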
“…However, sophisticated analysis techniques are required to analyse such large numbers of neurons and their interactions. One measure that lends itself to comparing neural data under different conditions is the simultaneous firing configuration 11,12 of discretised and binarised individual spike times of each channel or neuron, henceforth termed patterns. These patterns may have different probabilities of occurring depending on what process or context generated them.…”
Section: Introduction
confidence: 99%
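The quoted passage describes discretising and binarising spike times into simultaneous firing "patterns" and asking how probable each pattern is. Here is a minimal sketch of that estimation, assuming a fixed bin width (10 ms here) and that a bin is 1 if the cell fired at least once in it; the function name, parameters, and bin width are illustrative, not taken from the citing paper.

```python
import numpy as np
from collections import Counter

def pattern_probabilities(spike_times, t_start, t_stop, bin_size=0.01):
    """Estimate empirical probabilities of simultaneous firing patterns.

    spike_times : list of 1-D arrays, spike times (s) of each cell/channel
    bin_size    : width of the discretisation bins in seconds
    Returns a dict mapping binary pattern tuples to their probability.
    """
    edges = np.arange(t_start, t_stop + bin_size, bin_size)
    # binarise: 1 if the cell fired at least once in the bin, else 0
    binary = np.vstack([(np.histogram(st, bins=edges)[0] > 0).astype(int)
                        for st in spike_times])           # (n_cells, n_bins)
    counts = Counter(map(tuple, binary.T))                 # one pattern per bin
    n_bins = binary.shape[1]
    return {pattern: c / n_bins for pattern, c in counts.items()}
```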