Efficient Temporal Coding in the Early Visual System: Existing Evidence and Future Directions

Price & Gavornik (2022). DOI: 10.3389/fncom.2022.929348

Abstract: While it is universally accepted that the brain makes predictions, there is little agreement about how this is accomplished and under which conditions. Accurate prediction requires neural circuits to learn and store spatiotemporal patterns observed in the natural environment, but it is not obvious how such information should be stored, or encoded. Information theory provides a mathematical formalism that can be used to measure the efficiency and utility of different coding schemes for data transfer and storage…
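The abstract frames coding efficiency in information-theoretic terms. As a rough illustration only (not an analysis from the paper), here is a minimal sketch of one such measure: the entropy of a discrete response distribution relative to the maximum entropy achievable with the same response alphabet.

```python
# Illustrative sketch, not the paper's analysis: efficiency of a discrete
# neural code measured as response entropy over maximum possible entropy.
import numpy as np

def coding_efficiency(response_counts):
    """response_counts: how often each response symbol occurred."""
    p = np.asarray(response_counts, dtype=float)
    p = p / p.sum()                                  # empirical probabilities
    nonzero = p[p > 0]                               # unused symbols add 0 bits
    entropy = -np.sum(nonzero * np.log2(nonzero))    # H(R) in bits
    max_entropy = np.log2(len(p))                    # uniform use of all symbols
    return entropy / max_entropy

# A code that uses all symbols equally often is maximally efficient (ratio 1).
print(coding_efficiency([50, 30, 15, 5]))   # ~0.82
print(coding_efficiency([25, 25, 25, 25]))  # 1.0
```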

Cited by 14 publications (12 citation statements)
References 163 publications
“…Consistent with principles of efficient coding (Price & Gavornik 2022), we found that training had the effect of creating relatively sparse response patterns. Using two different measures, we estimate that the training reduced the number of visually modulated cells by 20-30%.…”
Section: Discussion (supporting)
confidence: 81%
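The two measures used in this citing study are not named in the excerpt; as a point of reference only, here is a minimal sketch of one standard sparseness measure (Treves-Rolls population sparseness), which may differ from what those authors actually computed.

```python
# Illustrative sketch of Treves-Rolls population sparseness; not necessarily
# one of the two measures used in the citing study.
import numpy as np

def population_sparseness(rates):
    """rates: non-negative responses of N cells to one stimulus.
    Values near 1 mean activity is spread evenly across the population;
    smaller values mean sparser responses."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    return (r.sum() / n) ** 2 / (np.sum(r ** 2) / n + 1e-12)

# Activity concentrated in a few cells is sparser than the same total
# activity spread across the whole population.
print(population_sparseness([8, 0, 0, 0, 0, 0, 0, 0]))   # 0.125
print(population_sparseness([1, 1, 1, 1, 1, 1, 1, 1]))   # ~1.0
```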
“…Since information theory holds that codes gain efficiency by eliminating redundant information (see Price & Gavornik 2022 for a discussion of how this relates to predictive coding in the visual system), we also examined correlation coefficients between stimuli to determine if our day 5 activity was less correlated than day 0 activity. For each sequence presentation, we calculated the Pearson correlation coefficients between all pairs of stimuli, yielding a collection of coefficients for each sequence on each day (figure 3C).…”
Section: Results (mentioning)
confidence: 99%
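Below is a minimal sketch of the pairwise-correlation analysis described in this excerpt, under the assumption that each stimulus within a sequence presentation is summarized as a population response vector; the function name and array shapes are illustrative, not taken from the citing study.

```python
# Illustrative sketch: Pearson correlations between all pairs of stimuli
# within one sequence presentation.
import numpy as np
from itertools import combinations

def pairwise_stimulus_correlations(responses):
    """responses: (n_stimuli, n_cells) array, one population response vector
    per stimulus. Returns the Pearson r for every stimulus pair."""
    n_stimuli = responses.shape[0]
    return np.array([
        np.corrcoef(responses[i], responses[j])[0, 1]
        for i, j in combinations(range(n_stimuli), 2)
    ])

# Example: 4 stimuli, 100 cells. A drop in the mean pairwise correlation from
# day 0 to day 5 would indicate a less redundant (more decorrelated) code.
rng = np.random.default_rng(0)
day0_responses = rng.normal(size=(4, 100))
print(pairwise_stimulus_correlations(day0_responses).mean())
```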
“…However, there is a gap in mechanistic explanations, particularly regarding how structural alterations driven by neural plasticity impact brain dynamics at the meso- and macro-scale levels. Similarly, it is unclear how the specific brain dynamics associated with expertise can become more efficient [17][18][19], e.g., promoting more expert early and late processing of visual information [20,21]. Whole-brain computational models provide a way to explore structure-function relationships [22,23] by generating brain activity based on anatomical connections and the dynamics of individual brain regions.…”
Section: Fig (mentioning)
confidence: 99%
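As an illustration of the whole-brain computational models referenced here, the following is a minimal sketch using Kuramoto phase oscillators coupled through an anatomical connectivity matrix; the choice of local dynamics and all parameter values are assumptions for illustration, not the cited model.

```python
# Illustrative sketch of a whole-brain model: regional phase oscillators
# coupled through a structural connectivity matrix C.
import numpy as np

def simulate_kuramoto(C, omega, K=1.0, dt=0.01, steps=5000, seed=0):
    """C: (N, N) anatomical connectivity; omega: (N,) natural frequencies.
    Returns the simulated phase of each region over time."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=omega.size)   # initial phases
    history = np.empty((steps, omega.size))
    for t in range(steps):
        # each region is pulled toward the phases of its anatomical neighbours
        coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + K * coupling)
        history[t] = theta
    return history

# Example with a random 10-region "connectome"
N = 10
rng = np.random.default_rng(1)
C = rng.random((N, N))
np.fill_diagonal(C, 0.0)
activity = simulate_kuramoto(C, omega=np.full(N, 2.0 * np.pi))
```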
“…We explored this hypothesis by analyzing association maps obtained using Neurosynth [31], allowing us to generalize our results from video games to other cognitive domains. Lastly, our final hypothesis proposed that VGPs' connectomes exhibit a higher signal-to-noise ratio within the parieto-occipital loop [21,32,33], resulting in robustness to stimulation in noisy contexts because of videogame expertise [17]. We explored this hypothesis by applying in-silico external stimulation to homotopic pairs of occipital brain areas [34][35][36].…”
Section: Fig (mentioning)
confidence: 99%
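A minimal sketch of how the in-silico external stimulation of a homotopic pair of regions mentioned above could be injected into such a model; the target indices and drive amplitude here are purely hypothetical placeholders.

```python
# Illustrative sketch: an external drive applied to a chosen pair of regions
# (e.g., homotopic occipital areas) in a whole-brain model update step.
import numpy as np

def stimulation_drive(n_regions, targets=(4, 5), amplitude=0.5):
    """Per-region external input that is non-zero only at the stimulated
    regions; indices and amplitude are placeholders, not the cited protocol."""
    drive = np.zeros(n_regions)
    drive[list(targets)] = amplitude
    return drive

# Inside a model update step, the drive is simply added to each region's
# rate of change, e.g.:
#   theta = theta + dt * (omega + K * coupling + stimulation_drive(N))
```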
“…To achieve the ambitious objectives envisioned by cortical visual prostheses, we should be able to stimulate the occipital cortex in a way as similar as possible to the physiological response to visual stimuli, mimicking the human visual pathway ( Nirenberg and Pandarinath, 2012 ; Qiao et al, 2019 ; Brackbill et al, 2020 ; Li et al, 2022 ; Price and Gavornik, 2022 ). In this framework, we should consider that closed-loop circuits exist in just about every part of the nervous system ( Farkhondeh Tale Navi et al, 2022 ; Khodagholy et al, 2022 ).…”
Section: Introduction (mentioning)
confidence: 99%