2019
DOI: 10.1371/journal.pcbi.1007370
Constrained inference in sparse coding reproduces contextual effects and predicts laminar neural dynamics

Abstract: When probed with complex stimuli that extend beyond their classical receptive field, neurons in primary visual cortex display complex and non-linear response characteristics. Sparse coding models reproduce some of the observed contextual effects, but still fail to provide a satisfactory explanation in terms of realistic neural structures and cortical mechanisms, since the connection scheme they propose consists only of interactions among neurons with overlapping input fields. Here we propose an extended genera…

Cited by 7 publications (4 citation statements). References 78 publications (143 reference statements).
“…Garrigues and Olshausen (2008) achieve this by including a pairwise coupling term in the prior for the sparse coding model. A recent study (Capparelli et al, 2019) achieves this by explicitly including spatial dependencies among dictionary elements with non-overlapping RFs into the sparse coding framework.…”
Section: Relation To Previous Work
confidence: 99%
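The coupling idea described in the statement above — standard sparse coding augmented with pairwise interactions among dictionary coefficients — can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the ISTA-style solver, and the additive form of the coupling term `W` are assumptions made purely for illustration.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code_ista(D, x, lam=0.1, W=None, n_iter=200):
    """Infer sparse coefficients a for signal x under dictionary D.

    Without W this is plain ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1.
    W (if given) adds a hypothetical pairwise coupling term -0.5*a^T W a to
    the objective, i.e. lateral excitation between linked dictionary atoms,
    in the spirit of a pairwise prior over coefficients.
    """
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the data-term gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)   # gradient of the reconstruction term
        if W is not None:
            grad -= W @ a          # coupling: active atoms lower linked atoms' cost
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

With an identity dictionary the solver recovers the input shrunk by `lam`, the textbook L1 behaviour; a small symmetric `W` biases inference toward co-activating coupled atoms, which is the mechanism the cited studies use to link units with non-overlapping receptive fields.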
“…Sparse coding argues that the brain is optimized to represent stimuli efficiently such that only a small number of neurons are strongly activated at a given time. When trained on static images, sparse coding models have been shown to replicate the like-for-like connectivity pattern among units with similar orientation tuning 40 . Where motion has been included in these models, they have also been shown to capture the asymmetry in excitatory and inhibitory inputs for direction tuning 41 .…”
Section: Comparison To Other Normative Models
confidence: 99%
“…It overcomes the information loss caused by repeated convolutions in the high-level convolutional network, which causes small-target information in the high-level features to be severely lost and degrades detection performance on small targets. Meanwhile, because the underlying features are added, the bottom-layer features carry rich orientation information, which helps the angle prediction (Capparelli et al, 2019; Togaçar et al, 2020). Based on the original VGG-16 architecture, the 3rd, 4th, and 5th convolutional layers of VGG-16 are fused so that the model achieves robust detection of smaller bottle objects on the conveyor belt; meanwhile, the information available for the object's angle prediction is increased by introducing low-level neural responses, thereby reducing the angle-prediction error.…”
Section: Improvements In the Target Classification Algorithm
confidence: 99%
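The layer-fusion scheme in the statement above (combining VGG-16 conv3/conv4/conv5 feature maps so that low-level detail survives into the detection head) can be sketched as follows. This is a minimal numpy illustration, not the cited paper's code: the function names, nearest-neighbour upsampling, and channel concatenation are assumptions standing in for whatever fusion operator the authors actually use.

```python
import numpy as np

def upsample_nn(feat, factor):
    """Nearest-neighbour upsampling of a (C, H, W) feature map by an integer factor."""
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_features(f3, f4, f5):
    """Fuse three VGG-style feature maps of shape (C, H, W).

    The deeper maps (f4, f5) have lower spatial resolution, so they are
    upsampled to f3's resolution and the three are concatenated along the
    channel axis -- preserving low-level orientation detail alongside
    high-level semantics.
    """
    h = f3.shape[1]
    f4_up = upsample_nn(f4, h // f4.shape[1])
    f5_up = upsample_nn(f5, h // f5.shape[1])
    return np.concatenate([f3, f4_up, f5_up], axis=0)
```

For maps of shapes (2, 8, 8), (4, 4, 4), and (8, 2, 2), the fused output has shape (14, 8, 8): full conv3 resolution with all channels stacked.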