2004
DOI: 10.1385/ni:2:3:275
Scaling Self-Organizing Maps to Model Large Cortical Networks

Abstract: Self-organizing computational models with specific intracortical connections can explain many functional features of visual cortex, such as topographic orientation and ocular dominance maps. However, due to their computational requirements, it is difficult to use such detailed models to study large-scale phenomena like object segmentation and binding, object recognition, tilt illusions, optic flow, and fovea-periphery differences. This article introduces two techniques that make large simulations practical. F…
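The self-organizing map framework the abstract refers to can be illustrated with a minimal Kohonen SOM training loop. This is an illustrative sketch of the generic algorithm, not the paper's own (LISSOM-style) cortical model; the function name, parameters, and decay schedules below are assumptions chosen for brevity:

```python
import numpy as np

def train_som(data, grid_shape=(10, 10), n_iters=1000,
              lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal Kohonen self-organizing map.

    data: (n_samples, dim) array of input vectors.
    Returns weights of shape (rows, cols, dim).
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    # Grid coordinates, used for the lateral neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the neuron whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), (rows, cols))
        # Exponentially decaying learning rate and neighborhood radius.
        lr = lr0 * np.exp(-t / n_iters)
        sigma = sigma0 * np.exp(-t / n_iters)
        # Gaussian neighborhood on the grid, centered on the BMU.
        grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_d2 / (2 * sigma ** 2))
        # Move each neuron's weights toward x, scaled by its neighborhood value.
        weights += lr * h[..., None] * (x - weights)
    return weights
```

The computational cost the abstract refers to comes from the per-step search over all units and the dense neighborhood update; the paper's scaling techniques target exactly this kind of expense in much larger, laterally connected networks.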

Cited by 24 publications (19 citation statements). References 26 publications.
“…On the one hand, the model neurons defined in our approach represent an ensemble of natural cortical neurons, which are spatially and functionally closely connected. Thus, each model neuron may represent, for example, a cortical column (this concept is mainly used in vision, e.g., by Obermayer et al, 1990 and Bednar et al, 2004). On the other hand, the auditory and the semantic maps are connected by simple associative links rather than connectionist activity propagation such as Hebbian learning.…”
Section: Discussion
confidence: 99%
“…Point‐to‐point accuracy in these maps is achieved through an activity‐dependent mechanism in which synapses that are effective in depolarizing postsynaptic cells are stabilized while those that are ineffective are retracted. These maps are thought to be essential for functions that include depth perception, object recognition, reconstruction of a visual scene and visually guided behaviours (DeAngelis, 2000; Bednar et al ., 2004; Navalpakkam & Itti, 2005; Weber et al ., 2005).…”
Section: Introduction
confidence: 99%
“…This is due to the fact that there are many solutions to the same problem (note, for instance, that solutions are invariant up to a permutation of neurons' addresses). It is possible to decrease these degrees of freedom by including, for instance, topological links between filters (Bednar, Kelkar, & Miikkulainen, 2004). Qualitatively, the main difference between both results is that filters produced by aSSC look more diverse and broad (so that they often overlap), while the filters produced by SparseNet are more localized and thin.…”
Section: Receptive Field Formation
confidence: 95%