“What-Where” sparse distributed invariant representations of visual patterns
2022
DOI: 10.1007/s00521-021-06759-0

Cited by 4 publications (3 citation statements)
References 21 publications
“…This capacity is only valid for sparse, equally distributed codes [18]. How to efficiently generate binary sparse codes of visual patterns or other data structures is described in [39, 40, 41]. For example, real vector patterns have to be binarized.…”
Section: Lernmatrix (mentioning)
confidence: 99%
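The binarization step mentioned in this statement can be pictured with a k-winners-take-all rule: keep only the k largest components of a real-valued pattern, so every resulting code has the same small number of active units. The function below is a rough, hypothetical sketch of that idea only; the actual encoding prescriptions referenced in [39, 40, 41] are more elaborate.

```python
# Rough sketch only: map a real-valued pattern to a sparse binary code by
# keeping the k largest components (k-winners-take-all), so every code has
# exactly k active units. Not the exact prescription of [39, 40, 41].
import numpy as np

def sparse_binarize(x, k):
    """Return a binary vector with exactly k ones at the largest entries of x."""
    x = np.asarray(x, dtype=float)
    code = np.zeros(x.shape, dtype=np.uint8)
    code[np.argsort(x)[-k:]] = 1   # switch on the k strongest components
    return code

# Example: a 16-dimensional real pattern becomes a code with 3 active bits.
rng = np.random.default_rng(0)
print(sparse_binarize(rng.normal(size=16), k=3))
```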
“…In previous work (Sa-Couto and Wichert 2019, 2022), using visual-cortex-based principles we proposed the “What-Where” encoder. This network can be trained with Hebbian rules without using labels, and the embeddings it generates have been shown to work extremely well for both classification and associative memory tasks (Sa-Couto and Wichert 2020, 2021).…”
Section: Introduction (mentioning)
confidence: 99%
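For readers unfamiliar with the training scheme mentioned above, the sketch below shows what a label-free Hebbian update looks like in its simplest, normalized form. It only illustrates the rule class; the actual “What-Where” encoder combines such rules with visual-cortex-inspired structure that is not reproduced here, and the function name is my own.

```python
# Minimal sketch of an unsupervised (label-free) Hebbian update; illustrative
# only, not the "What-Where" encoder itself.
import numpy as np

def hebbian_step(W, x, lr=0.01):
    """Strengthen weights between co-active units, then renormalize rows."""
    y = W @ x                            # unit activations for input x
    W = W + lr * np.outer(y, x)          # Hebb: co-activity increases weights
    return W / np.linalg.norm(W, axis=1, keepdims=True)

# Example: adapt a 4-unit layer to a stream of random 16-dimensional inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))
for x in rng.normal(size=(200, 16)):
    W = hebbian_step(W, x)
```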
“…Willshaw's model of associative memory is a likely candidate for a computational model of this brain function, but its application to real-world data is hindered by the so-called Sparse Coding Problem. Thanks to a recently proposed sparse encoding prescription [31], which maps visual patterns into binary feature maps, we were able to analyze the behavior of the Willshaw Network (WN) on real-world data and gain key insights into the strengths of the model. To further enhance the capabilities of the WN, we propose the Multiple-Modality architecture.…”
(mentioning)
confidence: 99%
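For context on this last statement: a Willshaw Network stores associations between sparse binary patterns in a binary weight matrix using a clipped Hebbian rule and recalls them by thresholding. The class below is a minimal sketch of that classic model in its usual textbook formulation; it does not reproduce the Multiple-Modality architecture proposed by the citing paper.

```python
# Minimal sketch of a classic Willshaw binary associative memory: it stores
# sparse binary pattern pairs with a clipped Hebbian rule and recalls by
# thresholding. Not the Multiple-Modality architecture of the citing paper.
import numpy as np

class WillshawNetwork:
    def __init__(self, n_in, n_out):
        self.W = np.zeros((n_out, n_in), dtype=int)   # binary weight matrix

    def store(self, x, y):
        # Clipped Hebb rule: a weight switches on (and stays on) whenever the
        # corresponding input and output units are active together.
        self.W = np.maximum(self.W, np.outer(y, x))

    def recall(self, x):
        # An output unit fires if it receives input from every active cue bit.
        return (self.W @ x >= x.sum()).astype(int)

# Example: store one sparse pair and retrieve it from its cue.
x = np.array([1, 0, 0, 1, 0, 0, 0, 1])
y = np.array([0, 1, 0, 0, 1, 0])
net = WillshawNetwork(n_in=8, n_out=6)
net.store(x, y)
print(net.recall(x))   # -> [0 1 0 0 1 0]
```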