2022
DOI: 10.1101/2022.02.22.481380
Preprint

Associative memory of structured knowledge

Abstract: A long standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable for computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may rep…
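The abstract describes each knowledge structure as a set of binary relations between events and attributes held in a fixed-length distributed representation. The paper's exact encoding is not reproduced in this snippet, so the sketch below is only a minimal illustration, assuming random bipolar (+1/-1) codevectors, Hadamard-product binding, and superposition; the attribute and event names are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024  # codevector dimension (arbitrary; crosstalk noise shrinks relative to N as N grows)

def random_code():
    """Random bipolar (+1/-1) codevector."""
    return rng.choice([-1, 1], size=N)

# Hypothetical attribute and event codebooks (names are illustrative only).
attributes = {name: random_code() for name in ["subject", "verb", "object"]}
events     = {name: random_code() for name in ["dog", "chase", "cat"]}

def bind(x, y):
    """Hadamard-product binding; for bipolar vectors it is its own inverse."""
    return x * y

# One knowledge structure = superposition of bound attribute-event relations.
relations = [("subject", "dog"), ("verb", "chase"), ("object", "cat")]
structure = np.sum([bind(attributes[a], events[e]) for a, e in relations], axis=0)

# Recall: unbind with an attribute code and match against the event codebook.
query = bind(structure, attributes["verb"])
scores = {name: int(query @ vec) for name, vec in events.items()}
print(max(scores, key=scores.get))  # expected: "chase"
```

In this toy setup the correct event contributes a score of order N, while crosstalk from the other stored relations is only of order sqrt(N), which is why the nearest codebook match recovers the bound event.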

Cited by 5 publications (4 citation statements)
References 72 publications (128 reference statements)
“…Although the idea that the superposition vector shall be used as a working memory while the Sparse Distributed Memory is suitable to implement the long-term memory has been expressed previously, e.g., by Emruli et al (2015), no studies have been done to quantitatively compare these two alternatives. Also in a vein similar to this study, Steinberg and Sompolinsky (2022) examined how sets of key-value pairs represented with HD computing can be stored in associative memories using a Hopfield network. In Steinberg and Sompolinsky, however, the main focus was on the aspect of using HD computing for flexibly forming fixed-length distributed representations.…”
Section: Discussion (General Discussion)
Citation type: mentioning
confidence: 99%
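The statement above concerns storing sets of key-value pairs, represented with HD computing, in an associative (Hopfield) long-term memory. As a rough illustration only, the following sketch stores a few composite vectors with the standard Hebbian outer-product rule and retrieves one from a corrupted cue; the dimension, load, and noise level are arbitrary assumptions, not values from either cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 500, 5  # dimension and number of stored composites (arbitrary assumptions)

# Stand-ins for P composite vectors (e.g. superpositions of bound pairs), sign-thresholded to +/-1.
patterns = np.where(rng.standard_normal((P, N)) >= 0, 1.0, -1.0)

# Standard Hebbian (outer-product) Hopfield weights with zero self-coupling.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Run synchronous sign dynamics from a noisy cue toward a stored fixed point."""
    s = np.where(cue >= 0, 1.0, -1.0)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

# Flip 15% of one stored pattern and check that the network cleans it up.
cue = patterns[0].copy()
flip = rng.choice(N, size=int(0.15 * N), replace=False)
cue[flip] *= -1
print(np.mean(recall(cue) == patterns[0]))  # close to 1.0 at this low load
```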
“…However, we can approximately estimate P_correct under N ≫ 1 for any quadratic binding methods satisfying Eq. 68 in a similar manner to previous works (Murdock, 1982; Plate, 1995; Steinberg and Sompolinsky, 2022). We first normalize variables…”
Section: A4 Decoding With a Dictionary
Citation type: mentioning
confidence: 99%
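The quoted passage estimates the probability of correct decoding with a dictionary for quadratic binding methods; its Eq. 68 and normalization step belong to that paper and are not reproduced here. The sketch below only illustrates the dictionary ("cleanup") decoding setup empirically, assuming Hadamard-product binding of bipolar codevectors; the sizes N, D, and L are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, L = 1000, 50, 8   # vector dimension, dictionary size, number of bound pairs (assumed)

keys  = rng.choice([-1, 1], size=(L, N))
items = rng.choice([-1, 1], size=(D, N))   # the decoding dictionary (codebook)
idx   = rng.integers(0, D, size=L)         # which dictionary item each key is bound to
values = items[idx]

# Composite vector: superposition of Hadamard-bound key-value pairs.
composite = np.sum(keys * values, axis=0)

def decode(key):
    """Unbind with the key, then return the index of the dictionary entry
    with the largest normalized inner product (nearest-neighbor cleanup)."""
    query = composite * key
    return int(np.argmax(items @ query / N))

# Empirical P_correct over all L queries; the cited work approximates this analytically for large N.
p_correct = np.mean([decode(k) == t for k, t in zip(keys, idx)])
print(p_correct)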
“…On the other hand, the tensor product representation is more accurate, but it requires N_c = N^2 neurons for representing a composition (see Appendix C.1 and C.2 for the details of the two binding methods). Though their properties have been studied previously (Plate, 1997; Schlegel et al., 2020; Steinberg and Sompolinsky, 2022), it remains elusive if HRR and the tensor product representation are the optimal binding under N_c = N and N_c = N^2, respectively. Moreover, little is known on how we should construct a binding operator under various composition sizes N_c and how the minimum achievable error scales with the number of bound pairs L. Below, we address these questions under a quadratic parameterization of the binding operators.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
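The passage contrasts holographic reduced representations (HRR), which keep a composite at N_c = N components, with the tensor product representation, which needs N_c = N^2. The sketch below only illustrates those two textbook binding/unbinding operations and their sizes; it is not the optimal quadratic binding the citing paper goes on to derive, and N = 256 is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256  # base dimension (arbitrary)

# HRR conventionally draws i.i.d. entries with variance 1/N.
a = rng.standard_normal(N) / np.sqrt(N)
b = rng.standard_normal(N) / np.sqrt(N)

# HRR: circular convolution keeps the composite at N_c = N components.
hrr = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Tensor product representation: the outer product needs N_c = N^2 components.
tpr = np.outer(a, b).ravel()
print(hrr.shape, tpr.shape)  # (256,) vs (65536,)

# Unbinding b from each composite, given a:
# HRR uses circular correlation (convolution with the involution of a) -> noisy copy of b.
a_inv = np.concatenate(([a[0]], a[:0:-1]))
b_hrr = np.real(np.fft.ifft(np.fft.fft(a_inv) * np.fft.fft(hrr)))
# TPR contracts the N x N matrix with a -> exact copy of b (up to ||a||^2 scaling).
b_tpr = tpr.reshape(N, N).T @ a / (a @ a)

print(np.corrcoef(b, b_hrr)[0, 1], np.allclose(b, b_tpr))  # high but < 1, vs. exact
```

The trade-off visible here is the one the quoted passage raises: HRR stays compact but recovers only a noisy estimate that must be cleaned up against a codebook, while the tensor product recovers the bound item exactly at a quadratic cost in neurons.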
“…The same applies to the use of such memory for codevector representations of sequences and structures (but see [16]). These topics are a promising direction for further research [14,16,23].…”
Section: The Associative-projective Neural Network
Citation type: mentioning
confidence: 99%