2018
DOI: 10.1162/neco_a_01084

A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks

Abstract: To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits the retrieval accuracy and information …
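The mechanism the abstract describes can be illustrated concretely. Below is a minimal sketch, not the paper's exact model: symbols enter through random bipolar input weights, and an orthogonal recurrent map (here a cyclic permutation, one convenient orthogonal matrix) timestamps each item; retrieval undoes the permutation and matches against the codebook, with crosstalk noise making readout lossy for long sequences. All names and parameter values are illustrative.

```python
import numpy as np

# Minimal sketch of VSA-style sequence indexing in a recurrent memory
# (illustrative, not the paper's exact model). A cyclic permutation serves
# as the orthogonal recurrent weight matrix; the codebook plays the role
# of random bipolar input weights.
rng = np.random.default_rng(0)
d, n_symbols = 10_000, 27                            # vector dimension, alphabet size
codebook = rng.choice([-1, 1], size=(n_symbols, d))  # random bipolar input codes

def permute(x, k=1):
    """Apply the orthogonal recurrent map k times (cyclic shift)."""
    return np.roll(x, k)

def encode(seq):
    """Store a sequence: each recurrent step permutes the memory, so older
    items accumulate more permutations (trajectory association)."""
    m = np.zeros(d)
    for s in seq:                  # s indexes a symbol in the codebook
        m = permute(m) + codebook[s]
    return m

def decode(m, steps_back):
    """Retrieve the item seen `steps_back` steps ago by undoing the permutation
    and taking the nearest codebook vector; crosstalk makes this lossy."""
    probe = permute(m, -steps_back)
    return int(np.argmax(codebook @ probe))

seq = [3, 14, 15, 9, 2, 6]
m = encode(seq)
recovered = [decode(m, k) for k in range(len(seq) - 1, -1, -1)]
assert recovered == seq            # reliable here because d >> sequence length
```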

Cited by 83 publications (90 citation statements). References 44 publications.
“…They serve as seed HD vectors, and they are used to make representations for more complex objects. To generate seed HD vectors, we use bipolar dense codes of equally probable +1s and −1s, i.e., {−1, +1}^d where d = 10,000; this dimensionality works particularly well for our applications, but it is essentially a hyperparameter that can be tuned [42]. In the following, we describe the similarity measure and arithmetic operations using this code.…”
Section: Background in HD Computing
Mentioning confidence: 99%
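The excerpt names a similarity measure and arithmetic operations over this bipolar code without spelling them out. The sketch below fills them in with the standard bipolar-VSA choices (cosine similarity, elementwise multiplication for binding, sign-of-sum majority for bundling); these specific operations are our assumption, not a quote from the citing paper.

```python
import numpy as np

rng = np.random.default_rng(42)
d = 10_000  # dimensionality from the excerpt; essentially a tunable hyperparameter

def seed_hd_vector():
    """Seed HD vector: dense bipolar code with equally probable +1s and -1s."""
    return rng.choice([-1, 1], size=d)

def cosine_similarity(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Standard bipolar-VSA arithmetic (assumed; the excerpt only names the operations):
def bind(a, b):
    """Binding: elementwise multiplication; result is dissimilar to both inputs."""
    return a * b

def bundle(*vecs):
    """Bundling: elementwise majority (sign of sum); result stays similar to each input."""
    return np.sign(np.sum(vecs, axis=0))

x, y = seed_hd_vector(), seed_hd_vector()
print(cosine_similarity(x, y))             # ~0: random seeds are quasi-orthogonal
print(cosine_similarity(x, bundle(x, y)))  # ~0.7: a bundle remains similar to x
```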
“…For a long time it was believed that the main drawback of distributed representations was the inability to represent structure. Recently, however, DRs have been developed for complex structured objects, such as sequences [60], [61], [62], [63], [64] or graphs of situations (episodes), knowledge bases, etc., e.g., [65], [66], [7], [8], [3].…”
Section: Discussion
Mentioning confidence: 99%
“…Note that distributed representations are closely related to associative memory, e.g., [49, 50], as well as to human memory [51].…”
Section: Regularization Methods for Solving DIP
Mentioning confidence: 99%