2019
DOI: 10.31234/osf.io/ryvg2
Preprint

Holographic Declarative Memory: Distributional semantics as the architecture of memory

Abstract: We demonstrate that the key components of cognitive architectures—declarative and procedural memory—and their key capabilities—learning, memory retrieval, judgement, and decision-making—can be implemented as algebraic operations on vectors in a high-dimensional space. High-dimensional vector spaces underlie the success of modern machine learning techniques based on neural networks and deep learning. However, while neural networks have an impressive ability to process data to find patterns, they do not typicall…
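
The abstract does not spell out the algebra, but holographic memories in the BEAGLE/HDM family typically bind and superpose random high-dimensional vectors, with circular convolution as the binding operation. The sketch below is only an illustration under that assumption; the symbols, roles, and dimensionality are made up for the example and are not the paper's implementation:

```python
import numpy as np

dim = 1024                      # illustrative dimensionality
rng = np.random.default_rng(0)

def rand_vec():
    # Random "environmental" vector; variance 1/dim keeps expected length ~1.
    return rng.normal(0.0, 1.0 / np.sqrt(dim), dim)

def bind(a, b):
    # Circular convolution, computed in the Fourier domain.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, cue):
    # Circular correlation: approximate inverse of binding.
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(cue))))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Atomic symbols and roles are random vectors (hypothetical examples).
dog, cat, agent, patient = rand_vec(), rand_vec(), rand_vec(), rand_vec()

# Learning an episode = adding role-filler bindings into one trace vector.
trace = bind(agent, dog) + bind(patient, cat)

# Retrieval = unbinding with a cue, then matching against known symbols.
probe = unbind(trace, agent)
print(cosine(probe, dog), cosine(probe, cat))  # high for dog, near zero for cat
```

In this reading, judgement and decision-making reduce to comparing such similarity scores, which is the sense in which memory operations become vector algebra.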

Cited by 10 publications (14 citation statements) | References 50 publications
“…While BEAGLE is a model of the mental lexicon, Dynamically Structured Holographic Memory (Rutledge-Taylor et al, 2014) is a variant of BEAGLE applied to non-linguistic memory and learning tasks, such as learning sequences of actions for strategic game play. Kelly et al (2015) and Kelly and Reitter (2017) propose another BEAGLE variant, Holographic…”
Section: Applications of BEAGLE
confidence: 99%
“…Thus modelling language acquisition forces us to adopt a cognitive architecture that uses a lossy memory model capable of generalization. Possible candidates include vector symbolic memory, which can replicate the behaviour of the ACT-R declarative memory but is scalable to language learning [19], and neural network models.…”
Section: Language Acquisition
confidence: 99%
“…The choice between high-dimensional neural models of memory versus ACT-R leaves us with a trade-off between (in the case of neural models) a black box that fits large-scale data better and (in the case of ACT-R) transparent representations that can be interpreted by the scientist but account for large-scale language data less well. Models that use vector-symbolic architectures [12], such as BEAGLE [17,20], provide a middle ground of systems that are more interpretable than conventional neural networks and more scalable than traditional symbolic models like ACT-R [19].…”
Section: Language Production
confidence: 99%
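
The "middle ground" described in the statement above is usually cashed out by the fact that any vector-symbolic memory state can be read off by comparing it against a lexicon of named vectors (a clean-up memory). A minimal, self-contained sketch, with a hypothetical four-word vocabulary chosen only for illustration:

```python
import numpy as np

dim = 1024
rng = np.random.default_rng(1)

# Hypothetical lexicon of named random vectors (a "clean-up memory").
lexicon = {w: rng.normal(0.0, 1.0 / np.sqrt(dim), dim)
           for w in ["dog", "cat", "chase", "sleep"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def interpret(query, lexicon):
    # Rank known symbols by similarity to a (possibly noisy) memory vector.
    return sorted(((cosine(query, v), w) for w, v in lexicon.items()),
                  reverse=True)

# A noisy probe, e.g. the output of unbinding a memory trace.
probe = lexicon["dog"] + 0.8 * rng.normal(0.0, 1.0 / np.sqrt(dim), dim)
print(interpret(probe, lexicon))  # "dog" should rank first
```

Because intermediate states can be inspected this way, the representations remain more interpretable than a conventional network's hidden layers while still scaling to large vocabularies.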
“…We propose a cognitive architecture that is built on two biologically plausible neural models, namely a variant of predictive processing known as Neural Generative Coding (NGC) [27] and holographic memory [13]. Desirably, the use of these particular building blocks yields naturally scalable, local update rules (based on variants of Hebbian learning [11]) to adjust the overall system's synaptic weight parameters while facilitating robustness in acquiring, storing, and composing distributed representations of tasks encountered sequentially.…”
Section: Introduction
confidence: 99%
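
The statement above only says the update rules are local and Hebbian; the exact rule used by the cited architecture is not given here. As a generic illustration, one common form is a Hebbian step with Oja-style decay (all names and sizes below are placeholders):

```python
import numpy as np

def hebbian_step(W, pre, post, lr=0.01):
    """One local update: strengthen weights between co-active units.

    W    : (n_post, n_pre) weight matrix
    pre  : (n_pre,)  presynaptic activity
    post : (n_post,) postsynaptic activity
    Only locally available quantities are used, unlike backpropagation.
    """
    # Oja-style decay term keeps the weights from growing without bound.
    return W + lr * (np.outer(post, pre) - (post ** 2)[:, None] * W)

rng = np.random.default_rng(2)
W = rng.normal(0.0, 0.1, (4, 8))   # placeholder layer sizes
pre = rng.random(8)
post = W @ pre
W = hebbian_step(W, pre, post)
```

Locality is what makes such rules attractive for the scalability claim: each weight change depends only on the activities of the two units it connects and the weight itself.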