2020
DOI: 10.1016/j.jml.2020.104153

Indirect associations in learning semantic and syntactic lexical relationships

Abstract: Computational models of distributional semantics (a.k.a. word embeddings) represent a word's meaning in terms of its relationships with all other words. We examine what grammatical information is encoded in distributional models and investigate the role of indirect associations. Distributional models are sensitive to associations between words at one degree of separation, such as 'tiger' and 'stripes', or two degrees of separation, such as 'soar' and 'fly'. By recursively adding higher levels of representation…

Cited by 11 publications (11 citation statements) · References: 50 publications
“…We choose MINERVA 2 since it captures a wide variety of human memory phenomena across differing settings and, as such, seems a good candidate for integration into a cognitive architecture. MINERVA 2 has been applied to a variety of experimental paradigms, including judgement of frequency and recognition [15], category learning [16], implicit learning [18,19], associative and reinforcement learning phenomena from both the animal and human learning literature [7,17], heuristics and biases in decision-making [8], hypothesis-generation [50], learning the meaning of words [20,31], and the production of grammatical sentences [22,24].…”
Section: Memory
confidence: 99%
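The passage above lists applications of MINERVA 2 without spelling out its mechanism. As background, the following NumPy sketch (our own illustration, not code from any of the cited papers; the trace count and dimensionality are arbitrary) shows the retrieval cycle those applications share: each experience is a stored trace, a probe activates every trace by the cube of its similarity, and the returned echo is the activation-weighted blend of all traces.

import numpy as np

rng = np.random.default_rng(0)
n_traces, dim = 1_000, 256                                     # illustrative sizes only
memory = rng.choice([-1.0, 0.0, 1.0], size=(n_traces, dim))    # episodic trace store

def echo(probe, memory):
    """Return MINERVA 2's echo intensity and echo content for a probe."""
    # N_i: number of features that are nonzero in the probe or in trace i
    n_relevant = np.maximum(((memory != 0) | (probe != 0)).sum(axis=1), 1)
    sims = (memory @ probe) / n_relevant          # similarity of probe to each trace
    acts = sims ** 3                              # cubed similarity = activation
    intensity = acts.sum()                        # echo intensity (familiarity signal)
    content = acts @ memory                       # echo content (blended recollection)
    return intensity, content

probe = memory[42] * (rng.random(dim) < 0.7)      # partial cue: a trace with ~30% of features zeroed
intensity, content = echo(probe, memory)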
“…The continuous growth of the memory table results in a scaling problem for CogNGen, with significant slowdowns even in the small maze-learning tasks under consideration in this paper. Most MINERVA 2 models store only a small number of memory traces, though a few MINERVA 2 models used for language processing have stored from 20,000 [20] up to 500,000 traces [21,24]. With a persistent long-term memory store across learning the maze task, in the worst case, as many as millions of traces might be stored in CogNGen's memory.…”
Section: Adding To Memory
confidence: 99%
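To make the scaling concern above concrete, here is a back-of-the-envelope sketch (our own illustrative numbers; the 1024-dimensional, 8-bytes-per-value assumptions are not from the cited work): a flat trace table costs memory and retrieval work proportional to the number of traces times their dimensionality.

def trace_table_cost(n_traces, dim=1024, bytes_per_value=8):
    # assumed dimensionality and precision, for illustration only
    gigabytes = n_traces * dim * bytes_per_value / 1e9
    flops_per_probe = 2 * n_traces * dim          # one dot product per stored trace
    return gigabytes, flops_per_probe

for n in (20_000, 500_000, 5_000_000):
    gb, flops = trace_table_cost(n)
    print(f"{n:>9,} traces: ~{gb:.1f} GB, ~{flops / 1e9:.1f} GFLOPs per retrieval")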
“…Each holographic vector represents a distinct concept, collectively serving as the basis vectors for the agent's conceptual space [20]. Our model accounts for human performance on a wide range of tasks, including recall, probability judgement, and decision-making [13], as well as how humans learn the meaning and part-of-speech of words from experience [14].…”
Section: Declarative Memory
confidence: 99%
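A minimal sketch of the "basis vectors for the conceptual space" idea referred to above (assumed details in the style of BEAGLE-like environment vectors, not the authors' implementation): each concept is assigned a random high-dimensional vector, and because such vectors are nearly orthogonal, unrelated concepts start out with roughly zero similarity.

import numpy as np

rng = np.random.default_rng(1)
DIM = 1024                                        # assumed dimensionality

def new_concept_vector(dim=DIM):
    # environment vector: i.i.d. Gaussian with variance 1/dim (BEAGLE-style)
    return rng.normal(0.0, 1.0 / np.sqrt(dim), dim)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

tiger, stripes = new_concept_vector(), new_concept_vector()
print(round(cosine(tiger, stripes), 3))           # near 0: quasi-orthogonal basis vectors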
“…The question-based encoding used by HDM allows the model to be structured around the atomic items of experience (values or concepts) rather than the experiences themselves (chunks or episodes). The encoding technique used by HDM has proven effective as a method of modelling the semantic (Jones & Mewhort, 2007) and syntactic (Kelly, Ghafurian, West, & Reitter, 2020) knowledge stored in the mental lexicon, but here we explore its utility as a general-purpose scheme for declarative memory.…”
Section: Add a Chunk With Slots
confidence: 99%
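As an illustration of what a "question-based" encoding over atomic concepts can look like, the sketch below uses standard holographic-reduced-representation operations (circular convolution to bind, circular correlation to unbind). It is a generic sketch of the binding machinery, not HDM's exact encoding scheme, and the slot and filler names are invented for the example.

import numpy as np

rng = np.random.default_rng(2)
DIM = 1024
vec = lambda: rng.normal(0.0, 1.0 / np.sqrt(DIM), DIM)

def bind(a, b):
    # circular convolution, computed in the Fourier domain
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def probe(trace, cue):
    # circular correlation: approximate inverse of binding
    return np.real(np.fft.ifft(np.conj(np.fft.fft(cue)) * np.fft.fft(trace)))

# atomic concept vectors: the "values" the encoding is organised around
slots  = {name: vec() for name in ("isa", "colour")}
values = {name: vec() for name in ("dog", "cat", "brown")}

# a chunk is stored as a superposition of slot-value bindings
chunk = bind(slots["isa"], values["dog"]) + bind(slots["colour"], values["brown"])

# "question" the chunk with a slot and identify the closest known filler
guess = probe(chunk, slots["isa"])
best = max(values, key=lambda name: values[name] @ guess)
print(best)                                       # expected: "dog"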
“…The Hierarchical Holographic Model, a recursive variant of BEAGLE with multiple levels of representations, is able to learn arbitrarily abstract relationships (Kelly et al., 2020). Sensitivity to abstract relationships is useful for capturing syntactic similarity between words, for ordering words into grammatical sentences, and for distinguishing between grammatical and ungrammatical word orderings, even in the case of nonsensical sentences that lack semantics.…”
Section: Hierarchical Holographic Model
confidence: 99%
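A schematic sketch of the recursive-levels idea described above (our simplification for illustration, not the published implementation of the Hierarchical Holographic Model; the toy corpus and dimensionality are invented): level 0 assigns random environment vectors, a simple context-learning pass produces first-order vectors, and recursing with those learned vectors as the next level's environment turns indirect associations into direct ones.

import numpy as np

rng = np.random.default_rng(3)

def learn_level(env, corpus):
    # one pass of simple context learning: each word's memory vector is the
    # sum of the environment vectors of the words it co-occurs with
    mem = {w: np.zeros_like(v) for w, v in env.items()}
    for sentence in corpus:
        for i, w in enumerate(sentence):
            for j, c in enumerate(sentence):
                if i != j:
                    mem[w] += env[c]
    return mem

def normalise(vectors):
    return {w: v / (np.linalg.norm(v) + 1e-12) for w, v in vectors.items()}

corpus = [["birds", "fly"], ["planes", "soar"], ["birds", "soar"]]   # toy corpus
dim = 512
env0 = {w: rng.normal(0, 1 / np.sqrt(dim), dim)
        for w in {w for s in corpus for w in s}}                     # level-0 environment vectors

level1 = normalise(learn_level(env0, corpus))     # first-order context vectors
level2 = normalise(learn_level(level1, corpus))   # recurse: level-1 vectors become the new environment
print(round(level1["fly"] @ level1["soar"], 2),
      round(level2["fly"] @ level2["soar"], 2))   # 'fly'/'soar' similarity at each level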