2019
DOI: 10.31234/osf.io/ytnjp
Preprint

Indirect associations in learning semantic and syntactic lexical relationships

Abstract: Computational models of distributional semantics represent word meanings in terms of words' relationships with all other words in a corpus. Although distributional models are sensitive to topic (e.g., tiger and stripes) and synonymy (e.g., soar and fly), the models have limited sensitivity to part-of-speech (e.g., book and shirt are nouns). How lexical-syntactic knowledge is encoded and how it meshes with semantic representations are open questions. Word co-occurrence relationships define a connected graph suc…
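As a rough illustration of the co-occurrence relationships the abstract describes, the sketch below builds a weighted word graph from a toy corpus with a symmetric counting window. The corpus, window size, and function name are illustrative assumptions, not the paper's actual training setup.

```python
from collections import defaultdict

def cooccurrence_graph(sentences, window=2):
    """Count symmetric co-occurrences within +/- `window` words.

    The dict-of-dicts can be read as a weighted, undirected word graph:
    nodes are words, edge weights are co-occurrence counts.
    """
    graph = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    graph[word][tokens[j]] += 1
    return graph

# Toy corpus (illustrative only).
corpus = [["the", "tiger", "has", "stripes"],
          ["birds", "soar", "and", "fly"]]
g = cooccurrence_graph(corpus)
print(dict(g["tiger"]))  # {'the': 1, 'has': 1, 'stripes': 1}
```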

Cited by 10 publications (22 citation statements) | References 39 publications
“…Syntactic knowledge has rich stochastic ties to semantic representations, and can be seen as configural constraints that display a mix of regularities and exceptions. Distributed representations may well be a neurologically and psychologically plausible framework for syntactic knowledge, and it is a technically realistic candidate (Kelly et al., 2013, 2017). At lexical, syntactic, and morphological levels, the overlap in semantic space and joint compressibility of lexicons associated with two languages determine their mutual facilitation.…”
Section: Model: Core Components (mentioning)
confidence: 99%
“…The theoretical framework we propose allows for memory to be modelled at multiple levels of analysis, from the level of individual, biological neurons and messages passed between groups of those neurons (many-to-one models), to the level of the events processed by memory (one-to-one models), to the level of the concepts which emerge as aggregates across those events (many-to-many models), and on up, to the arbitrarily abstract concepts that emerge from aggregating across concepts (Kelly, Reitter, and West 2017). Furthermore, estimates of Bayesian probability arise from the vector algebra of many-to-many models (Kelly, Kwok, and West 2015).…”
Section: Results (mentioning)
confidence: 99%
“…Vectors representing items are typically randomly generated by the models, but ought to be generated according to semantic (Kelly, Reitter, and West 2017) or perceptual (Cox et al. 2011; Kelly, Blostein, and Mewhort 2013) features, which would require integrating memory and perception.…”
Section: Choice Of Representation Scheme (mentioning)
confidence: 99%
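A minimal sketch of the "randomly generated" item vectors this statement refers to, in the style of BEAGLE-like holographic models: components are drawn from a zero-mean Gaussian with variance 1/dim, so unrelated items are near-orthogonal in expectation. The dimensionality and helper name are illustrative assumptions.

```python
import numpy as np

def random_item_vector(dim=1024, rng=None):
    """Random environment vector for a lexical item, N(0, 1/dim) per component."""
    rng = rng or np.random.default_rng()
    return rng.normal(0.0, 1.0 / np.sqrt(dim), size=dim)

rng = np.random.default_rng(0)
book, shirt = random_item_vector(rng=rng), random_item_vector(rng=rng)
# Unrelated random vectors have cosine similarity near zero.
cos = book @ shirt / (np.linalg.norm(book) * np.linalg.norm(shirt))
print(round(cos, 3))
```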
“…In English, word order conveys much of the meaning of the sentence, and is critical in constructing a grammatical sentence. To account for this, we include in our analysis the Hierarchical Holographic Model (HHM; Kelly et al., 2017), a model sensitive to the order of words in a sentence. HHM generates multiple levels of representations, such that higher levels are sensitive to more abstract relationships between words, such as part-of-speech relationships (Kelly et al., 2017).…”
Section: Experiments Acceptability As Semantic Coherence (mentioning)
confidence: 99%
“…To account for this, we include in our analysis the Hierarchical Holographic Model (HHM; Kelly et al., 2017), a model sensitive to the order of words in a sentence. HHM generates multiple levels of representations, such that higher levels are sensitive to more abstract relationships between words, such as part-of-speech relationships (Kelly et al., 2017). We trained three levels of HHM representations with 1024 dimensions and a context window of 5 words to the left and right of each target word.…”
Section: Experiments Acceptability As Semantic Coherence (mentioning)
confidence: 99%
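For illustration, the sketch below shows only the symmetric context window described in the last statement (up to 5 words on each side of a target word); it does not implement HHM's holographic binding or its multiple levels of representation, and the function and example sentence are assumptions.

```python
def context_windows(tokens, window=5):
    """Yield (target, left_context, right_context) using up to
    `window` words on each side of the target position."""
    for i, target in enumerate(tokens):
        left = tokens[max(0, i - window):i]
        right = tokens[i + 1:i + 1 + window]
        yield target, left, right

sentence = "the quick brown fox jumps over the lazy dog".split()
for target, left, right in context_windows(sentence):
    print(f"{target:>6}: L={left} R={right}")
```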