2018
DOI: 10.1007/s42113-018-0008-2
An Instance Theory of Semantic Memory

Abstract: Distributional semantic models (DSMs) specify learning mechanisms with which humans construct a deep representation of word meaning from statistical regularities in language. Despite their remarkable success at fitting human semantic data, virtually all DSMs may be classified as prototype models in that they try to construct a single representation for a word's meaning aggregated across contexts. This prototype representation conflates multiple meanings and senses of words into a center of tendency, often losi…
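The prototype-versus-instance distinction the abstract draws can be illustrated with a minimal sketch. This is illustrative only, not the paper's implementation: the homonym example, vector dimensions, and the cubed-similarity retrieval rule are assumptions in the style of MINERVA-type instance models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical context vectors for a homonym such as "bank":
# five financial contexts and five river contexts.
financial = rng.normal(loc=1.0, size=(5, 8))
river = rng.normal(loc=-1.0, size=(5, 8))
traces = np.vstack([financial, river])

# Prototype model: one representation per word, aggregated across
# contexts -- the two senses collapse into a single average.
prototype = traces.mean(axis=0)

def echo(probe, traces, tau=3):
    """Instance-style retrieval: weight every stored trace by its
    (cubed, sign-preserving) similarity to the probe and sum them."""
    sims = traces @ probe / (
        np.linalg.norm(traces, axis=1) * np.linalg.norm(probe)
    )
    return (sims ** tau) @ traces

# A financial probe reconstructs a financial reading at retrieval;
# the prototype stays near the conflated centre of tendency.
probe = financial.mean(axis=0)
reading = echo(probe, traces)
```

The key design point is that the instance store defers abstraction to retrieval time, so different probes can recover different senses from the same memory.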


Cited by 55 publications (89 citation statements) · References 69 publications
“…The finding of word preferences in the UD data is consistent with the perspective that language is a complex adaptive system, a general perspective on language processing and the cultural evolution of language (e.g., Kirby et al, 2007; Christiansen and Chater, 2008; Beckner et al, 2009; Tomasello, 2009; see Johns and Jones, 2015; Jamieson et al, 2018 for computational models of semantics and language processing that embody this perspective). Adaptive theories of language propose that the acquisition and use of language is based in the past interactions that people have had with others in their social environment.…”
Section: Discussion (supporting; confidence: 80%)
“…One of the key developments in big data approaches to cognition is the emergence of distributional models of semantics, which learn the meaning of words from statistical patterns contained in very large sources of texts (see Jones et al, 2015 for a review). The original, and best known, model of this class is Latent Semantic Analysis (LSA; Landauer and Dumais, 1997), which spurred the development of many new approaches (e.g., Lund and Burgess, 1996; Griffiths et al, 2007; Jones and Mewhort, 2007; Shaoul and Westbury, 2010; Mikolov et al, 2013; Jamieson et al, 2018). The insight that these models exploit is that lexical semantic behavior seems to be systematically related to the co-occurrence of words within the natural language environment.…”
Section: Introduction (mentioning; confidence: 99%)
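The insight described in the quotation above — that lexical semantics can be recovered from word co-occurrence — can be sketched in a few lines. The toy corpus and sentence-level co-occurrence window are assumptions for illustration; real DSMs such as LSA operate on far larger corpora and apply dimensionality reduction.

```python
import numpy as np
from collections import Counter
from itertools import combinations

# Toy corpus; real distributional models learn from millions of documents.
corpus = [
    "the doctor treated the patient",
    "the nurse treated the patient",
    "the judge heard the case",
    "the lawyer argued the case",
]

# Count sentence-level co-occurrences for each unordered word pair.
cooc = Counter()
for sentence in corpus:
    for a, b in combinations(sorted(set(sentence.split())), 2):
        cooc[(a, b)] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    """A word's representation: its co-occurrence counts over the vocabulary."""
    return np.array(
        [cooc[tuple(sorted((word, other)))] for other in vocab], dtype=float
    )

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# "doctor" and "nurse" never co-occur, yet their shared contexts give
# them similar vectors; "doctor" and "judge" share only "the".
```

For example, `cosine(vector("doctor"), vector("nurse"))` exceeds `cosine(vector("doctor"), vector("judge"))`, showing how second-order co-occurrence structure yields semantic similarity even for words that never appear together.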
“…The results contained in this article point to the continued power of big-data analyses of behavioral data, and in this particular case, large natural-language corpora. Not only can these corpora be used to develop new models of language processing (e.g., Griffiths, Steyvers, & Tenenbaum, 2007; Jamieson, Avery, Johns, & Jones, 2018; Johns & Jones, 2015; Jones & Mewhort, 2007; Landauer & Dumais, 1997), but they can also serve to examine large-scale trends in human behavior (e.g., …). Language is a central organizer of human cognition (Brysbaert et al, 2018; Johns, Jones, & Mewhort, 2012b; Jones et al, 2017), and a large percentage of our everyday experience consists of linguistic stimuli, with a typical human reading millions of words per year (Brysbaert, Stevens, Mandera, & Keuleers, 2016).…”
Section: Discussion (mentioning; confidence: 99%)
“…However, HDM may be more appropriately understood as a model of semantic memory. A good candidate for an episodic memory model is the MINERVA class of memory models (e.g., Hintzman, 1986; Jamieson, Avery, Johns, & Jones, 2018), vector-based models of human memory that store one vector for each memory trace (i.e., ACT-R chunk) and have strong commonalities with ACT-R DM (Dimov, 2016).…”
Section: Episodic Memory (mentioning; confidence: 99%)
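The one-vector-per-trace storage scheme attributed to the MINERVA class above can be sketched as follows. This is a simplified illustration in the spirit of MINERVA 2 (Hintzman, 1986), not the cited models' actual code; the dimensionality, random traces, and cubed-similarity activation are assumptions.

```python
import numpy as np

class InstanceMemory:
    """Minimal MINERVA 2-style memory: every experience is stored as
    its own trace; nothing is aggregated at encoding."""

    def __init__(self, dim):
        self.traces = np.empty((0, dim))

    def store(self, trace):
        # One row per memory trace (cf. one ACT-R chunk per trace).
        self.traces = np.vstack([self.traces, trace])

    def intensity(self, probe, tau=3):
        """Echo intensity: summed (cubed, sign-preserving) similarity
        of the probe to every stored trace; high intensity signals
        familiarity with the probe."""
        sims = self.traces @ probe / (
            np.linalg.norm(self.traces, axis=1) * np.linalg.norm(probe)
        )
        return float(np.sum(sims ** tau))

rng = np.random.default_rng(1)
mem = InstanceMemory(dim=16)
studied = rng.normal(size=(10, 16))
for t in studied:
    mem.store(t)

# A studied item yields higher echo intensity than a novel one,
# because it resonates strongly with its own stored trace.
old = mem.intensity(studied[0])
new = mem.intensity(rng.normal(size=16))
```

Because each trace is kept separately, familiarity and recall both fall out of a single similarity-weighted retrieval operation over the store, which is the commonality with ACT-R declarative memory noted in the quotation.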