2016
DOI: 10.1111/tops.12211
The Latent Structure of Dictionaries

Abstract: How many words—and which ones—are sufficient to define all other words? When dictionaries are analyzed as directed graphs with links from defining words to defined words, they reveal a latent structure. Recursively removing all words that are reachable by definition but that do not define any further words reduces the dictionary to a Kernel of about 10% of its size. This is still not the smallest number of words that can define all the rest. About 75% of the Kernel turns out to be its Core, a “Strongly Connect…
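The pruning step in the abstract is a graph algorithm: repeatedly delete words that are defined by others but no longer define any remaining word (out-degree-0 nodes, with edges running from defining words to defined words). Below is a minimal sketch, assuming the dictionary is given as a mapping from each headword to the set of words used in its definition; the toy word list is invented for illustration and is not from the paper's data.

```python
def kernel(definitions):
    """Recursively remove words that are defined but no longer define
    any remaining word (out-degree-0 nodes in the definition graph)."""
    words = set(definitions)
    while True:
        # Words that still appear in the definition of some remaining word.
        defining = set()
        for w in words:
            defining |= definitions[w] & words
        sinks = words - defining  # reachable by definition, define nothing
        if not sinks:
            return words
        words -= sinks

# Toy dictionary (invented): headword -> set of words in its definition.
toy = {
    "good": {"not", "bad"},
    "bad": {"not", "good"},
    "not": {"bad"},
    "excellent": {"good"},             # defines "terrible", then nothing
    "terrible": {"bad", "excellent"},  # defines nothing: pruned first
}

print(sorted(kernel(toy)))  # → ['bad', 'good', 'not']
```

On a real dictionary this pruning converges to the Kernel (about 10% of the headwords, per the abstract); the Core would then be the largest strongly connected component of the graph restricted to the Kernel.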

Cited by 29 publications (18 citation statements) · References 63 publications
“…We mimic associative learning between word forms used to speak about objects and their referent objects present in the environment as well as between action words and the performance of their semantically-related actions, as it is well-documented in the literature on language learning 37,64. Although other forms of semantic learning (e.g., from texts or by definition) also play a role in meaning acquisition, we focus on the direct semantic grounding of words in object and action knowledge, because it is both prominent in early language learning and a precondition for other forms of semantic learning 65,66. In the sighted model simulations, object- and action-related word acquisition was grounded in sensorimotor information presented to the primary areas of the model: object-related word learning was driven by perisylvian activity in *A1 and *M1i and concordant visual (*V1) activity patterns; similarly, action-related word learning was driven by semantic activity in the lateral motor area (*M1L) along with perisylvian activity (Fig.…”
Section: Word Learning Results
confidence: 99%
“…In addition, word2vec learns the semantic space from large corpora in a data-driven manner 21 . This is different from defining the semantic space based on keywords that are hand selected 22 , frequently used 1 , minimally grounded 41 , or neurobiologically relevant 23,42 . Although those word models are seemingly more intuitive, they are arguably subjective and may not be able to describe the complete semantic space.…”
Section: Discussion
confidence: 99%
“…As a fifth example, Vincent‐Lamarre et al. () analyze information about connections between words and the minimal networks of words sufficient to define all other words, as revealed by large corpora analyses of dictionaries.…”
Section: NODS Experiments and Models
confidence: 99%
“…() and Vincent‐Lamarre et al. () also use NODS to distinguish between alternative computational accounts of cognition.…”
Section: NODS Experiments and Models
confidence: 99%