1996
DOI: 10.3758/bf03204766
Producing high-dimensional semantic spaces from lexical co-occurrence

Abstract: A procedure is presented that processes a corpus of text and produces, for each word, a numeric vector encoding information about its meaning. This procedure is applied to a large corpus of natural-language text taken from Usenet, and the resulting vectors are examined to determine what information they contain. These vectors provide the coordinates in a high-dimensional space in which word relationships can be analyzed. Analyses of both vector similarity and multidimensional scaling demonstrate th…
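The general approach the abstract describes can be sketched as follows: slide a window over the token stream, accumulate distance-weighted co-occurrence counts into a vector per word, and compare words by the similarity of their vectors. This is a minimal illustrative sketch, not the authors' exact procedure; the window size, the symmetric counting, and the toy corpus are assumptions made here for brevity.

```python
import math
from collections import defaultdict

def cooccurrence_vectors(tokens, window=5):
    """Build a sparse co-occurrence vector for each word.

    Neighbors closer to the target word contribute a larger weight
    (a ramped window, in the spirit of HAL-style models).
    """
    vectors = defaultdict(lambda: defaultdict(float))
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), i):
            weight = window - (i - j) + 1  # distance 1 -> weight `window`
            vectors[word][tokens[j]] += weight
            vectors[tokens[j]][word] += weight
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the mouse the dog chased the cat").split()
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["cat"], vecs["dog"]))  # words sharing contexts score higher
print(cosine(vecs["cat"], vecs["rug"]))
```

Each word's row in this sparse matrix is its coordinate in the high-dimensional space; vector similarity (here, cosine) then quantifies word relatedness, which is the kind of analysis the abstract refers to.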

Cited by 1,411 publications (1,263 citation statements) · References 9 publications
“…Similarities based on the internal representation of objects derived from fMRI data can be compared with internal representations derived from empirically obtained judgment data or other models of semantic space, for example those based on feature norming studies [Cree and McRae, 2003; McRae et al., 2005], or lexical co-occurrence models [Andrews et al., 2009; Church and Hanks, 1990; Landauer and Dumais, 1997; Lund and Burgess, 1996] using representational similarity analysis [Kriegeskorte et al., 2008].…”
Section: Conclusion and Discussion
confidence: 99%
“…In cognitive science, remarkable progress has been made in the last 15 years towards computational models that efficiently extract semantic representations by observing statistical regularities of word co-occurrence in text corpora (Landauer & Dumais, 1997; Lund & Burgess, 1996). The models create a high-dimensional vector space in which the semantic relationship between words can be computed (e.g., "democrat" and "good").…”
Section: Estimating the Semantic Space With BEAGLE
confidence: 99%
“…Quantitative text analysis tools are widely used in cognitive science to explore associations among semantic concepts (Jones & Mewhort, 2007; Landauer & Dumais, 1997; Lund & Burgess, 1996). Models of semantic information have enabled researchers to readily determine associations among words in text, based on co-occurrence frequencies.…”
Section: Introduction
confidence: 99%
“…The dimension NSN can be defined according to the language-based model: words that co-occur in large corpora of text in similar contexts cluster together and are thus considered semantic neighbors (Burgess & Lund, 2000; Landauer & Dumais, 1997; Lund & Burgess, 1996).…”
Section: Multidimensionality of Semantic Richness
confidence: 99%