2011
DOI: 10.1007/978-3-642-21735-7_38
OrBEAGLE: Integrating Orthography into a Holographic Model of the Lexicon

Abstract: Many measures of human verbal behavior deal primarily with semantics (e.g., associative priming, semantic priming). Other measures are tied more closely to orthography (e.g., lexical decision time, visual word-form priming). Semantics and orthography are thus often studied and modeled separately. However, given that concepts must be built upon a foundation of percepts, it seems desirable that models of the human lexicon should mirror this structure. Using a holographic, distributed representation of …
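The abstract's "holographic, distributed representation" refers to the BEAGLE family of models (Jones and Mewhort, 2007), which bind high-dimensional random vectors with circular convolution. The sketch below illustrates only that binding operation; the dimensionality, vectors, and helper names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bind(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Circular convolution, the binding operator in holographic
    reduced representations (computed efficiently with FFTs)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def env_vector(dim: int, rng: np.random.Generator) -> np.ndarray:
    """Random 'environmental' vector with elements ~ N(0, 1/dim)."""
    return rng.normal(0.0, 1.0 / np.sqrt(dim), dim)

dim = 1024
rng = np.random.default_rng(0)
a, b = env_vector(dim, rng), env_vector(dim, rng)
bound = bind(a, b)

# Key property: the bound vector is nearly orthogonal to both inputs,
# so composite traces do not spuriously resemble their parts.
def cos(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

print(round(cos(bound, a), 3), round(cos(bound, b), 3))  # both near 0
```

In BEAGLE-style models, sums of such bound vectors accumulate a word's context and order information in a single memory vector.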

Cited by 8 publications (5 citation statements)
References 15 publications
“…Recent work in this area using the WINDSORS global co-occurrence definition of SND ( Macdonald, 2013 ; Danguecan and Buchanan, 2014, unpublished) found support for the idea that words with many near neighbors are processed more slowly than words with few near neighbors in both lexical decision and semantic categorization tasks. Although the present study uses the WINDSORS model to study semantic neighborhood effects ( Durda and Buchanan, 2008 ), other distributional models such as Hyperspace Analog to Language (HAL; Lund and Burgess, 1996 ), Correlated Occurrence Analog to Lexical Semantics (COALS; Rohde et al, 2004 ), Latent Semantic Analysis (LSA; Landauer and Dumais, 1997 ), Bound Encoding of the AGgregate Language Environment (BEAGLE; Jones and Mewhort, 2007 ), OrBEAGLE ( Kachergis et al, 2011 ), Random Permutation Model ( Sahlgren et al, 2008 ), the Topic model ( Griffiths et al, 2007 ), and HiDEx ( Shaoul and Westbury, 2010 ) have also contributed extensively to our knowledge of semantic phenomena.…”
Section: Introduction (mentioning)
confidence: 99%
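The neighborhood effect described in this excerpt (many near neighbors slow processing) presupposes a way of counting a word's near neighbors in co-occurrence space. A minimal sketch of that computation, assuming a generic word-by-context matrix and an arbitrary cosine threshold (neither is a WINDSORS parameter):

```python
import numpy as np

def near_neighbor_count(vectors: np.ndarray, word_idx: int,
                        threshold: float = 0.4) -> int:
    """Count words whose cosine similarity to the target exceeds a
    threshold; the 0.4 cutoff is illustrative, not a published value."""
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed[word_idx]
    sims[word_idx] = -1.0  # exclude the target word itself
    return int(np.sum(sims > threshold))

# Toy example: 5 words in an 8-dimensional co-occurrence space.
rng = np.random.default_rng(1)
vectors = rng.random((5, 8))
print(near_neighbor_count(vectors, word_idx=0))
```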
“…For example, while it is obvious to the human reader that the terms “depressive disorder” and “major depressive disorder” are related to one another, the vector representations of the UMLS concepts corresponding to these terms will be similar only to the extent that they occur in similar contexts in SemMedDB. Methods exist to encode such orthographic information into distributed representations [77], including our own [78]. However, the utility of this additional information for semantic and predictive modeling remains to be determined.…”
Section: Results (mentioning)
confidence: 99%
“…Previous attempts to model orthographic similarity in vector space depended upon using a binding operator to generate near-orthogonal vector representations of sequences of characters within a word, including gapped sequences to allow for flexibility (Cox et al, 2011; Kachergis et al, 2011; Hannagan et al, 2011). However, encoding in this way requires the generation of a large number of vector products.…”
Section: Applications of VSAs and PSI (mentioning)
confidence: 99%
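To make the cited encoding scheme concrete, the sketch below sums bound vectors for every letter pair in a word, contiguous or gapped ("open" bigrams), so that words sharing letter sequences receive similar form vectors. It assumes circular convolution as the binding operator and open bigrams as the gapped sequences; both choices and all parameters are illustrative rather than the published OrBEAGLE configuration. Plain circular convolution is also commutative, so this toy encoding ignores pair order, which the published order-sensitive variants do not.

```python
from itertools import combinations
import numpy as np

DIM = 1024
rng = np.random.default_rng(2)
# One fixed random vector per letter (an illustrative alphabet).
letter_vecs = {c: rng.normal(0.0, 1.0 / np.sqrt(DIM), DIM)
               for c in "abcdefghijklmnopqrstuvwxyz"}

def bind(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Circular convolution; the result is nearly orthogonal to
    both inputs. (Commutative, so pair order is lost here.)"""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def word_form(word: str) -> np.ndarray:
    """Sum bound vectors for all letter pairs, contiguous or gapped:
    'word' contributes wo, wr, wd, or, od, rd."""
    v = np.zeros(DIM)
    for i, j in combinations(range(len(word)), 2):
        v += bind(letter_vecs[word[i]], letter_vecs[word[j]])
    return v / np.linalg.norm(v)

# Orthographic overlap yields similar form vectors:
print(round(float(word_form("word") @ word_form("wordy")), 2))  # high
print(round(float(word_form("word") @ word_form("zebra")), 2))  # near 0
```

The excerpt's caveat is visible here as well: a word of length n requires one convolution per letter pair, i.e. O(n^2) vector products, which is the cost the authors note.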