2013
DOI: 10.15398/jlm.v1i1.60
Text: now in 2D! A framework for lexical expansion with contextual similarity

Keywords: distributional semantics, lexical expansion, contextual similarity, lexical substitution, computational semantics

Cited by 83 publications (77 citation statements)
References 46 publications
“…Alternative approaches are e.g., graph-based algorithms (Biemann and Riedl, 2013) or ranking functions from information retrieval (Claveau et al, 2014).…”
Section: Distributional Semantics
confidence: 99%
“…To this end, we use the linked disambiguated distributional KBs from 1 , which are built in three steps: 1) Learning a JoBimText model. Initially, a sense inventory is created from a large text collection using the pipeline of the JoBimText project (Biemann and Riedl, 2013). 2 The resulting structure contains disambiguated protoconcepts (i.e., senses), their similar and related terms, as well as aggregated contextual clues per proto-concept.…”
Section: Resources Used
confidence: 99%
“…1) Learning a JoBimText model: initially, we automatically create a sense inventory from a large text collection using the pipeline of the JoBimText project [2,22] 1 . The resulting structure contains disambiguated proto-concepts (i.e.…”
Section: Building a Hybrid Aligned Resource
confidence: 99%
“…Following [2], we apply a holing operation where each observation in the text is split into a term and its context. The 1000 most significant contexts per term, as determined by the LMI significance measure [8], serve as a representation for the term, and term similarity is defined as the number of common contexts.…”
Section: Learning a JoBimText Model
confidence: 99%
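The holing operation and context-overlap similarity described in the last citation statement can be sketched in a few lines of Python. This is a toy illustration, not the JoBimText implementation: it uses immediate-neighbor contexts rather than dependency features, the function names (`holing`, `build_profiles`, `similarity`) are invented for the sketch, and LMI is assumed to be the usual local mutual information, f(t,c) · log2(f(t,c) · N / (f(t) · f(c))).

```python
import math
from collections import Counter, defaultdict

def holing(sentence):
    """Toy holing operation: split each observation into a term and a
    context, where the context is the two neighboring tokens with a
    hole ('@') in the term's position."""
    tokens = sentence.split()
    pairs = []
    for i, tok in enumerate(tokens):
        left = tokens[i - 1] if i > 0 else "<s>"
        right = tokens[i + 1] if i < len(tokens) - 1 else "</s>"
        pairs.append((tok, f"{left}_@_{right}"))
    return pairs

def build_profiles(sentences, p=1000):
    """Keep the p most significant contexts per term, ranked by LMI:
    lmi(t, c) = f(t, c) * log2(f(t, c) * N / (f(t) * f(c)))."""
    tc = Counter()                       # joint term-context counts
    for s in sentences:
        tc.update(holing(s))
    t_freq, c_freq = Counter(), Counter()
    for (t, c), f in tc.items():
        t_freq[t] += f
        c_freq[c] += f
    n = sum(tc.values())
    scored = defaultdict(list)
    for (t, c), f in tc.items():
        lmi = f * math.log2(f * n / (t_freq[t] * c_freq[c]))
        scored[t].append((lmi, c))
    # Each term is represented by its top-p contexts only.
    return {t: {c for _, c in sorted(cs, reverse=True)[:p]}
            for t, cs in scored.items()}

def similarity(profiles, t1, t2):
    """Term similarity = number of shared significant contexts."""
    return len(profiles.get(t1, set()) & profiles.get(t2, set()))
```

On a toy corpus such as `["the cat sat on the mat", "the dog sat on the rug"]`, "cat" and "dog" share the context `the_@_sat` and so receive similarity 1, illustrating how overlap of significant contexts stands in for distributional similarity.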