2016
DOI: 10.1162/tacl_a_00080
Learning to Understand Phrases by Embedding the Dictionary

Abstract: Distributional models that learn rich semantic word representations are a success story of recent NLP research. However, developing models that learn useful representations of phrases and sentences has proved far harder. We propose using the definitions found in everyday dictionaries as a means of bridging this gap between lexical and phrasal semantics. Neural language embedding models can be effectively trained to map dictionary definitions (phrases) to (lexical) representations of the words defined by those …
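As a rough illustration of the reverse-dictionary idea the abstract describes (a bag-of-words stand-in for the paper's neural encoders, not the authors' actual model; all vectors below are invented toy values), a definition can be encoded into the same space as pretrained word embeddings and candidate words ranked by cosine similarity:

```python
import numpy as np

# Toy "pretrained" word vectors (hypothetical values, for illustration only).
word_vectors = {
    "circumference": np.array([0.9, 0.1, 0.2]),
    "distance":      np.array([0.8, 0.2, 0.1]),
    "around":        np.array([0.7, 0.3, 0.3]),
    "banana":        np.array([0.0, 0.9, 0.8]),
}

def encode_definition(definition, vectors):
    """Encode a definition as the mean of its known word vectors --
    a bag-of-words stand-in for the paper's learned encoders."""
    vecs = [vectors[w] for w in definition.lower().split() if w in vectors]
    return np.mean(vecs, axis=0)

def reverse_lookup(definition, vectors):
    """Rank candidate words by cosine similarity to the encoded
    definition: the reverse-dictionary lookup step."""
    q = encode_definition(definition, vectors)

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    return sorted(vectors, key=lambda w: cos(vectors[w], q), reverse=True)

# Unrelated words ("banana") rank last; semantically related words rank first.
print(reverse_lookup("the distance around something", word_vectors))
```

In the paper itself the encoder is trained (e.g. an RNN or bag-of-words model) so that the encoded definition lands near the defined word's embedding; the mean-of-vectors encoder here is only a sketch of that mapping.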

Cited by 124 publications (193 citation statements)
References 16 publications
“…However, these methods generally ignore the ontology's structure. More recent work has viewed the problem of text-to-entity mapping as a projection of a textual definition to a single point in a KG (Kartsaklis et al, 2018; Hill et al, 2015). However, despite potential advantages, such as being more interpretable and less brittle (the model predicts multiple related entities instead of one), path-based approaches have received relatively little attention.…”
Section: Related Work
confidence: 99%
“…Text-to-entity mapping is the task of associating a text with a concept in a knowledge graph (KG) or an ontology (we use the two terms interchangeably). Recent works (Kartsaklis et al, 2018; Hill et al, 2015) use neural networks to project a text to a vector space where the entities of a KG are represented as continuous vectors. Despite being successful, these models have two main disadvantages.…”
Section: Introduction
confidence: 99%
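The text-to-entity mapping described in the snippet above can be sketched as a nearest-neighbor search over KG entity embeddings (entity names and vector values below are invented; the text encoder that produces `definition_vec` is assumed given). Returning the top-k entities, rather than a single argmax, mirrors the snippet's point that predicting several related entities is less brittle than projecting to one point:

```python
import numpy as np

# Hypothetical entity embeddings for a tiny KG (invented values).
entities = {
    "disease":      np.array([0.9, 0.1]),
    "lung_disease": np.array([0.8, 0.3]),
    "infection":    np.array([0.2, 0.9]),
}

def nearest_entities(text_vec, entities, k=2):
    """Rank KG entities by cosine similarity to a projected text
    vector and return the k closest."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    ranked = sorted(entities, key=lambda e: cos(entities[e], text_vec),
                    reverse=True)
    return ranked[:k]

# A definition vector produced by some text encoder (assumed here).
definition_vec = np.array([0.85, 0.2])
print(nearest_entities(definition_vec, entities))
```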
“…the size of something as given by the distance around it → circumference. More details about the dataset are given in Section 4.1. Hill et al. (2016) presented a neural network approach for this task and also set it in the wider context of sequence embeddings. Each instance consists of a description, i.e.…”
Section: Reverse Dictionaries
confidence: 99%
“…However, our deciphering task is not the same as machine translation in that hate symbols are short and cannot be modeled as language. Our task is more closely related to Hill et al. (2016) and Noraset et al. (2017). Hill et al. (2016) propose using neural language embedding models to map dictionary definitions to word representations, which is the inverse of our task.…”
Section: Machine Translation
confidence: 99%
“…Our task is more closely related to Hill et al. (2016) and Noraset et al. (2017). Hill et al. (2016) propose using neural language embedding models to map dictionary definitions to word representations, which is the inverse of our task. Noraset et al. (2017) propose the definition modeling task.…”
Section: Machine Translation
confidence: 99%