Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d15-1174

Representing Text for Joint Embedding of Text and Knowledge Bases

Abstract: Models that learn to represent textual and knowledge base relations in the same continuous latent space are able to perform joint inferences among the two kinds of relations and obtain high accuracy on knowledge base completion (Riedel et al., 2013). In this paper we propose a model that captures the compositional structure of textual relations, and jointly optimizes entity, knowledge base, and textual relation representations. The proposed model significantly improves performance over a model that does not share parameters among textual relations with common sub-structure.
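As a rough illustration of the idea in the abstract, the sketch below scores knowledge base triples and textual triples with one shared bilinear function, composing the textual relation's vector from its word sequence with a small convolution. The dimensions, vocabulary sizes, and the specific composition are illustrative assumptions, not the paper's exact architecture.

import numpy as np

rng = np.random.default_rng(0)
DIM = 50

# Illustrative embedding tables (sizes are hypothetical).
entity_emb = rng.normal(scale=0.1, size=(1000, DIM))  # entities
kb_rel_emb = rng.normal(scale=0.1, size=(40, DIM))    # KB relations
word_emb = rng.normal(scale=0.1, size=(5000, DIM))    # words of textual relations
conv_w = rng.normal(scale=0.1, size=(3 * DIM, DIM))   # width-3 convolution filter

def compose_textual_relation(word_ids):
    # Build one relation vector from the words of a textual pattern
    # (e.g. a dependency path): width-3 convolution, ReLU, max-pooling.
    seq = word_emb[word_ids]
    padded = np.vstack([np.zeros((1, DIM)), seq, np.zeros((1, DIM))])
    windows = np.stack([padded[i:i + 3].ravel() for i in range(len(seq))])
    return np.maximum(0.0, windows @ conv_w).max(axis=0)

def score(subj, rel_vec, obj):
    # Bilinear (diagonal) score shared by KB and textual relations, so
    # both kinds of evidence constrain the same entity vectors.
    return float(entity_emb[subj] * rel_vec @ entity_emb[obj])

print(score(0, kb_rel_emb[7], 1))                         # KB triple
print(score(0, compose_textual_relation([3, 14, 2]), 1))  # textual triple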

Cited by 559 publications (366 citation statements) · References 19 publications
“…While the lack of contextual support seems more fundamental, it could be addressed by either using syntax-based embeddings (Levy and Goldberg, 2014a) that can better pick up the specific context patterns characteristic for these relations, or by optimizing the input word embeddings for the task. This becomes a similar problem to joint training of representations from knowledge base structure and textual evidence (Perozzi et al., 2014; Toutanova et al., 2015).…”
Section: Results
confidence: 99%
“…There is recent work applying deep reinforcement learning to healthcare (Li, 2017). Our approach is inspired by recent embedding learning work that jointly represents texts and knowledge bases (Toutanova et al., 2015, 2016), previous work on embedding transfer learning (Bordes et al., 2013), and noise-contrastive estimation. Lastly, our work models insight extraction as a similarity measurement problem, and is inspired by similarity measurement work on pairwise word interaction modeling with deep neural networks.…”
Section: Related Work
confidence: 99%
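Since the snippet names noise-contrastive estimation without spelling it out, here is a minimal sketch of its binary-classification form for a scoring model; the argument names and the per-example formulation are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nce_loss(pos_score, neg_scores, log_noise_pos, log_noise_negs):
    # Binary NCE: classify one observed sample against k noise samples.
    # *_score are model log-scores; log_noise_* are log p_noise of the
    # same items. Each logit is s(x) - log(k * p_noise(x)).
    k = len(neg_scores)
    loss = -np.log(sigmoid(pos_score - np.log(k) - log_noise_pos))
    for s, ln in zip(neg_scores, log_noise_negs):
        loss -= np.log(sigmoid(-(s - np.log(k) - ln)))
    return loss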
“…propose the Neural Tensor Network (NTN), and Yang et al. (2015) the Bilinear model using this technique. Other approaches modify the objective function or change the structure of the model in order to integrate distributional and relational information (Fried and Duh, 2015; Toutanova and Chen, 2015). …retrofit word vectors after they are trained according to distributional criteria.…”
Section: Related Work
confidence: 99%
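For context on the two models this snippet names, here is a minimal sketch of their scoring functions; the dimensions, slice count, and random initialization are illustrative, not values from the cited papers.

import numpy as np

rng = np.random.default_rng(0)
d = 20
e_s, e_o = rng.normal(size=d), rng.normal(size=d)

# Bilinear score: e_s^T W_r e_o (Yang et al., 2015 also study a
# diagonal W_r, the DistMult variant).
W_r = rng.normal(size=(d, d))
bilinear = e_s @ W_r @ e_o

# Neural Tensor Network: k bilinear "slices" plus a standard linear
# layer, combined through a tanh nonlinearity.
k = 4
T = rng.normal(size=(k, d, d))   # relation-specific tensor
V = rng.normal(size=(k, 2 * d))  # linear term
u = rng.normal(size=k)           # output weights
hidden = np.tanh(np.einsum('i,kij,j->k', e_s, T, e_o)
                 + V @ np.concatenate([e_s, e_o]))
ntn = u @ hidden
print(bilinear, ntn)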
“…Since these methods did not use any co-occurrence information from a text corpus, all entities were required to appear at least once in the training data, ruling out generalization to unseen entities [1]. More recently, Xu et al. (2014) combined the training objective of SKIP-GRAM with the training objective of Bordes et al. (2013) to incorporate lexical knowledge into word embeddings.…”
[1] There exists work on relation extraction and knowledge-base completion that combines structured relation triplets and logical rules with unstructured text using various forms of latent variable models (Riedel et al., 2013; Chang et al., 2014; Toutanova et al., 2015; Rocktäschel et al., 2015).
Section: Introduction
confidence: 99%
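A hedged sketch of the kind of combined objective this snippet describes: a negative-sampling skip-gram term over corpus co-occurrences plus a TransE-style margin term over KB triples (Bordes et al., 2013), with the two terms sharing vectors. The weighting and sampling details are assumptions for illustration, not the exact formulation of Xu et al. (2014).

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def skipgram_term(w, c, negs):
    # Negative-sampling skip-gram loss for one (word, context) pair.
    loss = -np.log(sigmoid(w @ c))
    for n in negs:
        loss -= np.log(sigmoid(-(w @ n)))
    return loss

def transe_term(h, r, t, h_neg, t_neg, margin=1.0):
    # Margin loss pushing h + r toward t, away from a corrupted triple.
    pos = np.linalg.norm(h + r - t, ord=1)
    neg = np.linalg.norm(h_neg + r - t_neg, ord=1)
    return max(0.0, margin + pos - neg)

def joint_loss(text_examples, kb_examples, lam=1.0):
    # Entities/words shared across both terms tie corpus evidence and
    # KB structure into one embedding space.
    return (sum(skipgram_term(*ex) for ex in text_examples)
            + lam * sum(transe_term(*ex) for ex in kb_examples))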