2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP)
DOI: 10.1109/globalsip.2014.7032187

Deep learning of knowledge graph embeddings for semantic parsing of Twitter dialogs

Abstract: This paper presents a novel method to learn neural knowledge graph embeddings. The embeddings are used to compute semantic relatedness in a coherence-based semantic parser. The approach learns embeddings directly from structured knowledge representations. A deep neural network architecture known as Deep Structured Semantic Modeling (DSSM) scales the method to learn neural embeddings for all of the concepts (pages) of Wikipedia. Experiments on Twitter dialogs show a 23.6% reduction in semantic parsing errors…
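
The paper itself does not publish code; as a rough illustration of the DSSM idea named in the abstract, the sketch below (PyTorch, with hypothetical layer sizes, names, and feature dimensions) maps sparse knowledge-graph feature vectors through a shared feedforward tower and trains the cosine similarity of related concept pairs against sampled negatives. It is a sketch under stated assumptions, not the paper's implementation.

```python
# Minimal DSSM-style embedding sketch (hypothetical sizes and names, not the
# paper's implementation). One tower embeds both sides of a concept pair;
# training pushes the cosine similarity of a (concept, related-concept) pair
# above that of randomly sampled negative concepts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSSMTower(nn.Module):
    def __init__(self, input_dim=30000, hidden_dim=300, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, embed_dim), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def dssm_loss(tower, query, positive, negatives, gamma=10.0):
    # query: (B, D) features of the source concept
    # positive: (B, D) features of a related concept
    # negatives: (B, K, D) features of K sampled unrelated concepts
    q = tower(query)                                             # (B, E)
    docs = torch.cat([positive.unsqueeze(1), negatives], dim=1)  # (B, 1+K, D)
    d = tower(docs.flatten(0, 1)).view(docs.size(0), docs.size(1), -1)
    sims = F.cosine_similarity(q.unsqueeze(1), d, dim=-1)        # (B, 1+K)
    # Smoothed softmax over candidates; the positive sits at index 0.
    target = torch.zeros(q.size(0), dtype=torch.long)
    return F.cross_entropy(gamma * sims, target)
```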

Cited by 27 publications (14 citation statements)
References 20 publications

“…Despite many years of research, the slot filling task in SLU is still a challenging problem, and this has motivated the recent application of a number of very successful continuous-space, neural net, and deep learning approaches, e.g. [13], [15], [24], [30], [56], [64].…”
mentioning
confidence: 99%
“…Mapping graph representations into continuous space has been proposed in previous work [134,52]. For this purpose, an autoencoder is often used to learn a compact feature representation for a given input.…”
Section: Graph Embedding Using Autoencoders
mentioning
confidence: 99%
“…In [52], the authors use neural embeddings of a semantic knowledge-graph for the purpose of semantic parsing. In [139], the authors propose a 'generalized autoencoder' which extends the traditional autoencoder by incorporating manifold information into the reconstruction term of the autoencoder training.…”
Section: Previous Work
mentioning
confidence: 99%
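
As a companion to the two statements above, the following minimal autoencoder sketch shows how a compact code can be learned for a graph-derived input vector by minimizing reconstruction error. All dimensions and the training step are hypothetical, not taken from the cited works.

```python
# Minimal autoencoder sketch for learning a compact representation of a
# graph-derived feature vector (hypothetical dimensions; illustrative only).
import torch
import torch.nn as nn

class GraphAutoencoder(nn.Module):
    def __init__(self, input_dim=10000, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, code_dim), nn.Tanh())
        self.decoder = nn.Linear(code_dim, input_dim)

    def forward(self, x):
        code = self.encoder(x)            # compact feature representation
        return self.decoder(code), code

model = GraphAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 10000)                 # stand-in for graph feature vectors
opt.zero_grad()
recon, _ = model(x)
loss = nn.functional.mse_loss(recon, x)   # reconstruction objective
loss.backward()
opt.step()
```
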
“…The input is a feature vector which contains linked entities, relations, entity types, and a bag-of-words representation of the entity's description. Entity embeddings can also be extracted from this model once it is trained [51].…”
Section: Entity Recognition and Disambiguation
mentioning
confidence: 99%
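
The statement above lists the ingredients of the model's input; below is a hypothetical sketch of assembling such a concatenated feature vector (multi-hot entity, relation, and type indicators plus a bag-of-words description vector). The toy vocabularies and helper function are illustrative, not taken from the cited model.

```python
# Hypothetical sketch of the concatenated input described above: multi-hot
# indicators for linked entities, relations, and entity types, plus a
# bag-of-words vector over the entity's description. Vocabularies and sizes
# are toy examples for illustration.
import numpy as np

def multi_hot(items, vocab):
    v = np.zeros(len(vocab), dtype=np.float32)
    for it in items:
        if it in vocab:
            v[vocab[it]] = 1.0
    return v

entity_vocab = {"Q42": 0, "Q5": 1}
relation_vocab = {"occupation": 0, "author_of": 1}
type_vocab = {"human": 0, "writer": 1}
word_vocab = {"english": 0, "writer": 1, "humorist": 2}

features = np.concatenate([
    multi_hot(["Q42"], entity_vocab),                        # linked entities
    multi_hot(["occupation", "author_of"], relation_vocab),  # relations
    multi_hot(["human", "writer"], type_vocab),              # entity types
    multi_hot("english writer humorist".split(), word_vocab) # BoW description
])
```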