2019
DOI: 10.1007/s41019-019-00109-w

Leveraging Domain Context for Question Answering Over Knowledge Graph

Abstract: With the growing availability of different knowledge graphs in a variety of domains, question answering over knowledge graph (KG-QA) becomes a prevalent information retrieval approach. Current KG-QA methods usually resort to semantic parsing, search or neural matching models. However, they cannot well tackle increasingly long input questions and complex information needs. In this work, we propose a new KG-QA approach, leveraging the rich domain context in the knowledge graph. We incorporate the new approach wi…

Cited by 32 publications (7 citation statements)
References 20 publications
“…Tai et al. (2015) introduced Tree-LSTM for computing tree embeddings bottom-up. It has since been applied to many tasks, including computer program translation, semantic tree structure learning (such as JSON or XML) (Woof and Chen, 2020) and supervised KG-QA tasks (Tong et al., 2019; Zafar et al., 2019; Athreya et al., 2020). In the latter context, Tree-LSTM is used to model the syntactic structure of the question.…”
Section: Related Work
confidence: 99%
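The statement above describes Tree-LSTM computing tree embeddings bottom-up: each node's state is built from its children's states, with one forget gate per child. The following is a minimal NumPy sketch of the Child-Sum variant of Tai et al. (2015); the class name, the dictionary encoding of the tree, and the random initialization are illustrative choices, not taken from any of the cited implementations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ChildSumTreeLSTM:
    """Sketch of a Child-Sum Tree-LSTM cell (Tai et al., 2015).

    Node states are computed bottom-up: a node sees the sum of its
    children's hidden states, plus one forget gate per child.
    """

    def __init__(self, x_dim, h_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One (W, U, b) triple per gate: input, forget, output, update.
        self.params = {g: (rng.normal(0, 0.1, (h_dim, x_dim)),
                           rng.normal(0, 0.1, (h_dim, h_dim)),
                           np.zeros(h_dim))
                       for g in "ifou"}
        self.h_dim = h_dim

    def _gate(self, g, x, h):
        W, U, b = self.params[g]
        return W @ x + U @ h + b

    def node(self, x, children):
        """x: input vector; children: list of (h, c) pairs. Returns (h, c)."""
        h_sum = sum((h for h, _ in children), np.zeros(self.h_dim))
        i = sigmoid(self._gate("i", x, h_sum))
        o = sigmoid(self._gate("o", x, h_sum))
        u = np.tanh(self._gate("u", x, h_sum))
        c = i * u
        # Each child gets its own forget gate, conditioned on that child's h.
        for h_k, c_k in children:
            f_k = sigmoid(self._gate("f", x, h_k))
            c = c + f_k * c_k
        return o * np.tanh(c), c

    def encode(self, tree, feats, root):
        """tree: {node: [children]}, feats: {node: x}. Recurse bottom-up."""
        def rec(n):
            return self.node(feats[n], [rec(ch) for ch in tree.get(n, [])])
        return rec(root)

# Usage: encode a toy dependency tree rooted at "likes".
cell = ChildSumTreeLSTM(x_dim=4, h_dim=8)
tree = {"likes": ["cat", "fish"], "cat": [], "fish": []}
feats = {n: np.ones(4) * 0.5 for n in tree}
h_root, c_root = cell.encode(tree, feats, "likes")
```

In the KG-QA setting described above, `feats` would hold word embeddings of question tokens and `tree` their dependency parse, so the root embedding summarizes the question's syntactic structure.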
“…Second, most semantic parsing models do not leverage much of the underlying KG structure to predict the LF, as in Dong and Lapata (2016); Guo et al. (2018). Yet this contextual graph information is rich (Tong et al., 2019), and graph-based models leveraging it yield promising results for KG-QA tasks (Vakulenko et al., 2019; Christmann et al., 2019). However, these alternative approaches to semantic parsing, which rely on node classification, have inherent limitations: they handle certain queries less naturally (see Appendix C.2) and their output is less interpretable.…”
Section: Introduction
confidence: 99%
“…Relation classification (RC) is one of the most important techniques in natural language processing (NLP) and has various applications such as information retrieval (Ercan et al., 2019), question answering (Tong et al., 2019) and dialogue systems (Ma et al., 2019). Currently, conventional deep supervised (Zeng et al., 2014; Gormley et al., 2015) and distantly supervised (Mintz et al., 2009; Jiang et al., 2016; Ye and Ling, 2019a) RC models are widely used and achieve remarkable performance.…”
Section: Related Work
confidence: 99%
“…(4) Content-introducing methods (Mou et al., 2016; Yao et al., 2017). (5) Knowledge-based methods (Zhou et al., 2018; Tong et al., 2019) … account to facilitate conversation understanding. In the decoding process, these models always use a single decoder to generate the final response at a stroke in a left-to-right manner.…”
Section: Related Work
confidence: 99%