2019
DOI: 10.1162/tacl_a_00262
Complex Program Induction for Querying Knowledge Bases in the Absence of Gold Programs

Abstract: Recent years have seen increasingly complex question-answering on knowledge bases (KBQA) involving logical, quantitative, and comparative reasoning over KB subgraphs. Neural Program Induction (NPI) is a pragmatic approach toward modularizing the reasoning process by translating a complex natural language query into a multi-step executable program. While NPI has been commonly trained with the "gold" program or its sketch, for realistic KBQA applications such gold programs are expensive to obtain. There, pract…
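As a purely illustrative aside, the sketch below shows what a "multi-step executable program" over a KB could look like for a query that mixes retrieval with counting. The toy KB and operator names (select, intersect, count) are assumptions made for illustration; they are not the operators or programs defined in the paper.

```python
# Hypothetical sketch of a multi-step KBQA program.
# The KB, operators, and query are illustrative assumptions only.

# Toy KB: (subject, relation, object) triples.
KB = {
    ("France", "official_language", "French"),
    ("Canada", "official_language", "French"),
    ("Canada", "official_language", "English"),
    ("Belgium", "official_language", "French"),
}

def select(relation, obj):
    """Return all subjects connected to `obj` via `relation`."""
    return {s for (s, r, o) in KB if r == relation and o == obj}

def intersect(a, b):
    """Intersection of two intermediate entity sets."""
    return a & b

def count(entities):
    """Size of an intermediate entity set (quantitative step)."""
    return len(entities)

# Query: "How many countries have both French and English as official languages?"
# expressed as a three-step program:
step1 = select("official_language", "French")    # {France, Canada, Belgium}
step2 = select("official_language", "English")   # {Canada}
answer = count(intersect(step1, step2))          # 1
print(answer)
```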

Cited by 42 publications (32 citation statements)
References 20 publications (22 reference statements)
“…For the label form, which is also known as "KBQA" or "KGQA", existing methods fall into two categories: information retrieval (Miller et al., 2016; Xu et al., 2019; Zhao et al., 2019b; Saxena et al., 2020) and semantic parsing (Berant et al., 2013; Yih et al., 2015; Liang et al., 2017; Guo et al., 2018; Saha et al., 2019). The former retrieves answers from the KG by learning representations of the question and graph, while the latter queries the answer by parsing the question into logical form.…”
Section: Related Work
confidence: 99%
“…Knowledge-guided generation. There is a growing interest in exploiting external knowledge to generate informative responses for applications such as dialog systems (He et al., 2017) and question answering (Das et al., 2017; Saha et al., 2019). Previous approaches inject knowledge through topic phrases, structured knowledge graphs, and unstructured texts (Dinan et al., 2019; Hua et al., 2019).…”
Section: Related Work
confidence: 99%
“…Much of the prior work on text in causal settings has focused on text as a confounder, e.g. Saha et al. (2019) and Roberts et al. (2020). See Keith et al. (2020) for a full overview of the text-as-confounder literature.…”
Section: Causal Inference With Text Data
confidence: 99%