2020
DOI: 10.1016/j.websem.2020.100612
Less is more: Data-efficient complex question answering over knowledge bases

Cited by 23 publications (10 citation statements)
References 24 publications
“…The Q_pre was used to train the LSTM-based programmer, which was further optimized through the Policy Gradient (PG) algorithm (Williams, 1992; Sutton et al., 2000) with another 1% of unannotated questions from the training set. We denoted this model by PG, which is also a model variant proposed in Hua et al. (2020). We trained the meta learner on another 2K training samples (Q_meta in Alg. 1), representing only approx.…”
Section: Methods (mentioning; confidence: 99%)
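The policy-gradient optimization mentioned in the excerpt above is REINFORCE (Williams, 1992). A minimal sketch of the idea follows; the 3-armed bandit environment, reward probabilities, learning rate, and baseline scheme are illustrative assumptions standing in for the cited paper's program-generation setup, not its actual training loop.

```python
import numpy as np

# Minimal REINFORCE sketch (Williams, 1992): a softmax policy over three
# discrete actions is updated with the score-function estimator
#   grad J(theta) = E[ grad log pi(a | theta) * (R - baseline) ].
# The bandit rewards, learning rate, and step count below are illustrative
# assumptions, not values from the cited paper.

rng = np.random.default_rng(0)
TRUE_REWARDS = np.array([0.1, 0.8, 0.3])  # hypothetical Bernoulli reward rates

theta = np.zeros(3)   # policy logits
lr = 0.1              # learning rate
baseline = 0.0        # running-average reward, reduces gradient variance

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)                  # sample an action
    r = float(rng.random() < TRUE_REWARDS[a])   # sparse 0/1 reward
    baseline += 0.05 * (r - baseline)           # update running baseline
    grad_log_pi = -probs                        # grad of log softmax: e_a - probs
    grad_log_pi[a] += 1.0
    theta += lr * (r - baseline) * grad_log_pi  # REINFORCE update

print(np.argmax(theta))  # the policy should come to favour the best arm
```

The same estimator applies when the "actions" are program tokens and the reward is whether the executed program returns the correct answer, which is the weak-supervision setting the excerpt describes.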
“…NSM annotates the questions and then anchors the model to the high-reward programs by assigning them a deterministic probability. The Neural-Symbolic Complex Question Answering (NS-CQA) model (Hua et al., 2020) augments the NPI approach with a memory buffer to alleviate the sparse-reward and data-inefficiency problems that appear in the CQA task. Complex Imperative… Compared with the NPI models, our model can flexibly adapt to the question under processing.…”
Section: Related Work (mentioning; confidence: 99%)
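The memory buffer that the excerpt above attributes to NS-CQA can be illustrated with a small sketch: cache programs that earned a positive reward per question, grant a novelty bonus the first time a successful program is found, and replay cached programs for extra updates. The class name, bonus weight, and replay interface here are assumptions for illustration, not the paper's actual API.

```python
# Sketch of a reward-augmented memory buffer in the spirit of NS-CQA
# (Hua et al., 2020): successful programs are cached per question so a
# sparse reward can be reused, and a novelty bonus favours discovering
# new correct programs over repeating cached ones. The bonus weight and
# interface are illustrative assumptions.

class ProgramMemory:
    def __init__(self, bonus=0.1):
        self.buffer = {}      # question id -> set of successful programs
        self.bonus = bonus

    def shaped_reward(self, qid, program, raw_reward):
        """Return the raw reward plus a bonus for a novel successful program."""
        seen = self.buffer.setdefault(qid, set())
        if raw_reward > 0 and program not in seen:
            seen.add(program)
            return raw_reward + self.bonus   # first success: add novelty bonus
        return raw_reward                    # failure or already-cached program

    def replay(self, qid):
        """Return cached high-reward programs for extra training updates."""
        return sorted(self.buffer.get(qid, ()))

mem = ProgramMemory()
print(mem.shaped_reward("q1", ("select", "count"), 1.0))  # novel success: 1.1
print(mem.shaped_reward("q1", ("select", "count"), 1.0))  # cached repeat: 1.0
print(mem.replay("q1"))
```

Replaying cached programs lets the policy revisit rare successes many times, which is how a buffer of this kind counters the sparse-reward problem the excerpt mentions.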
“…Complex questions contain multiple entities and relationships; therefore, C-KBQA involves multiple knowledge triples [12], including operations such as multi-hop, aggregation, logical operations, and reasoning [13]. At present, the mainstream methods of C-KBQA include semantic parsing, information retrieval, and template matching [14].…”
Section: Related Work (mentioning; confidence: 99%)
“…Traditional approaches to KGQA rely on semantic parsing (SP) to translate natural language into a logical form. Weakly supervised SP is a well-studied topic with increasing interest in applying Reinforcement Learning (RL) (Hua et al., 2020; Agarwal et al., 2019). ER is rarely considered in the scope of surveyed solutions, and when it is, it is treated as an independent component and not included in the weak-supervision scope (Ansari et al., 2019).…”
Section: Related Work (mentioning; confidence: 99%)