2021
DOI: 10.48550/arxiv.2104.08762
Preprint
Case-based Reasoning for Natural Language Queries over Knowledge Bases

Cited by 5 publications (11 citation statements); references 0 publications.
“…Sun et al. [4] used a pipeline of subtasks, including question split and span prediction, for skeleton parsing. Das et al. [33] used the pretrained T5 encoder-decoder model to directly produce a coarse skeleton.…”
Section: Related Work
Mentioning confidence: 99%
“…CBR-KBQA: Models utilizing case-based reasoning methods [Aamodt and Plaza, 1994] first retrieve similar cases, which are then used in synthesising the current answer. For question answering, such an architecture has been proposed by Das et al. [2021]. The retrieved similar queries also contain their logical forms (e.g.…”
Section: Encoder
Mentioning confidence: 99%
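The excerpt above describes the retrieve-then-synthesize pattern: given a new question, first fetch the most similar past questions (whose logical forms are stored alongside them). A minimal sketch of that retrieval step, assuming cases are represented by dense embeddings and similarity is cosine similarity (the embedding model and data here are illustrative, not from the paper):

```python
import numpy as np

def retrieve_cases(query_vec, case_vecs, k=2):
    """Return indices of the k most similar stored cases by cosine similarity.

    query_vec: (d,) embedding of the new question.
    case_vecs: (n, d) embeddings of past questions whose logical forms are kept
               in the case memory.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = case_vecs / np.linalg.norm(case_vecs, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity of each case to the query
    return np.argsort(-sims)[:k]      # indices of the k nearest cases

# Toy case memory: each row is a (hypothetical) question embedding.
cases = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
query = np.array([1.0, 0.05])
print(retrieve_cases(query, cases))   # the two cases nearest the query
```

The retrieved cases' logical forms would then be fed to the synthesis stage along with the new question.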
“…Therefore, we focus on fast online adaptation of Text-to-SQL models without parameter updates, until the next cycle of finetuning is deemed feasible. Recently, case-based reasoning (CBR), which utilizes a memory of past labeled examples as cases, has emerged as a promising paradigm of inference-time adaptation without finetuning (Das et al. 2020, 2021; Pasupat, Zhang, and Guu 2021; Gupta et al. 2021). CBR has been found effective for tasks like knowledge graph completion (KGC) (Das et al. 2020), question answering over knowledge bases (KBQA) (Das et al. 2021), task-oriented semantic parsing (Pasupat, Zhang, and Guu 2021; Gupta et al. 2021), translation (Khandelwal et al. 2021), and text-based games (Atzeni et al. 2022).…”
Section: Introduction
Mentioning confidence: 99%
“…Recently, case-based reasoning (CBR), which utilizes a memory of past labeled examples as cases, has emerged as a promising paradigm of inference-time adaptation without finetuning (Das et al. 2020, 2021; Pasupat, Zhang, and Guu 2021; Gupta et al. 2021). CBR has been found effective for tasks like knowledge graph completion (KGC) (Das et al. 2020), question answering over knowledge bases (KBQA) (Das et al. 2021), task-oriented semantic parsing (Pasupat, Zhang, and Guu 2021; Gupta et al. 2021), translation (Khandelwal et al. 2021), and text-based games (Atzeni et al. 2022). However, many prior CBR approaches designed around Seq2Seq architectures simply concatenate input-output cases with the current input at the encoder (Das et al. 2021; Pasupat, Zhang, and Guu 2021; Gupta et al. 2021).…”
Section: Introduction
Mentioning confidence: 99%
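The last excerpt notes that many Seq2Seq CBR approaches simply concatenate the retrieved input-output cases with the current input at the encoder. A minimal sketch of that concatenation scheme, assuming a textual prompt format (the field labels, separator, and example logical form below are illustrative, not taken from any of the cited systems):

```python
def build_encoder_input(question, retrieved_cases, sep=" | "):
    """Serialize retrieved (question, logical form) cases together with the
    new question into one string for a Seq2Seq encoder.

    retrieved_cases: list of (case_question, case_logical_form) pairs.
    """
    parts = [f"case: {q} => {lf}" for q, lf in retrieved_cases]
    parts.append(f"question: {question}")
    return sep.join(parts)

# Hypothetical retrieved case and new question:
cases = [("who directed Inception", "SELECT ?d WHERE { Inception directedBy ?d }")]
print(build_encoder_input("who directed Dunkirk", cases))
```

The decoder then generates the logical form for the new question, conditioned on both the question and the serialized cases; the "However" in the excerpt flags that this flat concatenation is exactly what later work seeks to improve on.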