2021
DOI: 10.1007/978-3-030-77385-4_21
Context Transformer with Stacked Pointer Networks for Conversational Question Answering over Knowledge Graphs

Cited by 29 publications (18 citation statements) · References 25 publications
“…The short answer section presents early Conversational QA (ConvQA) systems then discusses the transition to more recent Transformer architectures based on pre-trained language models. Section 5.1.5 then examines how ConvQA is performed over structured knowledge graphs including systems that use key-value networks (Saha et al, 2018), generative approaches, and logical query representations (Plepi et al, 2021). Following this, we discuss open retrieval from large text corpora as part of the QA process.…”
Section: Determining Next Utterances
confidence: 99%
“…For instance, Shen et al (2019) presented the Multi-task Semantic Parsing (MaSP) model, performing both entity typing and coreference resolution together with semantic parsing. A subsequent multi-task model is CARTON (Context trAnsformeR sTacked pOinter Networks) (Plepi et al, 2021), which uses an encoder-decoder model for conversational representations. A series of three stacked pointer networks focuses on the logical form needed for execution (types, predicates, and entities).…”
Section: Conversational QA on Knowledge Graphs
confidence: 99%
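The stacked-pointer-network idea described in the excerpt above — scoring candidate types, predicates, and entities from the encoded input context — can be sketched as follows. The additive-attention parametrization, shapes, and randomly initialized weights are illustrative assumptions for the sketch, not CARTON's exact design:

```python
import numpy as np

def pointer_scores(decoder_state, candidate_encodings, seed=0):
    """One pointer head: score each candidate in the input context against
    the current decoder state and normalize into a 'pointing' distribution.
    A stacked variant would run three such heads (types, predicates, entities)."""
    d = decoder_state.shape[-1]
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((d, d)) / np.sqrt(d)  # projects the decoder state
    W2 = rng.standard_normal((d, d)) / np.sqrt(d)  # projects each candidate
    v = rng.standard_normal(d) / np.sqrt(d)        # scoring vector
    # Additive (Bahdanau-style) attention scores, one per candidate.
    scores = np.tanh(candidate_encodings @ W2 + decoder_state @ W1) @ v
    e = np.exp(scores - scores.max())              # stable softmax
    return e / e.sum()                             # probability of pointing at each candidate
```

Because the output is a distribution over items present in the input, a pointer head can only ever select entities, predicates, or types that actually occur in the candidate set, which is the property that makes pointer networks attractive for filling slots in a logical form.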
“…For the logical forms, we employ a grammar that can be used to capture the entire context of the question with the minimum number of actions. We prefer not to reinvent the wheel, and therefore we adopted the grammar from existing state-of-the-art question answering systems (Kacupaj et al, 2021b; Plepi et al, 2021). However, we do not employ all the actions from these works; Table 7 illustrates the complete grammar with all the defined actions that we used for all three answer verbalization datasets.…”
Section: Appendix A
confidence: 99%
“…This work argues that leveraging content from both sources allows the model to converge better and provides new, improved results. Furthermore, we complement our hypothesis with multi-task learning paradigms, since multi-task learning has been quite efficient for different system architectures (Cipolla et al, 2018), including question answering systems (Plepi et al, 2021; Kacupaj et al, 2021b). Our framework can receive two (e.g., question & query) or even one (e.g., question) input.…”
Section: Introduction
confidence: 99%
“…State-of-the-art works on ConvQA make use of single information sources: either curated knowledge bases (KB) [8,10,18,20,22,26,29,41,44], or unstructured text collections [5,13,31,33,34], or Web tables [14,27], but only one of these. Questions q_0 and q_2 can be answered more conveniently using KBs like Wikidata [53], YAGO [46], or DBpedia [1], that store factual world knowledge in compact RDF triples.…”
confidence: 99%
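The "compact RDF triples" mentioned in the last excerpt can be illustrated with a toy subject-predicate-object store. All identifiers below are invented for the example; real Wikidata uses opaque Q/P identifiers rather than readable names:

```python
# Toy RDF-style triple store, sketching how a KB such as Wikidata, YAGO, or
# DBpedia answers a factual question. Identifiers are made up for illustration.
triples = {
    ("Q_AlbertEinstein", "P_placeOfBirth", "Q_Ulm"),
    ("Q_AlbertEinstein", "P_occupation", "Q_Physicist"),
    ("Q_Ulm", "P_country", "Q_Germany"),
}

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is in the store."""
    return sorted(o for s, p, o in triples if s == subject and p == predicate)

# "Where was Albert Einstein born?" maps to a (subject, predicate) lookup,
# and the answer is read off the object position of the matching triple.
```

Answering a factual question over such a KB reduces to resolving the question to the right subject and predicate, then reading the object — which is why the excerpt calls this representation "convenient" compared with extracting the answer from free text.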