2022
DOI: 10.48550/arxiv.2204.04581
Preprint

Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering

Abstract: Existing state-of-the-art methods for open-domain question answering (ODQA) generally use an open-book approach, in which information is retrieved from a large text corpus or knowledge base (KB) and then reasoned over to produce an answer. A recent alternative is to retrieve from a collection of previously generated question-answer pairs. This has several practical advantages, including being more memory- and compute-efficient. Question-answer pairs are also appealing in that they seem to be an intermediate betw…

Cited by 4 publications (6 citation statements)
References 20 publications
“…, (q_k, a_k), concatenating them with the original question q, and then using a Transformer to fuse this information and generate a final combined answer a. The model we used, called QAMAT (for Question-Answer Memory Augmented Transformer), is described in detail elsewhere (Chen et al. 2022).…”
Section: Methods (mentioning)
confidence: 99%
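As a rough illustration of the fusion step this excerpt describes, the sketch below concatenates retrieved QA pairs with the input question and feeds them to a generic seq2seq model. The T5 checkpoint, the prompt format, and the `retrieve_qa_pairs` function are assumptions for illustration, not QAMAT's actual implementation.

```python
# Minimal sketch of fusing retrieved QA pairs with the input question.
# Assumptions: a generic T5 checkpoint stands in for QAMAT's trained model,
# and `retrieve_qa_pairs` is a hypothetical retriever over a QA memory.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def answer(question: str, retrieve_qa_pairs) -> str:
    # Retrieve the top-k (q_k, a_k) pairs for the question.
    pairs = retrieve_qa_pairs(question, k=5)  # hypothetical retriever
    # Concatenate the retrieved pairs with the original question so the
    # Transformer can fuse the evidence when generating the final answer.
    context = " ".join(f"question: {q} answer: {a}" for q, a in pairs)
    inputs = tokenizer(f"{question} {context}", return_tensors="pt",
                       truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_new_tokens=32)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```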
“…Question Generation: Question generation has been successfully applied to various purposes, including augmenting question answering systems (Duan et al., 2017; Lewis et al., 2021), capturing implicit information written about text (Pyatkin et al., 2021), and building soft knowledge bases (Chen et al., 2022). In this work, we apply question generation to the task of discriminating edited sentences.…”
Section: Models (mentioning)
confidence: 99%
“…Question generation (QG) is a new NLP task that consists of generating a question that a provided document answers. There are various successful applications of this approach, including augmenting datasets to train question answering systems (Duan et al., 2017; Lewis et al., 2021), capturing implicit information written about text (Pyatkin et al., 2021), and building soft knowledge bases (Chen et al., 2022). Previous work in QG treated the underlying passages as static (Lewis et al., 2021), while real-life documents are constantly updated (Dhingra et al., 2022).…”
Section: Introduction (mentioning)
confidence: 99%
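For concreteness, the QG task itself can be sketched with an off-the-shelf seq2seq pipeline as below; the checkpoint and the "generate question:" prompt format are illustrative assumptions, not the systems cited in this excerpt.

```python
# Minimal question-generation sketch: given a passage, produce a question
# that the passage answers. The t5-base checkpoint is a stand-in; a real
# system would use a model fine-tuned for question generation.
from transformers import pipeline

qg = pipeline("text2text-generation", model="t5-base")  # stand-in checkpoint

passage = ("QAMAT retrieves question-answer pairs from a memory and fuses "
           "them with the input question to produce an answer.")
result = qg(f"generate question: {passage}", max_new_tokens=32)
print(result[0]["generated_text"])
```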
“…To avoid the excessive computation cost of backpropagation over the massive external memory, we adopt an in-batch memory M_B, dynamically constructed from the input examples in a batch. The small in-batch memory enables MuRAG to continuously update the memory encoder efficiently, similar to TOME (de Jong et al., 2022) and QAMAT (Chen et al., 2022).…”
Section: Pre-training (mentioning)
confidence: 99%
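A minimal sketch of this in-batch memory idea, assuming paired query/entry encoders and a contrastive objective; the encoders, shapes, and loss are illustrative assumptions rather than MuRAG's exact pre-training code.

```python
# Sketch of an in-batch memory M_B: only the current batch's memory entries
# are (re-)encoded, so gradients flow into the memory encoder without
# backpropagating through the full external memory.
import torch
import torch.nn.functional as F

def in_batch_memory_loss(query_encoder, memory_encoder, queries, entries):
    """queries/entries: aligned batch of B inputs; entry i matches query i."""
    q = query_encoder(queries)        # [B, d] query embeddings
    m = memory_encoder(entries)       # [B, d] in-batch memory M_B
    scores = q @ m.t()                # [B, B] similarity of every query
                                      # against every in-batch memory entry
    labels = torch.arange(q.size(0))  # entry i is the positive for query i;
                                      # the other B-1 entries act as negatives
    return F.cross_entropy(scores, labels)
```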