2023
DOI: 10.1162/tacl_a_00530

Improving the Domain Adaptation of Retrieval Augmented Generation (RAG) Models for Open Domain Question Answering

Abstract: Retrieval Augment Generation (RAG) is a recent advancement in Open-Domain Question Answering (ODQA). RAG has only been trained and explored with a Wikipedia-based external knowledge base and is not optimized for use in other specialized domains such as healthcare and news. In this paper, we evaluate the impact of joint training of the retriever and generator components of RAG for the task of domain adaptation in ODQA. We propose RAG-end2end, an extension to RAG that can adapt to a domain-specific knowledge base…

Cited by 24 publications (13 citation statements)
References 38 publications
“…The aim of the retrieval agent is to take a query and respond with a concise answer from the user's encoded external memories, enabling the Query Mode. It uses a method called retrieval augmented generation developed by Lewis et al [42] and used in state of the art question-answering systems [50,65].…”
Section: Retrieval Agent
confidence: 99%
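The citation statement above describes a retrieval agent that encodes a query, scores it against encoded external memories, and returns the best match. A minimal, self-contained sketch of that retrieve step (a toy bag-of-words similarity stands in for the dense neural encoders that RAG actually uses, and the memories are hypothetical examples):

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "encoding"; real RAG uses dense encoders (e.g. DPR).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, memories, k=1):
    # Rank the encoded memories by similarity to the encoded query.
    q = embed(query)
    ranked = sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:k]

memories = [
    "The meeting with Alice is on Tuesday at 3pm.",
    "The wifi password is hunter2.",
    "Groceries: eggs, milk, coffee.",
]
print(retrieve("when is the meeting with alice", memories))
```

In a production system, the count vectors would be replaced by learned embeddings and the linear scan by an approximate nearest-neighbor index, but the query-encode / score / top-k shape is the same.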
“…This was further elaborated upon through research demonstrating RAG's capacity to improve factual accuracy and relevance in responses by leveraging up-to-date external databases [38], [39]. The effectiveness of RAG in domain-specific applications was analyzed, showing marked improvements in areas requiring specialized knowledge, such as scientific research and historical facts [40], [41]. Comparative studies between RAG and traditional LLMs underscored the former's superior performance in tasks demanding detailed, accurate information [42]- [44].…”
Section: RAG
confidence: 99%
“…This retrieved context is subsequently integrated into a generation model, such as a large language model to augment its capabilities in generating responses or completing tasks. Despite the widespread applications of RAG across various tasks including question-answering [35], RAG has not been directly utilized in the task of question retrieval in CQA. Our research aligns with both the first and third categories, leveraging metadata to improve question retrieval in community question answering.…”
Section: Related Work
confidence: 99%
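The last citation statement notes that the retrieved context is "integrated into a generation model" to ground its output. A minimal sketch of that second half of the RAG pipeline, with a stub generator in place of a real language model (all names and the example passage are hypothetical):

```python
def build_prompt(question, passages):
    # Prepend the retrieved passages as numbered grounding context.
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def generate(prompt):
    # Stand-in for a real generator (e.g. the seq2seq model in RAG);
    # here we simply echo the first context passage to keep the sketch runnable.
    first_context_line = prompt.split("\n")[1]
    return first_context_line.split("] ", 1)[1]

passages = ["RAG-end2end jointly trains the retriever and generator."]
prompt = build_prompt("What does RAG-end2end train jointly?", passages)
print(generate(prompt))
```

The point of the pattern is that the generator conditions on retrieved evidence rather than on its parameters alone, which is what lets the knowledge base be swapped or updated without retraining the model.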