2023
DOI: 10.1101/2023.11.10.23298364
Preprint
Development of a Liver Disease-Specific Large Language Model Chat Interface using Retrieval Augmented Generation

Jin Ge,
Steve Sun,
Joseph Owens
et al.

Abstract: Background: Large language models (LLMs) have significant capabilities in clinical information processing tasks. Commercially available LLMs, however, are not optimized for clinical uses and are prone to generating incorrect or hallucinatory information. Retrieval-augmented generation (RAG) is an enterprise architecture that allows embedding of customized data into LLMs. This approach "specializes" the LLMs and is thought to reduce hallucinations. Methods: We developed "LiVersa," a liver disease-specific LLM, by us…

Cited by 4 publications (1 citation statement)
References 55 publications
“…LLMs can produce hallucinations [99] and recommend products/tools that may not exist. Retrieval augmented generation (RAG) methods offer promise in mitigating some of these deficiencies [100]. RAG comprises a set of techniques and methodologies that enable a large language model to query an external data source, potentially improving the accuracy and relevance of responses while reducing the incidence of hallucinations and inappropriate responses [101].…”
Section: Challenges and Limitations (mentioning)
confidence: 99%
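The cited passage describes RAG at a high level: the model queries an external data source and grounds its answer in what it retrieves. As a minimal sketch (not the LiVersa implementation), RAG reduces to two steps: retrieve the documents most relevant to a query, then prepend them as context to the prompt sent to the LLM. The corpus, bag-of-words similarity, and prompt wording below are toy assumptions for illustration; a real system would use a neural embedding model and an actual LLM call.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a term-frequency Counter over whitespace tokens.
    Crude (no punctuation stripping); a real RAG system would use a
    neural embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Augment the query with retrieved context before the LLM call.
    Instructing the model to answer only from this context is what is
    thought to reduce hallucinations."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the context below. If the context is "
        "insufficient, say so.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical mini-corpus standing in for a guideline knowledge base.
guidelines = [
    "HCC surveillance: ultrasound every 6 months for cirrhosis patients.",
    "Hepatitis B vaccination is recommended for all adults aged 19-59.",
    "Statins are generally safe in compensated chronic liver disease.",
]

prompt = build_prompt(
    "How often should cirrhosis patients be screened for HCC?", guidelines
)
print(prompt)
```

The prompt that reaches the LLM now carries the relevant guideline text, so the answer can be grounded in the retrieved source rather than in the model's parametric memory alone.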