2024
DOI: 10.1007/s10462-024-10824-0
A survey of safety and trustworthiness of large language models through the lens of verification and validation

Xiaowei Huang, Wenjie Ruan, Wei Huang, et al.

Abstract: Large language models (LLMs) have sparked a new wave of AI enthusiasm through their ability to engage end-users in human-level conversations, offering detailed and articulate answers across many knowledge domains. In response to their rapid adoption in many industrial applications, this survey concerns their safety and trustworthiness. First, we review known vulnerabilities and limitations of LLMs, categorising them into inherent issues, attacks, and unintended bugs. Then, we consider if and how the Verification and Valid…

Cited by 2 publications
References 172 publications