2024
DOI: 10.31219/osf.io/mc762
|View full text |Cite
Preprint
|
Sign up to set email alerts
|

A Comparative Analysis to Evaluate Bias and Fairness Across Large Language Models with Benchmarks

陳文意,
黃兆明

Abstract: This study performs a comprehensive evaluation of bias and fairness within Large Language Models (LLMs), including ChatGPT-4, Google Gemini, and Llama 2, utilizing the Google BIG-Bench benchmark. Our analysis reveals varied levels of biases across models, with disparities particularly notable in dimensions such as gender, race, and ethnicity. The Google BIG-Bench benchmark proved instrumental in identifying these biases, though its effectiveness is tempered by challenges in capturing the sophisticated manifest… Show more

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...

Citation Types

0
0
0

Publication Types

Select...

Relationship

0
0

Authors

Journals

citations
Cited by 0 publications
references
References 20 publications
0
0
0
Order By: Relevance

No citations

Set email alert for when this publication receives citations?