2023
DOI: 10.5210/fm.v28i11.13346

Should ChatGPT be biased? Challenges and risks of bias in large language models

Emilio Ferrara

Abstract: As generative language models, exemplified by ChatGPT, continue to advance in their capabilities, the spotlight on biases inherent in these models intensifies. This paper delves into the distinctive challenges and risks associated with biases specifically in large-scale language models. We explore the origins of biases, stemming from factors such as training data, model specifications, algorithmic constraints, product design, and policy decisions. Our examination extends to the ethical implications arising from…

Cited by 8 publications
References 0 publications