2023
DOI: 10.31234/osf.io/6fd2y
Preprint

The impact of generative artificial intelligence on socioeconomic inequalities and policy making

Valerio Capraro,
Austin Lentsch,
Daron Acemoglu
et al.

Abstract: Generative artificial intelligence, including chatbots like ChatGPT, has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the probable impacts of generative AI on four critical domains: work, education, health, and information. Our goal is to warn about how generative AI could worsen existing inequalities while illuminating directions for using AI to resolve pervasive social problems. Generative AI …

Cited by 2 publications (3 citation statements)
References 97 publications
“…In this context, another risk is that the results and texts produced by these systems may not always be accurate or up to date, or may contain biases [42, 64, 68]. In particular, generative AI systems such as ChatGPT can produce plausible-sounding responses even for nonexistent or inaccurate information, a phenomenon known as hallucination in AI systems [25, 45, 69, 70]. Athaluri et al. [71] evaluated the frequency of AI hallucination in a scenario created using ChatGPT: of the 178 references in the generated result, 69 did not have a DOI and 28 did not exist.…”
Section: Challenges (mentioning, confidence: 99%)
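The audit Athaluri et al. describe (counting generated references that lack a DOI or whose DOI does not resolve) can be approximated with a short script. The sketch below is a minimal illustration, not the method used in that study: the Crossref REST endpoint is real, but the helper names and sample references are hypothetical, and a DOI registered with a different agency (e.g., DataCite) could wrongly appear unresolvable here.

```python
import requests

CROSSREF_API = "https://api.crossref.org/works/"

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref knows this DOI.

    Crossref returns HTTP 404 for DOIs it has no record of, which
    strongly suggests a fabricated reference -- though DOIs registered
    elsewhere (e.g., DataCite) can also 404 here despite being real.
    """
    resp = requests.get(CROSSREF_API + doi, timeout=timeout)
    return resp.status_code == 200

def audit_references(references: list[dict]) -> tuple[int, int]:
    """Count references with no DOI, and DOIs that do not resolve."""
    missing_doi = 0
    unresolved = 0
    for ref in references:
        doi = ref.get("doi")
        if not doi:
            missing_doi += 1
        elif not doi_resolves(doi):
            unresolved += 1
    return missing_doi, unresolved

# Hypothetical usage: in practice 'references' would be parsed from
# the chatbot's generated bibliography.
references = [
    {"title": "Deep learning", "doi": "10.1038/nature14539"},    # real DOI
    {"title": "A made-up paper", "doi": "10.9999/fake.12345"},   # likely 404
    {"title": "No DOI provided"},                                # missing
]
missing, fake = audit_references(references)
print(f"{missing} reference(s) without a DOI, {fake} unresolvable DOI(s)")
```

Under these assumptions, running the script over a generated bibliography yields the same two counts reported in the quoted study: references carrying no DOI at all, and references whose DOI cannot be found.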
“…On the other hand, this culture will also prevent clinicians in the healthcare domain from becoming overly reliant on AI systems, an over-reliance that could have long-term negative consequences. This approach will strengthen the human-complementary path, which has the potential to lead to greater prosperity in society in the long term, particularly by enhancing the skills of doctors to be more effective in the human-machine balance and to provide higher-quality healthcare [44, 45].…”
Section: Participatory Society-in-the-loop Management (mentioning, confidence: 99%)