2023
DOI: 10.1002/leap.1547
Human‐ and AI‐based authorship: Principles and ethics

Abstract:
• Recommendations for authorship are the dominant guidelines that determine who, and under what circumstances, an individual can be an author of an academic paper.
• Large language models (LLMs) and AI, like ChatGPT, given their ability and versatility, pose a challenge to the human-based authorship model.
• Several journals and publishers have already prohibited the assignment of authorship to AI, LLMs, and even ChatGPT, not recognizing them as valid authors.
• We debate this premise, and asked ChatGPT to opine on thi…

Cited by 15 publications (7 citation statements)
References 44 publications
“…The argument usually given for prohibiting a generative AI tool from being listed as an author is that a requirement of morally responsible publishing is that authors must be accountable for what they write, and generative AI tools lack accountability (Hosseini et al 2023a , b ; International Committee of Medical Journal Editors 2023 ; Liebrenz et al 2023 ; Lund et al 2023 ; Teixeira and Tsigaris 2023 ). The publishing industry seems to have reached a consensus that this is a new norm for publishing, which creates a strong presumption in favor of acceptance.…”
Section: Recommendations (mentioning, confidence: 99%)
“…There are many ways that authors might employ generative AI: to summarize literature, formulate ideas, organize outlines, produce drafts of text, or revise and refine text (Gordijn and Ten Have 2023 ; Lund et al 2023 ; Teixeira and Tsigaris 2023 ). Some possible uses do not seem significantly different from using internet searches, autocorrect tools, and grammar checks: authors might use generative AI to locate and understand scholarly material and draft text more efficiently.…”
Section: Recommendations (mentioning, confidence: 99%)
“…Some others argue that using generative AI does not diminish human responsibility (Dien, 2023) and point to the overlooked contributions of the unnamed/invisible authors who trained these AI algorithms (Dwivedi et al., 2023; Lund et al., 2023). In some instances, generative AI is treated as a ghost contributor, acknowledging a passive contribution in content creation (Rahimi & Talebi Bezmin Abadi, 2023; Teixeira da Silva & Tsigaris, 2023).…”
Section: Can Generative AI Be Credited As a Co-author? (mentioning, confidence: 99%)
“…least in the biomedical sciences, such as those by the International Committee of Medical Journal Editors (ICMJE), especially the aspect of accountability (Nature, 2023; Teixeira da Silva, 2023a; Teixeira da Silva & Tsigaris, 2023). Not only can ChatGPT and LLMs not be authors of academic papers, but their use (or reliance on them) must be explicitly acknowledged in academic papers (Brainard, 2023), although this relies on a 'dangerous' precedent, namely total and implicit trust in authors' honesty, a topic we discuss in more detail later.…”
(mentioning, confidence: 99%)
“…Not only can ChatGPT and LLMs not be authors of academic papers, but their use (or reliance on them) must be explicitly acknowledged in academic papers (Brainard, 2023), although this relies on a 'dangerous' precedent, namely total and implicit trust in authors' honesty, a topic we discuss in more detail later. Several publishers have put policies in place to limit the author-based recognition offered to ChatGPT and LLMs (Dwivedi et al., 2023; Teixeira da Silva & Tsigaris, 2023). It is now widely recognized by the publishing industry, at least those journals that subscribe to guidelines and recommendations by organizations such as the Committee on Publication Ethics (COPE), 1 ICMJE, 2 or the World Association of Medical Editors (WAME), 3 that AI or LLMs cannot be considered authors of academic papers.…”
(mentioning, confidence: 99%)