2023
DOI: 10.1002/asi.24750

ChatGPT and a new academic reality: Artificial Intelligence‐written research papers and the ethics of the large language models in scholarly publishing

Abstract: This article discusses OpenAI's ChatGPT, a generative pre-trained transformer, which uses natural language processing to fulfill text-based user requests (i.e., a "chatbot"). The history and principles behind ChatGPT and similar models are discussed. This technology is then discussed in relation to its potential impact on academia and scholarly research and publishing. ChatGPT is seen as a potential model for the automated preparation of essays and other types of scholarly manuscripts. Potential ethical issues…


Cited by 352 publications (228 citation statements)
References 84 publications
“…The argument usually given for prohibiting a generative AI tool from being listed as an author is that a requirement of morally responsible publishing is that authors must be accountable for what they write, and generative AI tools lack accountability. 1 The publishing industry seems to have reached a consensus that this is a new norm for publishing, which creates a strong presumption in favor of acceptance. While arguments can be made that generative AI possesses some aspects of authorial accountability, such as the capacity to provide an account or explanation of how an article was created, the aspect of accountability that generative AI genuinely lacks is moral responsibility.…”
Section: LLMs or Other Generative AI Tools Should Not Be Listed As Au… (mentioning)
confidence: 99%
“…Nature even refers to ChatGPT as a threat (Nature, 2023). One of the concerns is the ability of AI to produce automated scholarly manuscripts (Lund et al, 2023), which could find their way into legitimate or predatory journals, as submitted by humans, but treating AI as a ghost. Although it could be impacted negatively by AI, there are also significant benefits from the use of AI in education (Benuyenah, 2023).…”
Section: AI‐based Authorship: Does It Have Authorship Rights? (mentioning)
confidence: 99%
“…Similarly, the pReview software package was developed to automatically generate summarization, contribution detection, writing-quality analysis, and potential related works for academic papers to support reviewers (Roberts & Fisher, 2020). Another study found that natural language processing models that generate reviews for scientific papers could make the peer review task easier and more effective, but the reviews were not good enough to replace human experts (Yuan et al., 2021), or perhaps even offer strategies to help authors correct problems identified by reviewers (Lund et al., 2023). Nevertheless, reviewers normally agree to keep the manuscripts they assess confidential; as such, papers under assessment should not be uploaded to LLMs, because they may be saved and incorporated into responses to future questions from other users (Hosseini & Horbach, 2023), or may even grant the LLM owner the right to repurpose the content.…”
Section: Reviewing a Submission (mentioning)
confidence: 99%