Preprint (2023)
DOI: 10.31234/osf.io/b58ex
Transparency Guidance for ChatGPT Usage in Scientific Writing

Abstract: The use of text-generating Large Language Models (LLMs), such as ChatGPT, in scholarly writing presents challenges to transparency and credibility. Journals and institutions need to revise their policies on whether the use of such tools is acceptable throughout the research workflow and to provide guidance on how to safeguard transparency and credibility when LLMs are allowed to assist researchers. The present practical guideline should help those scholars who use LLMs and journals and institutes that al…

Cited by 23 publications (18 citation statements); References 7 publications.
“…First, superficial, inaccurate, or incorrect content was frequently cited as a shortcoming of ChatGPT use in scientific writing [ 14 , 28 , 29 , 40 , 60 ]. The ethical issues including the risk of bias based on training datasets and plagiarism were also frequently mentioned, aside from the lack of transparency regarding content generation, which justifies the description of ChatGPT, on occasions, as a black box technology [ 14 , 25 , 26 , 40 , 44 , 45 , 46 , 47 , 48 , 55 , 60 , 63 , 65 , 72 ]. Importantly, the concept of ChatGPT hallucination could be risky if the generated content is not thoroughly evaluated by researchers and health providers with proper expertise [ 37 , 56 , 73 , 77 , 79 ].…”
Section: Discussion (mentioning; confidence: 99%)
“…Third, the generation of non-original, over-detailed, or excessive content can be an additional burden for researchers who should carefully supervise the ChatGPT-generated content [ 14 , 24 , 25 , 26 , 65 , 71 ]. This can be addressed by supplying ChatGPT with proper prompts (text input), since varying responses might be generated based on the exact approach of prompt construction [ 72 , 82 ].…”
Section: Discussion (mentioning; confidence: 99%)
“…On the other hand, the use of ChatGPT in academic writing and scientific research should be done in light of the following current limitations that could compromise the quality of research: First, superficial, inaccurate or incorrect content was frequently cited as a shortcoming of ChatGPT use [14,47,49,77,79]. The ethical issues including the risk of bias based on training datasets, and plagiarism were frequently mentioned, besides the lack of transparency described on occasions as a black box technology [14,27,[29][30][31][32][33]35,36,39,57,58,77,79]. Importantly, the concept of ChatGPT hallucination was mentioned which can be risky if not evaluated properly by researchers and health providers with proper expertise [59,60,65,73,74].…”
Section: Discussion (mentioning; confidence: 99%)
“…The disapproval of including ChatGPT or any other LLM in the list of authors was clearly explained in Science, Nature, and the Lancet editorials referring to its use as a scientific misconduct, and this view was echoed by many scientists [28,30,38,70,77]. In case of ChatGPT use in the research process, several records advocated the need for proper and concise disclosure and documentation of ChatGPT or LLM use in the methodology or acknowledgement sections [35,39,70]. A noteworthy and comprehensive record by Borji can be used as a categorical guide for the issues and concerns of ChatGPT use especially in scientific writing [24].…”
Section: Discussion (mentioning; confidence: 99%)