2023
DOI: 10.48550/arxiv.2303.11156
Preprint

Can AI-Generated Text be Reliably Detected?

Abstract: The rapid progress of Large Language Models (LLMs) has made them capable of performing astonishingly well on various tasks including document completion and question answering. The unregulated use of these models, however, can potentially lead to malicious consequences such as plagiarism, generating fake news, spamming, etc. Therefore, reliable detection of AI-generated text can be critical to ensure the responsible use of LLMs. Recent works attempt to tackle this problem either using certain model signatures …


Cited by 44 publications (38 citation statements)
References 16 publications
“…Academics may view it in a more negative light and a tool which is perhaps more likely to jeopardise authorial integrity. At the time of writing, new GenAI detection software is being developed and tested to identify AI-generated text (Sadasivan et al., 2023; OpenAI, 2023a, b, c). It may therefore be the case that, in time, such software may nullify the overarching issue of the use of GenAI in academia.…”
Section: Discussion
confidence: 99%
“…The emergence of these results, and similar studies in this area (Gao et al., 2022; Malinka et al., 2022; Fowler, 2023; Biswas, 2023), have raised concerns across HE regarding the potential misuse of ChatGPT to write summative assessments (Dwivedi et al., 2023). Sadasivan et al. (2023) tested 10,000 text samples, testing the efficacy of ten distinct detection modalities. The findings highlighted that, although certain detection methods showcased commendable accuracy, none proved to be entirely infallible.…”
Section: Introduction
confidence: 89%
“…Regardless of personal opinions, AI tools such as ChatGPT have become embedded in daily human communication. However, unlike scenarios where students employ AI to complete assignments, scientific writing has traditionally been viewed as a means of disseminating ideas and new knowledge to humanity, raising concerns within the community about ethical issues more than just plagiarism (Sadasivan et al., 2023).…”
Section: Discussion
confidence: 99%
“…AI PUTS DEMOCRATIC SYSTEMS AT RISK. Modern AI models are increasingly capable of mimicking targeted individuals [1], [2], [3], [4], [5]. Users of these models can influence policymakers by impersonating constituents and can influence populations via compelling disinformation.…”
Section: II
confidence: 99%