The artificial-intelligence (AI) chatbot ChatGPT that has taken the world by storm has made its formal debut in the scientific literature, racking up at least four authorship credits on published papers and preprints.

Journal editors, researchers and publishers are now debating the place of such AI tools in the published literature, and whether it's appropriate to cite the bot as an author. Publishers are racing to create policies for the chatbot, which was released as a free-to-use tool last November by tech company OpenAI in San Francisco, California.

ChatGPT is a large language model (LLM), which generates convincing sentences by mimicking the statistical patterns of language in a huge database of text collated from the Internet. The bot is already disrupting sectors including academia: in particular, it is raising questions about the future of university essays and research production.

Publishers and preprint servers contacted by Nature's news team agree that AIs such as ChatGPT do not fulfil the criteria for a study author, because they cannot take responsibility for the content and integrity of scientific papers. But some publishers say that an AI's contribution to writing papers can be acknowledged in sections other than the author list. (Nature's news team is editorially independent of its journal team and its publisher, Springer Nature.)

In one case, an editor told Nature that ChatGPT had been cited as a co-author in error, and that the journal would correct this.
In December, computational biologists Casey Greene and Milton Pividori embarked on an unusual experiment: they asked an assistant who was not a scientist to help them improve three of their research papers. Their assiduous aide suggested revisions to sections of documents in seconds; each manuscript took about five minutes to review. In one biology manuscript, their helper even spotted a mistake in a reference to an equation. The trial didn't always run smoothly, but the final manuscripts were easier to read, and the fees were modest, at less than US$0.50 per document.

This assistant, as Greene and Pividori reported in a preprint¹ on 23 January, is not a person but an artificial-intelligence (AI) algorithm called GPT-3, first released in 2020. It is one of the much-hyped generative AI chatbot-style tools that can churn out convincingly fluent text, whether asked to produce prose, poetry, computer code or, as in the scientists' case, to edit research papers.

The most famous of these tools, also known as large language models, or LLMs, is ChatGPT, a version of GPT-3 that shot to fame after its release in November last year because it was made free and easily accessible. Other generative AIs can produce images, or sounds.

"I'm really impressed," says Pividori, who
THE PROMISE AND PERIL OF GENERATIVE AI

Researchers are excited but apprehensive about how tools such as ChatGPT could transform science and society. By Chris Stokel-Walker and Richard Van Noorden
With most coronaviruses, recovery confers a degree of immunity to reinfection. But a small number of patients have caught covid-19 for a second time. Chris Stokel-Walker looks at what we know and how worried we should be.