Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1
In November 2022, OpenAI released a new natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.
Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11
The scholarly publishing community has been quick to raise concerns about potential misuse of these language models in scientific publication.1,12-14 Individuals have experimented by asking ChatGPT a series of questions about controversial or important topics (eg, whether childhood vaccination causes autism) as well as specific publishing-related technical and ethical questions.9,10,12 Their results showed that ChatGPT’s text responses to questions, while mostly well written, are formulaic (though this was not always easily discernible), not up to date, often false or fabricated, and without accurate or complete references, and, worse, they cite concocted, nonexistent evidence for the claims or statements they make. OpenAI acknowledges some of the language model’s limitations, including that it can provide “plausible-sounding but incorrect or nonsensical answers,” and notes that the recent release is part of an open iterative deployment intended for human use, interaction, and feedback to improve it.2 That cautionary acknowledgment is a clear signal that the model is not ready to be used as a source of trusted information, and certainly not without transparency and human accountability for its use.
To address concerns about the use of AI and language models in the writing of manuscripts, JAMA and the JAMA Network journals have updated relevant policies in the journals’ Instructions for Authors.15 These journals have provided guidance and defined criteria for authorship credit and accountability for many decades,16-18 following the recommendations of the International Committee of Medical Journal Editors19 as well as guidance for transparent reporting of writing or editing assistance.17 This guidance and these criteria have continued to evolve to address changes in the conduct, complexity, and reporting of research and related concerns about authorship responsibility and accountability.20
In response to this latest technology-driven concern, the following sections of the JAMA Network Instructions for Authors15 have been updated:
Author Responsibilities
Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.
If these models or tools are used to create content or assist with writing or manuscript preparation, authors must take responsibility for the integrity of the content generated by these tools. Authors should report the use of artificial intelligence, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgment section or the Methods section if this is part of formal research design or methods.
This should include a description of the content that was created or edited and the name of the language model or tool, version and extension numbers, and manufacturer. (Note: this does not include basic tools for checking grammar, spelling, references, etc.)
Reproduced and Re-created Material
The submission and publication of content created by artificial intelligence, language models, machine learning, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.
Image Integrity
The submission and publication of images created by artificial intelligence, machine learning tools, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.
The JAMA Network journals have relevant policies for reporting use of statistical analysis software and recommend that authors follow the EQUATOR Network reporting guidelines,15 including those with guidance for trials that include AI interventions (eg, CONSORT-AI and SPIRIT-AI)21,22 and machine learning in modeling studies (eg, MI-CLAIM).23 The EQUATOR Network has several other reporting guidelines in development for prognostic and diagnostic studies that use AI and machine learning, such as STARD-AI and TRIPOD-AI.24 JAMA Network editors will continue to review and evolve editorial and publication policies in response to these developments with the aim of maintaining the highest standards of transparency and scientific integrity.
Transformative, disruptive technologies, like AI language models, create promise and opportunities as well as risks and threats for all involved in the scientific enterprise. Calls for journals to implement screening for AI-generated content will likely escalate,10 especially for journals that have been targets of paper mills25 and other unscrupulous or fraudulent practices. But with large investments in further development,26 AI tools may be capable of evading any such screens. Regardless, AI technologies have existed for some time, will be developed further and faster, and will continue to be used in all stages of research and the dissemination of information, hopefully with innovative advances that offset any perils. In this era of pervasive misinformation and mistrust, responsible use of AI language models and transparent reporting of how these tools are used in the creation of information and publication are vital to promote and protect the credibility and integrity of medical research and trust in medical knowledge.
Corresponding Author: Annette Flanagin, RN, MA (annette.flanagin@jamanetwork.org).
Published Online: January 31, 2023. doi:10.1001/jama.2023.1344
Conflict of Interest Disclosures: None reported.
Additional Contributions: We thank Joseph P. Thornton, JD, for reviewing the manuscript, and Amanda Ehrhardt and Kirby Snell for updating the Instructions for Authors for all JAMA Network journals. They all work for the JAMA Network and did not receive additional compensation for their contributions.
References (partial)
5. Gilson A, Safranek C, Huang T. How does ChatGPT perform on the medical licensing exams? The implications of large language models for medical education and knowledge assessment. medRxiv. Preprint posted December 26, 2022. doi:10.1101/2022.12.23.22283901
10. Zielinski C, Winker M, Aggarwal R, et al; WAME Board. Chatbots, ChatGPT, and scholarly manuscripts: WAME recommendations on ChatGPT and chatbots in relation to scholarly publications. January 20, 2023. Accessed January 28, 2023. https://wame.org/page3.php?id=106
18. Authorship responsibility. In: Christiansen S, Iverson C, Flanagin A, et al. AMA Manual of Style: A Guide for Authors and Editors. 11th ed. Oxford University Press; 2020. Updated February 2022. http://www.amamanualofstyle.com
19. International Committee of Medical Journal Editors. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. Updated May 2022. Accessed January 25, 2023. https://www.icmje.org/recommendations