January 31, 2023

Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge

Author Affiliations
  • 1Ms Flanagin is Executive Managing Editor, Dr Bibbins-Domingo is Editor in Chief, and Dr Berkwits is Electronic Editor, JAMA and the JAMA Network, and Ms Christiansen is Managing Editor, JAMA
JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344

Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1

In November 2022, OpenAI released a new natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, including concerns that the language model could be used to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 Nonetheless, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.

Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11

The scholarly publishing community has quickly raised concerns about potential misuse of these language models in scientific publication.1,12-14 Individuals have experimented by asking ChatGPT a series of questions about controversial or important topics (eg, whether childhood vaccination causes autism) as well as specific publishing-related technical and ethical questions.9,10,12 Their results showed that although ChatGPT’s text responses to questions are mostly well written, they can be formulaic (which was not easily discernible), not up to date, or false or fabricated; they may lack accurate or complete references; and, worse, they can include concocted, nonexistent evidence for the claims or statements they make. OpenAI acknowledges some of the language model’s limitations, including providing “plausible-sounding but incorrect or nonsensical answers,” and notes that the recent release is part of an open iterative deployment intended for human use, interaction, and feedback to improve it.2 That cautionary acknowledgment is a clear signal that the model is not ready to be used as a source of trusted information, and certainly not without transparency and human accountability for its use.

To address concerns about the use of AI and language models in the writing of manuscripts, JAMA and the JAMA Network journals have updated relevant policies in the journals’ Instructions for Authors.15 These journals have provided guidance and defined criteria for authorship credit and accountability for many decades,16-18 following the recommendations of the International Committee of Medical Journal Editors19 as well as guidance for transparent reporting of writing or editing assistance.17 This guidance and these criteria have continued to evolve to address changes in the conduct, complexity, and reporting of research and related concerns about authorship responsibility and accountability.20

In response to this latest technology-driven concern, the following sections of the JAMA Network Instructions for Authors15 have been updated:

Author Responsibilities

Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.

If these models or tools are used to create content or assist with writing or manuscript preparation, authors must take responsibility for the integrity of the content generated by these tools. Authors should report the use of artificial intelligence, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgment section or the Methods section if this is part of formal research design or methods.

This should include a description of the content that was created or edited and the name of the language model or tool, version and extension numbers, and manufacturer. (Note: this does not include basic tools for checking grammar, spelling, references, etc.)

Reproduced and Re-created Material

The submission and publication of content created by artificial intelligence, language models, machine learning, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.

Image Integrity

The submission and publication of images created by artificial intelligence, machine learning tools, or similar technologies is discouraged, unless part of formal research design or methods, and is not permitted without clear description of the content that was created and the name of the model or tool, version and extension numbers, and manufacturer. Authors must take responsibility for the integrity of the content generated by these models and tools.

The JAMA Network journals have relevant policies for reporting use of statistical analysis software and recommend that authors follow the EQUATOR Network reporting guidelines,15 including those with guidance for trials that include AI interventions (eg, CONSORT-AI and SPIRIT-AI)21,22 and machine learning in modeling studies (eg, MI-CLAIM).23 The EQUATOR Network has several other reporting guidelines in development for prognostic and diagnostic studies that use AI and machine learning, such as STARD-AI and TRIPOD-AI.24 JAMA Network editors will continue to review and evolve editorial and publication policies in response to these developments with the aim of maintaining the highest standards of transparency and scientific integrity.

Transformative, disruptive technologies, like AI language models, create promise and opportunities as well as risks and threats for all involved in the scientific enterprise. Calls for journals to implement screening for AI-generated content will likely escalate,10 especially for journals that have been targets of paper mills25 and other unscrupulous or fraudulent practices. But with large investments in further development,26 AI tools may be capable of evading any such screens. Regardless, AI technologies have existed for some time, will continue to be developed further and faster, and will be used in all stages of research and the dissemination of information, hopefully with innovative advances that offset any perils. In this era of pervasive misinformation and mistrust, responsible use of AI language models and transparent reporting of how these tools are used in the creation of information and publication are vital to promote and protect the credibility and integrity of medical research and trust in medical knowledge.

Article Information

Corresponding Author: Annette Flanagin, RN, MA (annette.flanagin@jamanetwork.org).

Published Online: January 31, 2023. doi:10.1001/jama.2023.1344

Conflict of Interest Disclosures: None reported.

Additional Contributions: We thank Joseph P. Thornton, JD, for reviewing the manuscript, and Amanda Ehrhardt and Kirby Snell for updating the Instructions for Authors for all JAMA Network journals. They all work for the JAMA Network and did not receive additional compensation for their contributions.

References

1. De Waard A. Guest post–AI and scholarly publishing: a view from three experts. Scholarly Kitchen blog. January 18, 2023. Accessed January 25, 2023. https://scholarlykitchen.sspnet.org/2023/01/18/guest-post-ai-and-scholarly-publishing-a-view-from-three-experts/
2. ChatGPT: optimizing language models for dialogue. Updated November 30, 2022. Accessed January 25, 2023. https://openai.com/blog/chatgpt/
3. Johnson A. Here’s what to know about OpenAI’s ChatGPT—what it’s disrupting and how to use it. Forbes. December 7, 2022. Accessed January 25, 2023. https://www.forbes.com/sites/ariannajohnson/2022/12/07/heres-what-to-know-about-openais-chatgpt-what-its-disrupting-and-how-to-use-it/?sh=15d23ca42643
4. Mollick E. ChatGPT is a tipping point for AI. Harvard Business Review. December 14, 2022. Accessed January 25, 2023. https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai
5. Gilson A, Safranek C, Huang T. How does ChatGPT perform on the medical licensing exams? the implications of large language models for medical education and knowledge assessment. medRxiv. Preprint posted December 26, 2022. doi:10.1101/2022.12.23.22283901
6. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613(7945):620-621. doi:10.1038/d41586-023-00107-z
7. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023;613(7945):612. doi:10.1038/d41586-023-00191-1
8. Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379(6630):313. doi:10.1126/science.adg7879
9. Hosseini M, Rasmussen LM, Resnik DB. Using AI to write scholarly publications. Account Res. Published online January 25, 2023:1-9. doi:10.1080/08989621.2023.2168535
10. Zielinski C, Winker M, Aggarwal R, et al; WAME Board. Chatbots, ChatGPT, and scholarly manuscripts: WAME recommendations on ChatGPT and chatbots in relation to scholarly publications. January 20, 2023. Accessed January 28, 2023. https://wame.org/page3.php?id=106
11. Fortieth International Conference on Machine Learning. Clarification on large language model policy LLM. Accessed January 26, 2023. https://icml.cc/Conferences/2023/llm-policy
12. Davis P. Did ChatGPT just lie to me? Scholarly Kitchen blog. January 13, 2023. Accessed January 25, 2023. https://scholarlykitchen.sspnet.org/2023/01/13/did-chatgpt-just-lie-to-me/
13. Carpenter TA. Thoughts on AI’s impact on scholarly communications? an interview with ChatGPT. Scholarly Kitchen blog. January 11, 2023. Accessed January 25, 2023. https://scholarlykitchen.sspnet.org/2023/01/11/chatgpt-thoughts-on-ais-impact-on-scholarly-communications/
14. Kendrick CL. Guest post—the efficacy of ChatGPT: is it time for the librarians to go home? Scholarly Kitchen blog. January 26, 2023. Accessed January 26, 2023. https://scholarlykitchen.sspnet.org/2023/01/26/guest-post-the-efficacy-of-chatgpt-is-it-time-for-the-librarians-to-go-home/?informz=1&nbd=411f2c31-57eb-46fb-a55c-93d4b350225a&nbd_source=informz
15. Instructions for Authors. JAMA. Updated January 30, 2023. Accessed January 30, 2023. https://jamanetwork.com/journals/jama/pages/instructions-for-authors
16. Hewitt RM. Exposition as applied to medicine; a glance at the ethics of it. J Am Med Assoc. 1954;156(5):477-479. doi:10.1001/jama.1954.02950050017005
17. Rennie D, Flanagin A. Authorship! authorship! guests, ghosts, grafters, and the two-sided coin. JAMA. 1994;271(6):469-471. doi:10.1001/jama.1994.03510300075043
18. Authorship responsibility. In: Christiansen S, Iverson C, Flanagin A, et al. AMA Manual of Style: A Guide for Authors and Editors. 11th ed. Oxford University Press; 2020. Updated February 2022. http://www.amamanualofstyle.com
19. International Committee of Medical Journal Editors. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. Updated May 2022. Accessed January 25, 2023. https://www.icmje.org/recommendations
20. Fontanarosa P, Bauchner H, Flanagin A. Authorship and team science. JAMA. 2017;318(24):2433-2437. doi:10.1001/jama.2017.19341
21. EQUATOR Network. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Updated January 4, 2023. Accessed January 28, 2023. https://www.equator-network.org/reporting-guidelines/consort-artificial-intelligence/
22. EQUATOR Network. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Updated January 4, 2023. Accessed January 28, 2023. https://www.equator-network.org/reporting-guidelines/spirit-artificial-intelligence/
23. EQUATOR Network. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist. Updated October 2, 2020. Accessed January 28, 2023. https://www.equator-network.org/reporting-guidelines/minimum-information-about-clinical-artificial-intelligence-modeling-the-mi-claim-checklist/
24. EQUATOR Network. Reporting guidelines under development for other study designs. Updated January 19, 2023. Accessed January 28, 2023. https://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#AIMOD
25. Perron BE, Hertz-Perron OT, Victor BG. Revealed: the inner workings of a paper mill. Retraction Watch. December 20, 2021. https://retractionwatch.com/2021/12/20/revealed-the-inner-workings-of-a-paper-mill/
26. Metz C, Weise K. Microsoft to invest $10 billion in OpenAI, the creator of ChatGPT. The New York Times. January 23, 2023. Accessed January 25, 2023. https://www.nytimes.com/2023/01/23/business/microsoft-chatgpt-artificial-intelligence.html?searchResultPosition=3