WAME Recommendations

WAME Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publication (https://wame.org/page3.php?id=110)

WAME Recommendation 1: Chatbots cannot be authors. Journals have begun to publish articles in which chatbots such as Bard, Bing and ChatGPT have been used, with some journals listing chatbots as co-authors. The legal status of an author differs from country to country, but under most jurisdictions an author must be a legal person. Chatbots do not meet the International Committee of Medical Journal Editors (ICMJE) authorship criteria, particularly that of being able to give “final approval of the version to be published” and “to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.” (10) No AI tool can “understand” a conflict-of-interest statement, nor does it have the legal standing to sign one. Chatbots have no affiliation independent of their developers. Since authors submitting a manuscript must ensure that all those named as authors meet the authorship criteria, chatbots cannot be included as authors.

WAME Recommendation 2: Authors should be transparent when chatbots are used and provide information about how they were used. The extent and type of use of chatbots in journal publications should be indicated. This is consistent with the ICMJE recommendation of acknowledging writing assistance (11) and providing in the Methods detailed information about how the study was conducted and the results generated. (12)

WAME Recommendation 2.1: Authors submitting a paper in which a chatbot/AI was used to draft new text should note such use in the acknowledgment; all prompts used to generate new text, or to convert text or text prompts into tables or illustrations, should be specified.

WAME Recommendation 2.2: When an AI tool such as a chatbot is used to carry out or generate analytical work, help report results (e.g., generating tables or figures), or write computer code, this should be stated in the body of the paper, in both the Abstract and the Methods section. In the interests of enabling scientific scrutiny, including replication and identifying falsification, the full prompt used to generate the research results, the time and date of the query, and the AI tool used and its version should be provided.

WAME Recommendation 3: Authors are responsible for material provided by a chatbot in their paper (including the accuracy of what is presented and the absence of plagiarism) and for appropriate attribution of all sources (including original sources for material generated by the chatbot). Authors of articles written with the help of a chatbot are responsible for the material generated by the chatbot, including its accuracy. Note that plagiarism is “the practice of taking someone else's work or ideas and passing them off as one's own” (13), not merely the verbatim repetition of previously published text. It is the author’s responsibility to ensure that the content reflects the author's data and ideas and is not plagiarism, fabrication or falsification. Otherwise, it is potentially scientific misconduct to offer such material for publication, irrespective of how it was written. Similarly, authors must ensure that all quoted material is appropriately attributed, including full citations, and that the cited sources support the chatbot’s statements. Since a chatbot may be designed to omit sources that oppose viewpoints expressed in its output, it is the authors’ responsibility to find, review and include such counterviews in their articles. (Of course, such biases are also found in human authors.) Authors should identify the chatbot used and the specific prompt (query statement) used with the chatbot. They should specify what they have done to mitigate the risk of plagiarism, provide a balanced view, and ensure the accuracy of all their references.

WAME Recommendation 4: Editors and peer reviewers should specify, to authors and each other, any use of chatbots in the evaluation of the manuscript and the generation of reviews and correspondence. If they use chatbots in their communications with authors and each other, they should explain how they were used. Editors and reviewers are responsible for any content and citations generated by a chatbot. They should be aware that chatbots retain the prompts fed to them, including manuscript content, and that supplying an author's manuscript to a chatbot breaches the confidentiality of the submitted manuscript.

WAME Recommendation 5: Editors need appropriate tools to help them detect content generated or altered by AI. Such tools should be made available to editors regardless of their ability to pay for them, for the good of science and the public, to help ensure the integrity of healthcare information, and to reduce the risk of adverse health outcomes. Many medical journal editors use manuscript evaluation approaches that were not designed to deal with AI innovations and industries, including manipulated or plagiarized text and images and papermill-generated documents. Editors have already been at a disadvantage when trying to differentiate the legitimate from the fabricated, and chatbots take this challenge to a new level. Editors need access to tools that will help them evaluate content efficiently and accurately. This is of particular importance to editors of medical journals, where the adverse consequences of misinformation include potential harms to people.