Editor's Note
Leading medical journals vary significantly in their guidance on the use of artificial intelligence (AI) in medical research, according to an analysis published December 3 in JAMA Network Open.
The study categorized journals' attitudes toward AI-assisted peer review into three groups: prohibition, limited use with conditions, and lack of explicit guidance. Overall, 78% of the journals analyzed in 2024 provided explicit guidance on AI use in peer review, the researchers write. Of those, 59% explicitly prohibited AI use, while 41% allowed limited use under conditions such as maintaining confidentiality and respecting authorship rights.
As for specific restrictions, 91% of journals with AI guidance prohibited uploading manuscript-related content to AI platforms, while 32% required reviewers to disclose AI use in their reports.
The analysis focused largely on chatbots and large language models, which were explicitly referenced by 47% and 27% of journals, respectively. Publishers such as Wiley and Springer Nature favored restricted AI use, while Elsevier and Cell Press prohibited it outright. Internationally, journals based in the US or Europe were less likely to permit limited AI use than those in other regions.
Confidentiality was the primary reason for prohibiting or limiting AI use. Other reasons included bias, misuse, and lack of reviewer expertise in managing AI tools.