Letter to the Editor
Comment in: http://www.jmir.org/2023/1/e50844/
Májovský and colleagues’ concern regarding OpenAI’s ChatGPT is valid. Seven months ago, enthusiasm over the release of ChatGPT was quickly tempered by warnings of perpetuating biases and spreading misinformation. Artificial intelligence (AI) tools threaten to amplify preexisting issues in academic publishing, particularly the scientific peer review process. This outdated system is overwhelmed by the volume of new journals and papers produced by a growing global academic community [ ], a problem that AI is fit to accentuate. Here are three areas in need of modification:
The scientific peer review process
- The body of qualified reviewers is drowning in a rising sea of writers that includes top researchers, undergraduate students, and academics at all levels in between [ ].
- Peer review lacks formal standards or guidelines, as well as training, particularly in statistics [ ], creating a restricted and top-heavy pool of qualified reviewers [ ].
- Reviewers are increasingly declining requests to review because the notion of reviewing as a professional obligation fails to sufficiently recognize or reward the burden it imposes [ ].
- Peer review fraud, which involves conflicts of interest, undue influence, and false identities, has emerged in response to the unmet demand for reviews.
- The dearth of reviewers is compounded by the proliferation of plagiarized, fraudulent, and otherwise low-quality work [ ].
- Pressure to “publish or perish” has led to high-profile cases of academic fraud and likewise feeds “paper mills” that churn out questionable research for academics who are desperate to progress in their careers [ ].
- The proliferation of “for-profit” journals subverts respectful publishing through financialization that exploits and alienates scientists [ ].
- Májovský et al [ ] demonstrated the effectiveness of ChatGPT as an open access ghostwriter, capable of fabricating a complete and convincing article in just one hour.
- In one study, reviewers identified only 63% of ChatGPT-generated abstracts as fabricated [ ]. In response to such findings, Science is updating its license and editorial policies to prohibit AI-generated text, figures, or graphics [ ].
- Such staunch resistance is misguided; AI may not be an author per se, but its utility in all stages of research, from generating topics and compiling information to writing text, cannot be ignored. If an AI-generated, human-reviewed paper communicates quality research, why should it be disallowed? Moreover, how would we tell?
- Although AI-generated text detection software can help [ ], detection bypass tools are similarly available online.
AI makes the need for high-quality peer reviews greater and more pressing than ever before. The cornerstone of scientific integrity is on the path to obsolescence without a viable successor. As academic pursuits become increasingly inseparable from industry, conceptualizing peer review as a duty to science will no longer suffice. Respecting and empowering the peer review system will involve treating reviewers as expert consultants, recognizing reviews as productive work, and creating system-wide guidelines that integrate (rather than resist) AI technologies. This problem, emerging from an imperative for success, demands a peer review system and publication process with more teeth than trust, a commodity that served us well in the past but must now be reinvented.
Conflicts of Interest
- Májovský M, Černý M, Kasal M, Komarc M, Netuka D. Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora’s box has been opened. J Med Internet Res 2023 May 31;25:e46924 [https://www.jmir.org/2023/1/e46924/] [CrossRef] [Medline]
- Dance A. Stop the peer-review treadmill. I want to get off. Nature 2023 Feb 13;614(7948):581-583 [CrossRef] [Medline]
- Ballester PL. Open science and software assistance: commentary on “Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora’s box has been opened”. J Med Internet Res 2023 May 31;25:e49323 [https://www.jmir.org/2023/1/e49323/] [CrossRef] [Medline]
- Else H, Van Noorden R. The fight against fake-paper factories that churn out sham science. Nature 2021 Mar 23;591(7851):516-519 [CrossRef] [Medline]
- Thorp HH. ChatGPT is fun, but not an author. Science 2023 Jan 27;379(6630):313 [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
Edited by T Leung. This is a non–peer-reviewed article. Submitted 06.07.23; accepted 12.08.23; published 31.08.23.
Copyright
©Nicholas Liu, Amy Brown. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 31.08.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.