Suspected undeclared use of generative artificial intelligence
Alex Glynn
In a recent article in Intelligent Pharmacy, a portion of the text appears to have been generated by a generative artificial intelligence (AI) system. No use of AI is declared in the article. If AI was indeed used, the article therefore violates the journal's policy on the use and declaration of generative AI.
Keywords: Generative artificial intelligence; Transparency; Accountability
References
[1] Verma S, Tiwari RK, Singh L. Integrating technology and trust: trailblazing role of AI in reframing pharmaceutical digital outreach. Intell Pharm. Published online January 2024. https://doi.org/10.1016/j.ipha.2024.01.005.
[2] Maiberg E. Scientific journals are publishing papers with AI-generated text. 404 Media. Published March 18, 2024. https://www.404media.co/scientific-journals-are-publishing-papers-with-ai-generated-text/. Accessed March 22, 2024.
[3] OpenAI, Achiam J, Adler S, et al. GPT-4 Technical Report. arXiv:2303.08774. https://doi.org/10.48550/arXiv.2303.08774.
[4] Author Instructions for Preparation and Submission of an Article to Intelligent Pharmacy. KeAi Publishing; 2024. https://www.keaipublishing.com/en/journals/intelligent-pharmacy/guide-for-authors/. Accessed March 25, 2024.
[5] Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. International Committee of Medical Journal Editors. Published online January 2024. https://www.icmje.org/icmjerecommendations.pdf. Accessed March 19, 2024.
[6] Zielinski C, Winker MA, Aggarwal R, et al. Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. World Association of Medical Editors. Published May 31, 2023. https://wame.org/page3.php?id=106.
[7] Authorship and AI Tools. Committee on Publication Ethics. Published February 13, 2023. https://publicationethics.org/cope-position-statements/ai-author. Accessed March 21, 2024.
[8] Submission and Peer Review Policies. IEEE Author Center; 2024. https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/guidelines-and-policies/submission-and-peer-review-policies/. Accessed March 21, 2024.
[9] The Use of Generative AI and AI-Assisted Technologies in Writing for Elsevier. Elsevier; 2024. https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier. Accessed March 25, 2024.
[10] Science Journals: Editorial Policies. 2024. https://www.science.org/content/page/science-journals-editorial-policies. Accessed March 21, 2024.
[11] Defining Authorship in Your Research Paper. Taylor and Francis; 2024. https://authorservices.taylorandfrancis.com/editorial-policies/defining-authorship-research-paper. Accessed March 21, 2024.
[12] Artificial Intelligence (AI). Springer; 2023. https://www.springer.com/us/editorial-policies/artificial-intelligence-ai-/25428500. Accessed March 21, 2024.
[13] Best Practice Guidelines on Research Integrity and Publishing Ethics. Wiley Author Services. Published February 28, 2023. https://authorservices.wiley.com/ethics-guidelines/index.html.
[14] Athaluri SA, Manthena SV, Kesapragada VSRKM, Yarlagadda V, Dave T, Duddumpudi RTS. Exploring the boundaries of reality: investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Cureus. Published online April 11, 2023.
[15] Bhattacharyya M, Miller VM, Bhattacharyya D, Miller LE. High rates of fabricated and inaccurate references in ChatGPT-generated medical content. Cureus. Published online May 19, 2023.
[16] Chen A, Chen DO. Accuracy of chatbots in citing journal articles. JAMA Netw Open. 2023;6(8):e2327647.
[17] Gravel J, D’Amours-Gravel M, Osmanlliu E. Learning to fake it: limited responses and fabricated references provided by ChatGPT for medical questions. Mayo Clin Proc Digit Health. 2023;1(3):226–234.
[18] Hueber AJ, Kleyer A. Quality of citation data using the natural language processing tool ChatGPT in rheumatology: creation of false references. RMD Open. 2023;9(2):e003248.
[19] Moskatel LS, Zhang N. Comparative prevalence and characteristics of fabricated citations in large language models in headache medicine. Headache. 2024;64(1):93–95.
[20] Wagner MW, Ertl-Wagner BB. Accuracy of information and references using ChatGPT-3 for retrieval of clinical radiological information. Can Assoc Radiol J. 2024;75(1):69–73.
[21] Hosseini M, Rasmussen LM, Resnik DB. Using AI to write scholarly publications. Account Res. Published online 2023:1–9.
[22] Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312(7023):71–72.
[23] von Eschenbach WJ. Transparency and the black box problem: why we do not trust AI. Philos Technol. 2021;34(4):1607–1622.