The Use of Large Language Models and Their Association With Enhanced Impact in Biomedical Research and Beyond

Huzi Cheng , Wen Shi , Bin Sheng , Aaron Y. Lee , Josip Car , Varun Chaudhary , Atanas G. Atanasov , Nan Liu , Yue Qiu , Qingyu Chen , Tien Yin Wong , Yih-Chung Tham , Ying-Feng Zheng

MEDCOMM - Future Medicine ›› 2025, Vol. 4 ›› Issue (2) : e70019 DOI: 10.1002/mef2.70019
ORIGINAL ARTICLE


Abstract

The release of ChatGPT in 2022 has catalyzed the adoption of large language models (LLMs) across diverse writing domains, including academic writing. However, this technological shift has raised critical questions about how prevalent LLM usage is in academic writing and how it may influence the quality and impact of research articles. Here, we address these questions by analyzing preprint articles posted on arXiv, bioRxiv, and medRxiv between 2022 and 2024, employing a novel LLM usage detection tool. Our study reveals that LLMs have been widely adopted in biomedical and other forms of academic writing since late 2022. Notably, after examining the correlation, we find that LLM usage is associated with enhanced research-article impact, as measured by citation counts. Furthermore, we observe that LLMs influence specific content types in academic writing, including hypothesis formulation, conclusion summarization, description of phenomena, and suggestions for future work. Collectively, our findings underscore the potential benefits of LLMs in scientific communication, suggesting that they may not only streamline the writing process but also enhance the dissemination and impact of research findings across disciplines.

Keywords

academic writing / ChatGPT / large language models / LLM

Cite this article

Huzi Cheng, Wen Shi, Bin Sheng, Aaron Y. Lee, Josip Car, Varun Chaudhary, Atanas G. Atanasov, Nan Liu, Yue Qiu, Qingyu Chen, Tien Yin Wong, Yih-Chung Tham, Ying-Feng Zheng. The Use of Large Language Models and Their Association With Enhanced Impact in Biomedical Research and Beyond. MEDCOMM - Future Medicine, 2025, 4(2): e70019 DOI:10.1002/mef2.70019



RIGHTS & PERMISSIONS

2025 The Author(s). MedComm - Future Medicine published by John Wiley & Sons Australia, Ltd on behalf of Sichuan International Medical Exchange & Promotion Association (SCIMEA).
