Large Language Models for Transforming Healthcare: A Perspective on DeepSeek-R1

Jinsong Zhou , Yuhan Cheng , Sixu He , Yingcong Chen , Hao Chen

MEDCOMM - Future Medicine ›› 2025, Vol. 4 ›› Issue (2) : e70021

DOI: 10.1002/mef2.70021
PERSPECTIVE


Abstract

DeepSeek-R1 is an open-source large language model (LLM) with advanced reasoning capabilities. It has attracted significant attention for its advantages, including low cost and visualized reasoning steps. Recent reasoning LLMs such as ChatGPT-o1 have demonstrated considerable reasoning potential, but the closed-source nature of existing models limits customization and transparency, presenting substantial barriers to their integration into healthcare systems. This gap motivates the exploration of DeepSeek-R1 in the medical field. We therefore comprehensively review the transformative potential, applications, and challenges of DeepSeek-R1 in healthcare. Specifically, we investigate how DeepSeek-R1 can enhance clinical decision support, patient engagement, and medical education across clinical, outpatient, and medical-research settings. Furthermore, we critically evaluate challenges including modality limitations (text-only input), hallucination risks, and ethical issues, particularly those related to patient autonomy and safety-focused recommendations. By assessing DeepSeek-R1's integration potential, this perspective highlights promising opportunities for advancing medical AI while emphasizing the improvements needed to maximize clinical reliability and ethical compliance. This paper provides guidance for future research directions and elucidates practical application scenarios for DeepSeek-R1's successful integration into healthcare settings.

Keywords

AI for healthcare / AI interpretability / data privacy / DeepSeek-R1 / diagnosis and treatment / ethics / hallucination / medical education / patient engagement and adherence

Cite this article

Jinsong Zhou, Yuhan Cheng, Sixu He, Yingcong Chen, Hao Chen. Large Language Models for Transforming Healthcare: A Perspective on DeepSeek-R1. MEDCOMM - Future Medicine, 2025, 4(2): e70021. DOI: 10.1002/mef2.70021



RIGHTS & PERMISSIONS

© 2025 The Author(s). MedComm - Future Medicine published by John Wiley & Sons Australia, Ltd on behalf of Sichuan International Medical Exchange & Promotion Association (SCIMEA).
