Large Language Models in Integrative Medicine: Progress, Challenges, and Opportunities

Hiu Fung Yip, Zeming Li, Lu Zhang, Aiping Lyu

Journal of Evidence-Based Medicine, 2025, Vol. 18, Issue 2: e70031. DOI: 10.1111/jebm.70031

REVIEW

Abstract

Integrating Traditional Chinese Medicine (TCM) and modern medicine faces significant barriers, including the absence of unified frameworks and standardized diagnostic criteria. While Large Language Models (LLMs) in medicine hold transformative potential to bridge these gaps, their application in integrative medicine remains underexplored and methodologically fragmented. This review systematically examines the development, deployment, and challenges of LLMs in harmonizing modern medicine and TCM practices, and identifies actionable strategies to advance this emerging field. First, we summarize existing LLMs in the general domain, modern medicine, and TCM in terms of their model architectures, parameter counts, and domain-specific training data. We then highlight, through benchmark experiments, the limitations of existing LLMs on integrative medicine tasks, as well as the unique applications of LLMs in integrative medicine. Finally, we discuss the challenges that arise during development and propose possible mitigation strategies. By synthesizing technical insights with practical clinical considerations, this review provides a roadmap for leveraging LLMs to bridge TCM's empirical wisdom with modern medical systems. These AI-driven synergies could redefine personalized care, optimize therapeutic outcomes, and establish new standards for holistic healthcare innovation.
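To make the benchmarking approach mentioned above concrete, the sketch below shows one common way such experiments are run: scoring an LLM's exact-match accuracy on multiple-choice medical QA items. This is a minimal illustration, not the authors' evaluation code; the sample item, the `query_model` stub, and the letter-extraction heuristic are all assumptions standing in for a real model API and a real integrative-medicine benchmark.

```python
# Minimal sketch of benchmark evaluation for an LLM on multiple-choice
# medical QA. All names and data are illustrative; query_model is a
# placeholder for any real LLM API call.

from collections import Counter

# Hypothetical integrative-medicine QA items in a MedQA/MedMCQA-like format.
ITEMS = [
    {
        "question": "Which herb-derived compound is an established antimalarial?",
        "options": {"A": "Artemisinin", "B": "Ephedrine",
                    "C": "Berberine", "D": "Curcumin"},
        "answer": "A",
    },
    # ... more items ...
]


def query_model(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., an HTTP request to a hosted
    model). It trivially returns 'A' here so the script runs end to end."""
    return "A"


def evaluate(items) -> float:
    """Return exact-match accuracy over the benchmark items."""
    tally = Counter()
    for item in items:
        options = "\n".join(f"{k}. {v}" for k, v in item["options"].items())
        prompt = (f"{item['question']}\n{options}\n"
                  "Answer with the single letter of the best option.")
        # Keep only the first character of the reply as the predicted letter.
        prediction = query_model(prompt).strip().upper()[:1]
        tally["correct" if prediction == item["answer"] else "wrong"] += 1
    return tally["correct"] / len(items)


if __name__ == "__main__":
    print(f"Accuracy: {evaluate(ITEMS):.2%}")
```

In practice, the same loop would be run over each model under comparison (general-domain, modern-medicine, and TCM-specialized), with accuracy reported per task to expose where integrative-medicine performance lags.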

Keywords

artificial intelligence / generative artificial intelligence / integrative medicine / Large Language Model / precision medicine

Cite this article

Hiu Fung Yip, Zeming Li, Lu Zhang, Aiping Lyu. Large Language Models in Integrative Medicine: Progress, Challenges, and Opportunities. Journal of Evidence-Based Medicine, 2025, 18(2): e70031. DOI: 10.1111/jebm.70031



RIGHTS & PERMISSIONS

© 2025 The Author(s). Journal of Evidence-Based Medicine published by Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
