A comprehensive evaluation of large language models in mining gene relations and pathway knowledge
Muhammad Azam, Yibo Chen, Micheal Olaolu Arowolo, Haowang Liu, Mihail Popescu, Dong Xu
Understanding complex biological pathways, including gene–gene interactions and gene regulatory networks, is critical for exploring disease mechanisms and drug development. Manual literature curation of biological pathways cannot keep up with the exponential growth of new discoveries in the literature. Large language models (LLMs) trained on extensive text corpora contain rich biological information, and they can be mined as a biological knowledge graph. This study assesses 21 LLMs, including both application programming interface (API)‐based models and open‐source models, on their capacity to retrieve biological knowledge. The evaluation focuses on predicting gene regulatory relations (activation, inhibition, and phosphorylation) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway components. Results indicated a significant disparity in model performance. The API‐based models GPT‐4 and Claude‐Pro showed superior performance, with F1 scores of 0.4448 and 0.4386, respectively, for gene regulatory relation prediction, and Jaccard similarity indices of 0.2778 and 0.2657, respectively, for KEGG pathway prediction. Open‐source models lagged behind their API‐based counterparts; among them, Falcon‐180b and llama2‐7b achieved the highest F1 scores for gene regulatory relations, at 0.2787 and 0.1923, respectively. For KEGG pathway recognition, Falcon‐180b and llama2‐7b achieved Jaccard similarity indices of 0.2237 and 0.2207, respectively. Our study suggests that LLMs are informative in gene network analysis and pathway mapping, but their effectiveness varies, necessitating careful model selection. This work also provides a case study and insight into using LLMs as knowledge graphs. Our code is publicly available on GitHub (Muh‐aza).
Keywords: biomedical text mining / gene–gene interaction / KEGG pathway / large language model
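The two evaluation metrics named in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' published evaluation code: it assumes gene regulatory relations are represented as (gene, gene, relation-type) triples scored by set overlap, and KEGG pathway predictions as gene sets scored by Jaccard similarity. All gene names and sets here are hypothetical examples.

```python
def precision_recall_f1(gold: set, pred: set) -> tuple[float, float, float]:
    """Precision, recall, and F1 of predicted items against a gold set."""
    tp = len(gold & pred)  # true positives: items present in both sets
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity index: |intersection| / |union|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Hypothetical gene regulatory relation triples (gene1, gene2, relation)
gold_relations = {("TP53", "MDM2", "inhibition"), ("CDK2", "RB1", "phosphorylation")}
pred_relations = {("TP53", "MDM2", "inhibition"), ("EGFR", "KRAS", "activation")}
p, r, f1 = precision_recall_f1(gold_relations, pred_relations)

# Hypothetical KEGG pathway membership as gene sets
gold_pathway = {"TP53", "MDM2", "CDK2", "RB1"}
pred_pathway = {"TP53", "CDK2", "EGFR", "RB1"}
sim = jaccard(gold_pathway, pred_pathway)
```

With these toy sets, one of two predicted triples is correct (F1 = 0.5), and the pathway gene sets share three of five distinct genes (Jaccard = 0.6); the paper reports these same two statistics averaged over its benchmark.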