Over the past decade, the global increase in cancer prevalence and cancer-related mortality has fueled extensive research to enhance the effectiveness of cancer treatments. Such efforts include the fabrication of lab-grown tissues and organs for transplantation, and the development of in vitro models for cancer drug testing and screening. Notably, three-dimensional (3D) tissue models offer advantages over two-dimensional cultures and have benefited from recent advancements in cutting-edge techniques like 3D printing, enabling the reconstruction of various tumor models in vitro. In this review, we focus on recent progress in in vitro 3D tumor models, with particular emphasis on the roles of 3D bioprinting and artificial intelligence. Furthermore, we provide future perspectives on employing bioprinting to develop tumor models that accurately mimic the complexity and heterogeneity of real tumor microenvironments.
The evolution of artificial intelligence (AI) in the pharmaceutical industry spans from its early applications in automating administrative tasks to its pivotal role in drug discovery, personalized medicine, and safety enhancement. AI contributes significantly to data analysis, real-time process monitoring, defect detection, predictive maintenance, and compliance assurance, thereby enhancing efficiency, accuracy, and regulatory adherence. This review assesses the transformative role of AI in revolutionizing quality assurance and validation across the pharmaceutical industry and highlights its contribution to advancing quality frameworks, core values, and smart manufacturing. Moreover, the role of AI in enhancing validation processes and the critical importance of data and algorithms are discussed. As AI continues to reshape the pharmaceutical industry, this review emphasizes the synergy between technological innovation and quality enhancement.
Parkinson’s disease (PD) is a neurological condition caused by the loss of dopamine-producing neurons in the substantia nigra. Diagnosing PD in its early stages is difficult, as its symptoms often resemble those of other neurological diseases. Therefore, identifying reliable biomarkers is important for discriminating PD from related conditions, monitoring disease progression, and evaluating responses to therapeutic interventions. PD biomarkers fall into the following classes: clinical, neuroimaging, biochemical and proteomic, and genetic. Ongoing research aims to discover the most effective PD biomarkers that could help clinicians identify PD risk and accelerate early diagnosis. Artificial intelligence (AI) methods, including deep learning and machine learning, have become increasingly significant in recent years due to their ability to evaluate and process large volumes of medical data with high accuracy. Furthermore, these methods have contributed significantly to the early diagnosis and effective treatment of various diseases, such as cancer and neurological conditions, including Alzheimer’s disease, PD, and multiple sclerosis. Given that PD affects a large population, the present study reviews the applications of AI approaches in the early diagnosis of PD and the latest advancements in the field of PD biomarkers. Promising results have been obtained using various AI algorithms, which help not only in identifying PD stages but also in supporting early diagnosis. However, the implementation of these techniques in clinical practice faces challenges, including data quality and variability, model interpretability, and the need for interdisciplinary collaboration.
Genetic feature discovery is essential for understanding complex diseases and traits. This comprehensive review provides an in-depth comparison of differential expression analysis methods and statistical hypothesis tests—such as Student’s t-test, the Chi-square test, analysis of variance, Empirical Bayes methods, and Significance Analysis of Microarrays—used in genetic feature marker discovery. Our analysis highlights the strengths and weaknesses of these approaches in terms of methodologies, applications, performance, and accuracy. While statistical tests provide straightforward interpretation, machine learning techniques offer superior capabilities for handling high-dimensional data and complex biological interactions. We conducted two mini-experiments: (i) identification of differentially expressed, upregulated, and downregulated genes using statistical tools (i.e., Student’s t-test and Welch’s t-test) under different conditions (normalization methods and p-value correction strategies) using the GSE31699 dataset from the NCBI Gene Expression Omnibus, and (ii) gene set enrichment analysis—covering Kyoto Encyclopedia of Genes and Genomes pathways and Gene Ontology terms such as biological process, cellular component, and molecular function—using the GSE30760 dataset with the DAVID 2021 tool. Furthermore, we discuss the potential of hybrid approaches combining statistical tests with machine learning and optimization techniques for enhanced feature discovery. Future work will focus on multi-omics data integration, the development of explainable AI methods, and scalable algorithms. This review aims to serve as a comprehensive guide for researchers involved in genetic marker identification, highlighting both statistical and computational perspectives on differential expression and gene set enrichment studies.
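To make the first mini-experiment concrete, the following is a minimal sketch, assuming a synthetic log-scale expression matrix, of a per-gene Welch's t-test between two conditions followed by Benjamini-Hochberg p-value correction. The sample sizes, spiked-in effect, and significance threshold are illustrative assumptions, not the settings used with GSE31699.

```python
# Sketch: per-gene Welch's t-test plus Benjamini-Hochberg FDR correction.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_genes = 500
control = rng.normal(8.0, 1.0, size=(n_genes, 6))   # 6 control samples (log scale)
treated = rng.normal(8.0, 1.0, size=(n_genes, 6))   # 6 treated samples
treated[:25] += 1.5                                  # spike in some true differential signal

# Welch's t-test (unequal variances) computed gene by gene.
t_stat, p_val = stats.ttest_ind(treated, control, axis=1, equal_var=False)

# Benjamini-Hochberg correction across all genes.
reject, p_adj, _, _ = multipletests(p_val, alpha=0.05, method="fdr_bh")

log_fc = treated.mean(axis=1) - control.mean(axis=1)  # log-scale fold change
up = np.sum(reject & (log_fc > 0))
down = np.sum(reject & (log_fc < 0))
print(f"significant genes: {reject.sum()} (upregulated: {up}, downregulated: {down})")
```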
Organizational artificial intelligence (AI) readiness in healthcare has gained significant attention, given the excitement about transforming healthcare in new and innovative ways. While many healthcare organizational leaders have expressed a strong desire to leverage AI to automate processes, improve productivity, and increase staff and patient satisfaction, most are still in the early stages of identifying, strategizing, and implementing the foundational elements that transform an organization from one that merely utilizes AI to one that embraces AI as a key partner in delivering healthcare. Given the complex nature of integrating technology tools and solutions like AI within clinical and operational workflows, this paper highlights the widespread interest in AI integration among healthcare leaders; introduces individual and organizational factors that affect the adoption of AI tools and technologies; and provides an overview of organizational AI readiness examples to assist healthcare organizations on their AI journey.
Predicting patient length of stay (LoS) is crucial for optimizing resource allocation and enhancing healthcare efficiency. However, achieving accurate LoS predictions remains a challenging and complex task. This study presents a non-disease-specific predictive model that integrates machine learning (ML) methods and Bayesian inference techniques to accurately predict hospital LoS using static patient admission data. While traditional statistical regression techniques have been widely used for LoS prediction within hospital settings, this research investigates the capabilities of ML and Bayesian inference algorithms in this context. By leveraging Bayesian inference techniques, our model captures complex relationships within the data and quantifies uncertainty, offering a more nuanced understanding of the outcomes. This methodological approach offers a more comprehensive and probabilistically grounded framework for LoS prediction, allowing more informed decision-making in resource allocation and patient management. Among the evaluated models, the extreme gradient boosting and support vector regression models demonstrated the highest efficiency, achieving mean squared logarithmic error (MSLE) values of 0.23 and 0.24, respectively. The Bayesian model also showed competitive performance, with an MSLE of 0.25. While it did not outperform the other models in terms of error metrics, the Bayesian model’s ability to provide an additional uncertainty output enhances its utility, offering valuable supplementary information for informed decision-making. This research highlights the potential of ML and Bayesian inference in predicting patient LoS, emphasizing their significance in effective resource allocation and patient care management within the healthcare sector.
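As an illustration of the two ideas emphasized above, the sketch below scores a length-of-stay regressor with mean squared logarithmic error (MSLE) and obtains a predictive standard deviation from a simple Bayesian linear model. The synthetic data and the choice of BayesianRidge are assumptions for illustration only and do not reproduce the study's actual pipeline or models.

```python
# Sketch: MSLE evaluation plus a Bayesian model's uncertainty output.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import BayesianRidge
from sklearn.metrics import mean_squared_log_error

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
y = np.clip(y - y.min() + 1, 1, None)          # make targets positive, like LoS in days
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = BayesianRidge().fit(X_train, y_train)
mean_pred, std_pred = model.predict(X_test, return_std=True)  # point estimate + uncertainty
mean_pred = np.clip(mean_pred, 1e-3, None)                    # MSLE requires non-negative values

print("MSLE:", mean_squared_log_error(y_test, mean_pred))
print("average predictive std:", std_pred.mean())
```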
Chemical reaction prediction is a vital application of artificial intelligence. While Transformer models are widely used for this task, they often overlook deeper-level semantic information. In addition, the traditional Transformer model suffers from a decline in prediction performance and generalizes poorly when faced with different representations of the same molecule. To address these challenges, we propose a dual encoder-based reaction prediction method tailored for multilevel organic chemistry. Our approach began with the introduction of a synergistic dual-encoder architecture: the atomic encoder focused on inter-atomic attention weights, while the molecular encoder employed a molecular maximum dimension reduction algorithm to identify key chemical features. We then performed multilevel feature fusion by combining the outputs of the atomic and molecular encoders. Finally, we applied an optimized contrastive loss to enhance the model’s robustness. The results indicated that this method outperformed existing models across all four datasets, significantly improving generalization performance and contributing to advancements in artificial intelligence-driven drug development and research.
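The following is a minimal PyTorch sketch of the general idea rather than the authors' exact architecture: two Transformer encoders whose pooled outputs are fused into one representation, plus an InfoNCE-style contrastive loss between two views of the same molecule (e.g., two different SMILES strings). The tokenization, pooling, dimensions, and loss form are illustrative assumptions.

```python
# Sketch: dual-encoder feature fusion with a simple contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, vocab_size=64, d_model=128, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.atomic_encoder = nn.TransformerEncoder(layer, nlayers)     # atom-level view
        self.molecular_encoder = nn.TransformerEncoder(layer, nlayers)  # molecule-level view
        self.fuse = nn.Linear(2 * d_model, d_model)                     # multilevel feature fusion

    def forward(self, tokens):
        x = self.embed(tokens)
        atom_feats = self.atomic_encoder(x).mean(dim=1)      # pooled atom-level summary
        mol_feats = self.molecular_encoder(x).mean(dim=1)    # pooled molecule-level summary
        return self.fuse(torch.cat([atom_feats, mol_feats], dim=-1))

def contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss pulling together two views of the same molecule."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

# Usage with two tokenized views of the same batch of molecules (random tokens here).
model = DualEncoder()
view_a = torch.randint(0, 64, (8, 20))   # batch of 8 sequences, length 20
view_b = torch.randint(0, 64, (8, 20))
loss = contrastive_loss(model(view_a), model(view_b))
loss.backward()
```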
With massive training data and sufficient computing resources, large language models (LLMs) have demonstrated impressive capabilities. These models can rapidly respond to questions in almost all domains and are capable of retrieving, synthesizing, and summarizing information. The capabilities demonstrated by LLMs can enhance daily life and foster innovation. Nonetheless, some professional domains demand not only response speed but also high response reliability. In the medical domain, for example, unreliable information provided by a model poses a great risk to subsequent diagnosis and treatment, especially when the language is not English. Domain-specific knowledge can be used to refine pre-trained LLMs and improve their performance on specialized tasks. In this study, we aimed to build an LLM for epilepsy, called EpilepsyLLM. We constructed an epilepsy knowledge dataset in Japanese for LLM fine-tuning; the dataset contained basic information on epilepsy, common treatment methods and drugs, and important notes on patients’ lives. Using the constructed dataset, we fine-tuned several different pre-trained models with supervised learning. In the evaluation process, we applied multiple metrics to measure the reliability of the LLMs’ output. The experimental results highlight that the fine-tuned EpilepsyLLM can provide more reliable and specialized epilepsy responses.
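For orientation, the following is a minimal sketch of what supervised fine-tuning of a pre-trained causal language model on domain question-answer pairs can look like. The base model ("distilgpt2"), the toy record, and all hyperparameters are illustrative assumptions; the study fine-tuned other pre-trained models on its Japanese epilepsy knowledge dataset.

```python
# Sketch: supervised fine-tuning of a small causal LM on domain Q&A text.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "distilgpt2"                                 # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token           # GPT-style models lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Each record concatenates a domain question with its reference answer (toy example).
records = [{"text": "Question: What is epilepsy?\nAnswer: A neurological disorder ..."}]
dataset = Dataset.from_list(records).map(
    lambda b: tokenizer(b["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="epilepsy-llm", num_train_epochs=3,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```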
Anxiety disorders (ADs) rank among the most prevalent mental health problems, especially in older people. The high risk and prevalence of ADs underscore the need for effective mental health care. Artificial intelligence has gained popularity in the diagnosis and prediction of medical conditions and diseases, including mental health problems. In this study, we developed an adapted bagging ensemble machine learning system for the diagnosis and prediction of ADs that addresses the challenges posed by the extremely imbalanced data from the Trinity-Ulster-Department of Agriculture study. Statistical techniques were used to identify the risk factors for ADs, and feature selection and feature engineering were conducted based on the analysis of biomarker risk factors. Five machine learning methods were used in the developed system to build weak-learner submodels, yielding promising prediction results, and several risk factors were identified. These findings will benefit the early prediction of ADs in our future studies.
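The sketch below illustrates the bagging idea for a heavily imbalanced binary outcome in the spirit of the system described above: each weak learner is trained on a class-balanced bootstrap sample, and the ensemble averages the predicted probabilities. The synthetic data, the two base learners, and the resampling scheme are illustrative assumptions, not the study's actual five methods.

```python
# Sketch: bagging with class-balanced bootstrap samples for imbalanced data.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

def balanced_bootstrap(X, y, rng):
    """Take all minority cases plus an equal-sized bootstrap draw of majority cases."""
    minority = np.where(y == 1)[0]
    majority = rng.choice(np.where(y == 0)[0], size=minority.size, replace=True)
    idx = np.concatenate([minority, majority])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
base_learners = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=4)]
ensemble = []
for i in range(5):                                   # five weak-learner submodels
    Xb, yb = balanced_bootstrap(X, y, rng)
    ensemble.append(clone(base_learners[i % len(base_learners)]).fit(Xb, yb))

proba = np.mean([m.predict_proba(X)[:, 1] for m in ensemble], axis=0)
print("ensemble AUC:", roc_auc_score(y, proba))
```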
The rise of chronic diseases and pandemics, such as COVID-19, has emphasized the need for effective patient data processing while ensuring privacy through anonymization and de-identification of protected health information. Anonymized data facilitates research without compromising patient confidentiality. This paper introduces expert small artificial intelligence (AI) models developed using the large language model (LLM)-in-the-loop methodology to meet the demand for domain-specific de-identification named entity recognition (NER) models. These models overcome the privacy risks associated with LLMs accessed through application programming interfaces by eliminating the need to transmit or store sensitive data. More importantly, they consistently outperform LLMs in de-identification tasks, offering superior performance and reliability. Our de-identification NER models, developed in eight languages—English, German, Italian, French, Romanian, Turkish, Spanish, and Arabic—achieved average F1-macro scores of 0.931, 0.960, 0.955, 0.937, 0.930, 0.963, 0.957, and 0.922, respectively. These results establish our de-identification NER models as the most accurate healthcare anonymization solutions, surpassing existing small models and even general-purpose LLMs, such as GPT-4o. While Part I of this series introduced the LLM-in-the-loop methodology for biomedical document translation, this second paper showcases its success in developing cost-effective expert small NER models for de-identification tasks. Our findings lay the groundwork for future healthcare AI innovations, including biomedical entity and relation extraction, demonstrating the value of specialized models for domain-specific challenges.
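As a small illustration of the metric reported above, the sketch below computes a macro-averaged F1 score over token-level de-identification tags. The toy gold and predicted labels are made up, and the scoring is token level for simplicity; entity-level scoring over held-out annotated documents would differ.

```python
# Sketch: macro-averaged F1 over de-identification NER tags (toy token-level example).
from sklearn.metrics import f1_score

y_true = ["O", "B-NAME", "I-NAME", "O", "B-DATE", "O", "B-ID"]   # gold tags per token
y_pred = ["O", "B-NAME", "I-NAME", "O", "B-DATE", "O", "O"]      # predicted tags per token

print("F1-macro:", f1_score(y_true, y_pred, average="macro", zero_division=0))
```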
Global healthcare expenditures continue to rise, posing substantial economic challenges, particularly for low- and middle-income countries (LMICs), where resource constraints intensify the impact. Accurate forecasting, efficient resource allocation, and equitable policy development are essential to address these growing pressures. This study presents a hybrid analytical framework that integrates generative artificial intelligence (AI) with traditional econometric and machine learning models to analyze and predict trends of healthcare expenditure. Utilizing data from the World Bank and World Health Organization, we applied generative adversarial networks, hierarchical clustering, support vector machines, and autoregressive integrated moving average models to uncover spending patterns, simulate policy scenarios, and tackle disparities in health investment. Generative AI played a pivotal role by augmenting sparse and incomplete datasets, particularly from underrepresented LMICs, identifying anomalies, and generating realistic synthetic data to support robust forecasting. This enabled the development of more inclusive, equity-focused health resource planning tools. The results demonstrate improved forecasting accuracy and offer deeper insights into regional and income-based differences in expenditure trends. By combining traditional machine learning with cutting-edge generative models, this study advances a scalable, data-driven approach to support global health decision-making. Ultimately, generative AI is highlighted as a transformative enabler of equitable, informed strategies in the management of global healthcare expenditures.
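To illustrate the time-series component of the hybrid framework described above, the sketch below fits an autoregressive integrated moving average (ARIMA) model to a health-expenditure series and forecasts the next five periods. The series is synthetic and the ARIMA order is an illustrative assumption; it does not reproduce the study's data or model selection.

```python
# Sketch: ARIMA forecasting of an annual health-expenditure series.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic per-capita expenditure over 23 periods (illustrative values only).
spending = pd.Series(100 + np.cumsum(rng.normal(3.0, 2.0, 23)))

fitted = ARIMA(spending, order=(1, 1, 1)).fit()   # order chosen for illustration
forecast = fitted.forecast(steps=5)               # expenditure for the next five periods
print(forecast)
```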
Recent advancements in artificial intelligence (AI) are reshaping core functions within the healthcare and biopharmaceutical industries, particularly in diagnostics, personalized care, and drug development. However, the success of these innovations hinges on how well institutions manage their implementation. This systematic review investigates how innovation management influences AI adoption in healthcare and biopharma, highlighting both progress and persistent challenges. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, this review was conducted using literature sourced from five major databases (PubMed, IEEE Xplore, Scopus, Web of Science, and Embase), focusing on peer-reviewed studies published between 2015 and 2024. A total of 82 studies were included, comprising 42 quantitative, 30 qualitative, and 10 mixed-methods studies. The population, intervention, comparison, and outcome (PICO) framework guided study selection, while quality was assessed using the Joanna Briggs Institute checklist and the Cochrane Risk of Bias 2.0 tool. Findings reveal that AI systems enable earlier disease detection, streamline patient triage, and improve operational workflows. In biopharma, companies such as Moderna have shortened vaccine development timelines by integrating AI into molecular design. However, significant roadblocks remain, particularly regarding data privacy, infrastructure costs, and insufficient AI literacy among healthcare providers, especially in low- and middle-income countries. These barriers underscore the need for proactive innovation management approaches. To promote sustainable and ethical AI integration, this study recommends the development of governance frameworks, targeted workforce training, and increased interdisciplinary collaboration. As AI continues to evolve, managing its adoption thoughtfully will be essential to balancing technological potential with clinical realities and patient-centered care.