Artificial intelligence (AI) is penetrating higher medical education; however, its adoption remains low. A PRISMA-S search of the Web of Science database from 2020 to 2024, utilizing the search terms “artificial intelligence,” “medicine,” “education,” and “ethics,” reveals this trend. Four key areas of AI application in medical education are examined for their potential benefits: educational support (such as personalized distance education), radiology (diagnostics), virtual reality (VR; visualization and simulations), and generative text engines (GenText), such as ChatGPT (from the production of notes to syllabus design). However, significant ethical risks accompany AI adoption, and specific concerns are linked to each of these four areas. While AI is recognized as an important support tool in medical education, its slow integration hampers learning and diminishes student motivation, as evidenced by the challenges in implementing VR. In radiology, data-intensive training is hindered by poor connectivity, particularly affecting learners in developing countries. Ethical risks, such as bias in datasets (whether intentional or unintentional), need to be highlighted within educational programs. Students must be informed of the possible motivations behind the introduction of social and political bias into datasets, as well as the profit motive. Finally, the ethical risks accompanying the use of GenText are discussed, ranging from student reliance on instant text generation for assignments, which can hinder the development of critical thinking skills, to the potential danger of relying on AI-generated learning and treatment plans without sufficient human moderation.
Coronavirus disease 19 (COVID-19), caused by the severe acute respiratory syndrome-coronavirus-2 virus, is commonly diagnosed through imaging techniques such as computed tomography (CT) scans, which reveal characteristic lung lesions. In this study, we propose a computer-aided diagnosis (CAD) system to assist in the early detection of COVID-19 from CT lung slices, leveraging advanced machine-learning algorithms for precise and efficient analysis. To achieve this, we developed a CAD system that diagnoses COVID-19 from CT lung slices. An adaptive Wiener filter was applied to remove noise from the CT images. The chest tissues were then segmented using an optimal thresholding method to extract regions of interest, which represent the COVID-19 lesions under investigation. Feature vectors extracted from these regions were divided into training and testing sets with an 80/20 ratio. A wrapper-based flower pollination algorithm was employed alongside the k-nearest neighbor classifier to select the optimal feature set. These selected features were subsequently used to train a support vector machine (SVM) classifier. With feature selection, the SVM achieved an accuracy of 91.30% on a real-time dataset, outperforming seven other machine learning classifiers (radial basis function-SVM, k-nearest neighbor, linear discriminant analysis, random forest, naïve Bayes, AdaBoost, extreme gradient boosting) and four deep learning classifiers (convolutional neural network, recurrent neural network, long short-term memory, bidirectional long short-term memory). For the publicly available COVID-19 CT dataset, an accuracy of 88.18% was achieved. In conclusion, our COVID-19 CAD system improves diagnostic accuracy, with future work aimed at enhancing efficiency and expanding to variant detection and severity assessment.
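The processing chain described above can be sketched in a few lines of Python. The snippet below is a minimal illustration rather than the authors' implementation: it uses Otsu's method as a stand-in for the optimal thresholding step, substitutes scikit-learn's forward sequential feature selection (scored with a k-NN classifier) for the wrapper-based flower pollination algorithm, and assumes that feature vectors X with labels y have already been extracted from the segmented slices.

```python
# Minimal sketch of the CAD pipeline: denoise -> segment -> select features -> classify.
import numpy as np
from scipy.signal import wiener
from skimage.filters import threshold_otsu
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def preprocess_slice(ct_slice):
    """Adaptive Wiener denoising followed by Otsu thresholding (stand-in for optimal thresholding).
    Applied to each CT slice before lesion features are extracted."""
    denoised = wiener(ct_slice.astype(float), mysize=(5, 5))
    mask = denoised > threshold_otsu(denoised)
    return denoised * mask  # keep only candidate lesion tissue

# X: placeholder feature matrix extracted from segmented regions; y: COVID-19 / non-COVID labels.
X, y = np.random.rand(200, 40), np.random.randint(0, 2, 200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Wrapper feature selection; the flower pollination algorithm with a k-NN fitness is
# approximated here by forward sequential selection scored with a k-NN classifier.
selector = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=5),
                                     n_features_to_select=10, direction="forward")
selector.fit(X_train, y_train)

svm = SVC(kernel="linear").fit(selector.transform(X_train), y_train)
acc = accuracy_score(y_test, svm.predict(selector.transform(X_test)))
print(f"SVM accuracy with selected features: {acc:.3f}")
```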
Developing a reliable rapid screening protocol for highly infectious diseases like COVID-19 is of paramount interest since it facilitates the isolation of infected patients from the rest of the population. The reverse-transcription polymerase chain reaction (RT-PCR) test is presently the most widely accepted gold-standard test to detect COVID-19. In this method, the viral RNA is converted into complementary DNA by reverse transcription so that it can be copied. A fluorescent dye is attached to the genetic material, which is then copied billions of times by the polymerase chain reaction; the resulting enhanced fluorescence indicates the presence of viral genetic material. These tests are time-consuming and have a significant false-negative rate, i.e., a person with COVID-19 might be categorized as not having the virus. Large-scale RT-PCR testing has its own share of problems, such as logistics, availability and affordability in underdeveloped nations, and reliability of the test results. Machine learning algorithms can act as a cheaper supplementary/alternative diagnostic tool for the testing process. In the current study, using publicly available chest X-ray image datasets, different convolutional neural network (CNN)-based models were developed for efficient identification of COVID-19 infected patients, and their efficacies were compared. Key innovations in training the CNNs are discussed. Our results indicate that EfficientNet, SeResNext, and ResNet are best at classifying normal, pneumonia, and COVID-19 cases, respectively. The ResNet architecture with transfer learning performed best at detecting COVID-19, with an accuracy of 94%, a rate far superior to that of the RT-PCR test, which is typically in the range of 70–80%. This is particularly attractive as an additional noninvasive protocol since such technology-augmented detection is likely to help in reducing the psychological refractory period due to COVID-19 infections. Toward the healthy lung initiative in the post-COVID-19 era, we propose close coupling of the present diagnostic protocols with digital approaches to ensure more reliable personal care within the ambit of large-scale pandemic control mechanisms. Such integration with emerging technological tools can create a benchmark for the first line of defense against future global pandemics.
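As an illustration of the transfer-learning setup described above (not the authors' code), the sketch below fine-tunes an ImageNet-pretrained ResNet-50 head for three-way chest X-ray classification; the data loader for the X-ray dataset is assumed.

```python
# Minimal transfer-learning sketch: fine-tune a pretrained ResNet-50 head for
# three-way chest X-ray classification (normal / pneumonia / COVID-19).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # normal, pneumonia, COVID-19

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_one_epoch(loader):
    """loader is assumed to yield (images, labels) batches from the chest X-ray dataset."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```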
Spinal diseases are among the most prevalent health issues in modern society, significantly impacting patients’ quality of life. Diagnosing conditions such as disc herniation and spinal deformity requires advanced medical imaging techniques, including X-rays, magnetic resonance imaging (MRI), computed tomography, and nuclear magnetic resonance. Spine MRI is particularly crucial due to its ability to provide high-resolution images of soft tissues, essential for accurate diagnosis. However, the manual segmentation of spine MRI images is labor-intensive and inadequate for large-scale quantitative analysis. Thus, developing automated spinal MRI segmentation methods is critical to alleviating doctors’ workload and enhancing diagnostic efficiency. In this study, we propose a novel asymmetric U-Net architecture designed to improve the precision of reconstructing complex structures and details by increasing the depth of the upsampling side. The model incorporates adjacent-scale skip connections to control parameters while maintaining high segmentation accuracy. In addition, residual connections on the upsampling side prevent gradient vanishing, thereby enhancing the network’s feature learning and representation capabilities. Experimental results indicate that this method significantly reduces training time and increases model accuracy compared to traditional approaches, marking a substantial advancement in automated spinal MRI segmentation. This innovative approach holds promise for improving clinical outcomes and optimizing the workflow in medical imaging departments.
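A minimal PyTorch sketch of the two decoder-side ideas mentioned above, adjacent-scale skip connections and residual connections on the upsampling path, is given below; it is illustrative only and does not reproduce the authors' full asymmetric U-Net.

```python
# Illustrative decoder block (not the authors' code): upsample, fuse the adjacent-scale
# encoder feature, and add a residual path to ease gradient flow.
import torch
import torch.nn as nn

class ResidualUpBlock(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, skip):
        x = self.up(x)                          # upsample to the adjacent (next-finer) scale
        fused = torch.cat([x, skip], dim=1)     # adjacent-scale skip connection
        return self.act(self.conv(fused) + x)   # residual connection on the upsampling side

block = ResidualUpBlock(in_ch=128, skip_ch=64, out_ch=64)
out = block(torch.randn(1, 128, 32, 32), torch.randn(1, 64, 64, 64))  # -> (1, 64, 64, 64)
```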
Magnetic resonance imaging (MRI) is critical in the diagnosis of neurodegenerative diseases, enabling the detection of brain lesions. Recent research has examined metallic nanoparticles (NPs) as MRI contrast agents (CAs) that can enhance lesion visibility by altering relaxation times. This study investigates the effects of metal oxide NPs on MRI relaxation times and brain lesion signals and proposes an algorithm for automated relaxation time determination using these NPs. The utilized NPs were synthesized using the sol‒gel method and characterized using Fourier-transform infrared spectroscopy and X-ray diffraction. MRI scans were performed on a phantom infused with varying concentrations of each metal oxide NP to assess changes in pixel signal intensities and relaxation rates. Our analysis involved segmenting the MRI images to focus on regions with different NP concentrations. The algorithm computed the longitudinal relaxation time for each region, revealing that Fe2O3 NPs exhibited the most substantial effect on signal intensity and relaxation time. The results indicated a high correlation (r = 0.9977), demonstrating strong agreement and confirming the reliability of our method. Our findings suggest that metallic oxide NPs, particularly Fe2O3, can considerably alter magnetization and act as effective negative CAs in MRI. These capabilities can improve the monitoring and treatment efficacy of neurodegenerative diseases. Our method for quantifying longitudinal relaxation times can potentially enhance routine clinical MRI assessments, offering a promising tool for future clinical applications.
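The per-region relaxation-time computation can be illustrated with a simple curve fit. The sketch below assumes a saturation-recovery signal model, S(TR) = S0(1 − exp(−TR/T1)); the acquisition protocol and signal model actually used in the study may differ, and the example data are synthetic.

```python
# Sketch of per-region T1 estimation: fit a saturation-recovery model to the mean
# pixel intensity of a segmented region acquired at several repetition times (TR).
import numpy as np
from scipy.optimize import curve_fit

def saturation_recovery(tr, s0, t1):
    return s0 * (1.0 - np.exp(-tr / t1))

def fit_t1(tr_values, region_means):
    """tr_values: repetition times (ms); region_means: mean pixel intensity per TR."""
    (s0, t1), _ = curve_fit(saturation_recovery, tr_values, region_means,
                            p0=(region_means.max(), 500.0), maxfev=5000)
    return t1

# Hypothetical example: one phantom compartment imaged at several repetition times.
tr = np.array([100., 300., 600., 1200., 2400., 4800.])
signal = saturation_recovery(tr, s0=1000.0, t1=800.0) + np.random.normal(0, 5, tr.size)
print(f"Estimated T1: {fit_t1(tr, signal):.1f} ms")
```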
Automated image analysis and classification have increasingly advanced in recent decades owing to machine learning and computer vision. In particular, deep learning (DL) architectures have become popular in resource-limited and labor-restricted environments such as the health-care sector. The transformer architecture, a DL method built on the self-attention mechanism, excels in natural language processing; however, its application to image-based diagnosis in the health-care sector remains limited. Herein, the feasibility, bottlenecks, and performance of transformers in magnetic resonance imaging (MRI)-based brain tumor classification were investigated. To this end, a vision transformer (ViT) model was trained and tested using the popular Brain Tumor Segmentation (BraTS) 2015 dataset for glioma classification. Owing to limited data availability, domain adaptation techniques were used to pretrain the ViT model, and the BraTS 2015 dataset was used for its fine-tuning. With the model trained for only 100 epochs, the confusion matrix for the two-class problem of tumor and nontumor classification showed an overall classification accuracy of 81.8%. In conclusion, although convolutional neural networks are traditionally used for DL-based medical image classification, ViTs, owing to their attention mechanism and long-range dependency-capturing capability, can outperform them in MRI-based brain tumor classification.
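The fine-tuning stage can be sketched as follows; the snippet uses an ImageNet-pretrained ViT from torchvision as a stand-in for the domain-adaptation pretraining described above, and the BraTS data loader is assumed.

```python
# Minimal fine-tuning sketch for the two-class (tumor vs. non-tumor) ViT setup.
import torch
import torch.nn as nn
from torchvision import models

model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # replace classification head

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5, weight_decay=0.01)
criterion = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=100):
    """loader is assumed to yield (mri_slice_batch, label_batch); 100 epochs as in the study."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```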
Databases tied to mental and behavioral health surveys suffer from missing data when participants skip entire surveys, which affects data quality and sample size. These missing-data patterns were investigated and imputation performance was evaluated in Simons Foundation Powering Autism Research for Knowledge, a large-scale autism cohort consisting of over 117,000 participants. Four common methods were assessed: multiple imputation by chained equations (MICE), k-nearest neighbors (KNN), MissForest, and multiple imputation with denoising autoencoders (MIDAS). In a complete subset of 15,196 autism participants, three types of missingness patterns were simulated. We observed that MIDAS and KNN performed the best as the random missingness rate increased and when blockwise missingness was simulated. The average computational times were 10 min each for MIDAS and KNN, 35 min for MissForest, and 290 min for MICE. MIDAS and KNN both provide promising imputation performance in mental and behavioral health survey data that exhibit blockwise missingness patterns.
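The simulation-and-evaluation loop can be illustrated with scikit-learn's imputers. The sketch below masks values at random in a synthetic complete matrix and compares a MICE-style iterative imputer with k-NN imputation; MissForest and MIDAS require separate packages and are omitted, and the data are simulated, not the SPARK survey data.

```python
# Sketch of the simulation set-up: mask values at random in a complete subset,
# impute with a MICE-style iterative imputer and with k-NN, and compare errors.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, KNNImputer

rng = np.random.default_rng(0)
complete = rng.normal(size=(1000, 20))          # stand-in for the complete survey subset

mask = rng.random(complete.shape) < 0.3         # 30% random missingness
observed = complete.copy()
observed[mask] = np.nan

for name, imputer in [("MICE-style", IterativeImputer(max_iter=10, random_state=0)),
                      ("KNN", KNNImputer(n_neighbors=5))]:
    imputed = imputer.fit_transform(observed)
    rmse = np.sqrt(np.mean((imputed[mask] - complete[mask]) ** 2))
    print(f"{name} RMSE on masked entries: {rmse:.3f}")
```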
Nasopharyngeal carcinoma (NPC), particularly prevalent in regions such as Malaysia, is a significant health concern often linked to Epstein-Barr virus (EBV) infection. The EBV nuclear antigen 1 (EBNA1), crucial for EBV survival and NPC tumorigenicity, has emerged as a potential therapeutic target for EBV-positive NPC. In this study, we utilized quantitative structure-activity relationship (QSAR) models to predict potential inhibitors of EBNA1. These models were developed based on the molecular fingerprints of known EBNA1 inhibitors, using both classification and regression approaches. Our QSAR classification models demonstrated consistently high precision, recall, F1, and accuracy scores across the training set. The top-performing models, constructed using logistic regression algorithms, achieved perfect precision scores of 1.000 in the test set evaluation. These models’ recall, F1 score, and accuracy were 0.571, 0.727, and 0.667, respectively. On the other hand, the best-performing model among the regression models was built using the sequential minimal optimization regression algorithm, achieving a correlation coefficient of 0.703. The mean absolute error and root mean square error of this QSAR regression model were 0.173 and 0.217, respectively, whereas the relative absolute error was 0.689. We screened the Enamine Advanced compound library using this regression model to predict compounds with potential EBNA1 inhibitory effects. This led to the identification of the top 10 compounds with the most promising predicted EBNA1 inhibitory properties.
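The classification branch of such a QSAR workflow can be sketched with RDKit and scikit-learn. The SMILES strings and activity labels below are placeholders, not actual EBNA1 inhibitors, and the use of Morgan fingerprints is an assumption about the descriptor type.

```python
# Sketch of a QSAR classification workflow: Morgan fingerprints as molecular
# descriptors feeding a logistic-regression active/inactive classifier.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.linear_model import LogisticRegression

def fingerprint(smiles, radius=2, n_bits=2048):
    """Encode a molecule as a Morgan (circular) fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits))

smiles = ["CC(=O)Oc1ccccc1C(=O)O", "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
          "CCO", "c1ccccc1O", "CC(C)Cc1ccc(cc1)C(C)C(=O)O", "CCN(CC)CC"]
labels = np.array([1, 1, 0, 0, 1, 0])   # placeholder activity labels

X = np.array([fingerprint(s) for s in smiles])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("Predicted activity:", clf.predict(X))  # in practice, evaluate on a held-out test set
```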
This study examines the impact of workforce diversity, particularly the presence of Black/African American staff, on client retention in opioid use disorder (OUD) treatment, recognizing the historically low retention rates among Black and Hispanic populations in such programs. Using a novel machine learning technique called “causal forest,” we explored the heterogeneous treatment effects of staff diversity on client retention, aiming to identify strategies that enhance client retention and improve treatment outcomes. Analyzing data from four waves of the National Drug Abuse Treatment System Survey spanning the years 2000, 2005, 2014, and 2017 (n = 627), we focused on the relationship between workforce diversity and retention. The findings revealed diversity-related variations in retention across 61 out of 627 OUD treatment programs (<10%), with potential beneficial effects attenuated by other program characteristics. These programs were more likely to be private for-profit and to have lower percentages of Black and Latino clients, lower staff-to-client ratios, higher proportions of staff with graduate degrees, and lower percentages of unemployed clients. Our results suggest that workforce diversity alone is insufficient for improving retention. Programs with characteristics linked to greater retention are better positioned to leverage a diverse workforce to enhance retention, offering important implications for policy and program design to better support Black clients with OUDs.
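One way to estimate such heterogeneous effects is with a causal forest, for example via the econml package (an assumption; the study does not specify its software). The sketch below uses simulated program-level data and a simplified binary treatment, and is not the authors' analysis.

```python
# Sketch of a causal-forest analysis of heterogeneous effects: estimate how the
# effect of workforce diversity (binary treatment T) on retention (Y) varies with
# program characteristics X. Data below are simulated for illustration.
import numpy as np
from econml.dml import CausalForestDML

rng = np.random.default_rng(0)
n = 627                                  # number of treatment programs in the survey
X = rng.normal(size=(n, 5))              # program characteristics (ownership, staffing, etc.)
T = rng.integers(0, 2, size=n)           # 1 = diverse workforce present (simplified binary)
Y = 0.5 * T * (X[:, 0] > 0) + rng.normal(scale=0.5, size=n)  # retention outcome

forest = CausalForestDML(discrete_treatment=True, n_estimators=500, random_state=0)
forest.fit(Y, T, X=X)
cate = forest.effect(X)                  # program-level conditional treatment effects
print("Programs with estimated positive effect:", int((cate > 0).sum()), "of", n)
```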
This study examines the acceptance of artificial intelligence (AI)-based diagnostic alternatives compared to traditional biological testing through a randomized scenario experiment in the domain of neurodegenerative diseases (NDs). A total of 3225 pairwise choices of ND risk-prediction tools were offered to participants, with 1482 choices comparing AI with the biological saliva test and 1743 comparing AI+ with the saliva test (with AI+ using digital consumer data in addition to electronic medical data). Overall, only 36.68% of responses showed a preference for AI/AI+ alternatives. Stratified by AI sensitivity levels, acceptance rates for AI/AI+ were 35.04% at 60% sensitivity and 31.63% at 70% sensitivity, and increased markedly to 48.68% at 95% sensitivity (P < 0.01). Similarly, acceptance rates by specificity were 29.68%, 28.18%, and 44.24% at 60%, 70%, and 95% specificity, respectively (P < 0.01). Notably, AI consistently garnered higher acceptance rates (45.82%) than AI+ (28.92%) at comparable sensitivity and specificity levels, except at 60% sensitivity, where no significant difference was observed. These results highlight nuanced preferences for AI diagnostics, with higher sensitivity and specificity significantly driving acceptance.
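The stratified comparisons reported above can be illustrated with a chi-square test on choice counts. The counts below are only approximate reconstructions from the reported percentages and are illustrative, not the study's raw data.

```python
# Sketch of the kind of comparison reported above: test whether AI acceptance
# differs across sensitivity strata using a chi-square test on choice counts.
import numpy as np
from scipy.stats import chi2_contingency

# rows: sensitivity stratum (60%, 70%, 95%); columns: [chose AI/AI+, chose saliva test]
counts = np.array([
    [350, 649],   # ~35.0% acceptance at 60% sensitivity (illustrative stratum size)
    [316, 683],   # ~31.6% at 70% sensitivity
    [487, 513],   # ~48.7% at 95% sensitivity
])
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```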