2025-06-10, Volume 8, Issue 2

  • research-article
    Jian Zhou, Minghao Kou, Rui Tang, Xuan Wang, Hao Ma, Xiang Li, Yoriko Heianza, Lu Qi

    Objective: To investigate whether the excess premature mortality risk related to hypertension could be reduced or eliminated through joint risk factor control.

    Methods: A total of 70 898 hypertensive participants and 224 069 matched non-hypertensive participants without cancer or cardiovascular disease (CVD) at baseline were included and followed from 2006 to 2022. The degree of joint risk factor control was evaluated based on the major cardiovascular risk factors, including blood pressure, body mass index, waist circumference, low-density lipoprotein cholesterol, glycated haemoglobin, albuminuria, smoking, and physical activity. Cox proportional hazards models were used to investigate the relationship between degree of risk factor control and premature mortality.

    Results: Each additional controlled risk factor was associated with a 15%, 12%, 24%, and 11% lower risk of premature all-cause, cancer, CVD, and other-cause mortality, respectively. Optimal risk factor control (≥6 controlled risk factors) was associated with a 55% [hazard ratio (HR): 0.45, 95% confidence interval (CI): 0.40-0.51], 50% (HR: 0.50, 95% CI: 0.41-0.60), 67% (HR: 0.33, 95% CI: 0.26-0.42), and 50% (HR: 0.50, 95% CI: 0.40-0.62) lower risk of the same outcomes, respectively. Hypertensive participants with at least 3, 2, 4, and 2 controlled risk factors, respectively, showed no excess risk of premature all-cause, cancer, CVD, or other-cause mortality compared with matched non-hypertensive participants.

    Conclusions: In this cohort study of UK Biobank participants, the degree of joint risk factor control showed a graded inverse association with the risk of premature mortality among hypertensive participants; optimal risk factor control may eliminate the hypertension-related excess risk of premature mortality.
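The percent risk reductions quoted in the Results follow directly from the hazard ratios as (1 − HR) × 100. A minimal, illustrative Python sketch (the HR values are those reported in the abstract; the function name is ours, not the authors'):

```python
# Illustrative only: convert a hazard ratio (HR < 1) into the percent
# lower risk quoted in the abstract, e.g. HR 0.45 -> 55% lower risk.
def hr_to_percent_reduction(hr: float) -> int:
    """Percent lower risk implied by a hazard ratio below 1."""
    return round((1.0 - hr) * 100.0)

# HRs reported for optimal control (>=6 controlled risk factors):
hrs = {"all-cause": 0.45, "cancer": 0.50, "CVD": 0.33, "other": 0.50}
reductions = {cause: hr_to_percent_reduction(hr) for cause, hr in hrs.items()}
print(reductions)  # {'all-cause': 55, 'cancer': 50, 'CVD': 67, 'other': 50}
```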

  • research-article
    Guiyi Ji, Wenxin Luo, Yuan Zhu, Bojiang Chen, Miye Wang, Lili Jiang, Ming Yang, Weiwei Song, Peiji Yao, Tao Zheng, He Yu, Rui Zhang, Chengdi Wang, Renxin Ding, Xuejun Zhuo, Feng Chen, Jinnan Li, Xiaolong Tang, Jinghong Xian, Tingting Song, Jun Tang, Min Feng, Jun Shao, Weimin Li

    Background: Current lung cancer screening guidelines recommend annual low-dose computed tomography (LDCT) for high-risk individuals. However, the effectiveness of LDCT in non-high-risk individuals remains inadequately explored. With the incidence of lung cancer steadily increasing among non-high-risk individuals, this study aims to assess the risk of lung cancer in non-high-risk individuals and evaluate the potential of thin-section LDCT reconstruction combined with artificial intelligence (LDCT-TRAI) as a screening tool.

    Methods: A real-world cohort study on lung cancer screening was conducted at the West China Hospital of Sichuan University from January 2010 to July 2021. Participants were screened using either LDCT-TRAI or traditional thick-section LDCT without AI (traditional LDCT). The AI system employed was the uAI-ChestCare software. Lung cancer diagnoses were confirmed through pathological examination.

    Results: Among the 259 121 enrolled non-high-risk participants, 87 260 (33.7%) had positive screening results. Within 1 year, 728 (0.3%) participants were diagnosed with lung cancer, of whom 87.1% (634/728) were never-smokers, and 92.7% (675/728) presented with stage I disease. Compared with traditional LDCT, LDCT-TRAI demonstrated a higher lung cancer detection rate (0.3% vs. 0.2%, P < 0.001), particularly for stage I cancers (94.4% vs. 83.2%, P < 0.001), and was associated with improved survival outcomes (5-year overall survival rate: 95.4% vs. 81.3%, P < 0.0001).

    Conclusion: These findings highlight the importance of expanding lung cancer screening to non-high-risk populations, especially never-smokers. LDCT-TRAI outperformed traditional LDCT in detecting early-stage cancers and improving survival outcomes, underscoring its potential as a more effective screening tool for early lung cancer detection in this population.

  • research-article
    Changshu Li, Shufan Liang, Xue Wang, Su Lui, Chengdi Wang

    Objectives: To investigate risk factors for drug-resistant tuberculosis (DR-TB) and the clinical characteristics associated with unfavorable anti-TB treatment outcomes.

    Methods: A total of 961 pulmonary tuberculosis (TB) patients were included at West China Hospital of Sichuan University from January 2008 to November 2023. We analyzed differences in clinical characteristics between DR-TB and drug-sensitive tuberculosis (DS-TB) patients, and then compared these features among DR-TB patients with different outcomes. Multivariable logistic regression models were employed to quantify risk factors associated with DR-TB and with adverse treatment outcomes.

    Results: Among 961 pulmonary TB patients, a history of anti-TB treatment [odds ratio (OR), 3.289; 95% confidence interval (CI), 2.359-4.604] and CT-scan cavities (OR, 1.512; 95% CI, 1.052-2.168) increased DR-TB risk. A total of 214 DR-TB patients were followed for a median of 24.5 months. Among them, 116/214 (54.2%) patients achieved favorable outcomes. Prior anti-TB treatment (OR, 1.927; 95% CI, 1.033-3.640), multidrug-resistant tuberculosis (MDR-TB) (OR, 2.558; 95% CI, 1.272-5.252), positive sputum bacteriology (OR, 2.116; 95% CI, 1.100-4.134), and pleural effusion (OR, 2.097; 95% CI, 1.093-4.082) were associated with unfavorable outcomes, while isoniazid-resistant TB patients showed better outcomes (OR, 0.401; 95% CI, 0.181-0.853). The clinical model for unfavorable outcome prediction of DR-TB achieved an area under the curve (AUC) of 0.754 (95% CI, 0.690-0.818).

    Conclusions: A history of anti-TB treatment not only increases the risk of developing DR-TB but may also lead to treatment failure during re-treatment in DR-TB patients. Drug resistance subtype, radiological characteristics, and the results of sputum smear or culture may affect the treatment outcome of DR-TB.

  • research-article
    Yunfang Yu, Yuxin Shen, Yujie Tan, Yisikandaer Yalikun, Tian Tian, Qingqing Tang, Qiyun Ou, Yue Zhu, Miaoyun Long

    Objective: This prospective, real-world observational cohort study evaluates and compares the efficacy and prognosis of ultrasound (US)- and gene-based microwave ablation (MWA) versus surgical treatment in patients with low-risk papillary thyroid carcinoma (PTC), emphasizing the influence of genetic mutations on the selection of low-risk patients.

    Background: MWA, a minimally invasive technique, is increasingly recognized in the management of PTC. While traditional criteria for ablation focus on tumor size, number, and location, the impact of genetic mutations on treatment efficacy remains underexplored.

    Methods: A total of 201 patients with low-risk PTC without metastasis were prospectively enrolled. All patients underwent US and next-generation sequencing to confirm low-risk status. Patients chose either ablation or surgery and were monitored until November 2024. Efficacy and complications were assessed using thyroid US and contrast-enhanced US.

    Results: The median follow-up of this study was 12 months. There was no significant difference in disease-free survival between the ablation group (3.0%) and the surgery group (1.0%) (P = 0.360). However, the surgery group exhibited a significantly higher complication rate, particularly for temporary hypoparathyroidism (P < 0.001). Ablation offered notable advantages, including shorter treatment duration, faster recovery, less intraoperative blood loss, and reduced costs (P < 0.001), while maintaining favorable safety and comparable efficacy.

    Conclusions: For patients with low-risk genetic mutations, ablation provides efficacy and disease-free survival comparable to surgery, with significant benefits in safety, recovery, and overall cost. Guided by US and next-generation sequencing, precise patient selection enhances the potential of ablation as a promising, minimally invasive alternative to surgery in the management of low-risk PTC.

  • research-article
    Zhiyang Feng, Elke Burgermeister, Anna Philips, Tao Zuo, Weijie Wen

    The gut virome, an essential component of the intestinal microbiome, constitutes ∼0.1% of the total microbial biomass but contains a far greater number of particles than bacteria, with phages making up 90%-95% of this virome. This review systematically examines the developmental patterns of the gut virome, focusing on factors influencing its composition, including diet, environment, host genetics, and immunity. Additionally, it explores the gut virome's associations with various diseases, its interactions with gut bacteria and the immune system, and its emerging clinical applications.

  • research-article
    Xin Chen, Li Tai Fang, Zhong Chen, Wanqiu Chen, Hongjin Wu, Bin Zhu, Malcolm Moos Jr., Andrew Farmer, Xiaowen Zhang, Wei Xiong, Shusheng Gong, Wendell Jones, Christopher E. Mason, Shixiu Wu, Chunlin Xiao, Charles Wang

    Background: Single-cell RNA-sequencing (scRNA-seq) has emerged as a powerful tool for cancer research, enabling in-depth characterization of tumor heterogeneity at the single-cell level. Recently, several scRNA-seq copy number variation (scCNV) inference methods have been developed, expanding the application of scRNA-seq to study genetic heterogeneity in cancer using transcriptomic data. However, the fidelity of these methods has not been investigated systematically.

    Methods: We benchmarked five commonly used scCNV inference methods: HoneyBADGER, CopyKAT, CaSpER, inferCNV, and sciCNV. We evaluated their performance across four different scRNA-seq platforms using data from our previous multicenter study. We further evaluated scCNV performance using scRNA-seq datasets derived from mixed samples of five human lung adenocarcinoma cell lines, and validated our findings with a clinical scRNA-seq dataset from sequenced tissues of a small cell lung cancer patient.

    Results: We found that the sensitivity and specificity of the five scCNV inference methods varied depending on the choice of reference data, sequencing depth, and read length. CopyKAT and CaSpER outperformed the other methods overall, while inferCNV, sciCNV, and CopyKAT performed best at subclone identification. Batch effects significantly degraded the performance of subclone identification on the mixed datasets for most of the methods we tested.

    Conclusion: Our benchmarking study revealed the strengths and weaknesses of each of these scCNV inference methods and provided guidance for selecting the optimal CNV inference method using scRNA-seq data.
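The sensitivity and specificity compared in the Results are the standard confusion-matrix definitions, sketched below for clarity. This is not the authors' pipeline, and the call counts are hypothetical:

```python
# Minimal sketch of the benchmark metrics: each CNV call is scored as a
# true/false positive or negative against a ground-truth set of regions.
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true CNV regions the method recovered."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of CNV-free regions the method correctly left uncalled."""
    return tn / (tn + fp)

# Hypothetical counts for one inference method on one dataset:
tp, fp, tn, fn = 80, 10, 90, 20
print(sensitivity(tp, fn), specificity(tn, fp))  # 0.8 0.9
```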