15 November 2026, Volume 20, Issue 11

  • REVIEW ARTICLE
    Hu DING, Pengxiang HUA, Zhen HUANG

    The development of artificial intelligence (AI) techniques has brought revolutionary changes across many fields. In particular, the use of AI-assisted methods to accelerate chemical research has become a popular and rapidly growing trend, leading to numerous groundbreaking works. In this paper, we provide a comprehensive review of current AI techniques in chemistry from a computational perspective, considering the various aspects of method design. We begin by discussing the characteristics of data from diverse sources, followed by an overview of representation methods. Next, we review existing models for several topical tasks in the field, and conclude by highlighting key challenges that warrant further attention.
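
    The representation methods this survey covers can be grounded with a common example: converting a SMILES string into a molecular graph (node and edge lists), a typical input format for chemistry ML models. Below is a minimal sketch using the RDKit library; the ethanol molecule is an illustrative choice, not an example from the article:

    ```python
    # Minimal sketch: turning a SMILES string into a graph representation
    # (atom list + bond index pairs), a common input for molecular ML models.
    # Requires RDKit. The molecule chosen here is purely illustrative.
    from rdkit import Chem

    def smiles_to_graph(smiles: str):
        """Return (atom symbols, bond index pairs) for a SMILES string."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            raise ValueError(f"Could not parse SMILES: {smiles}")
        atoms = [atom.GetSymbol() for atom in mol.GetAtoms()]
        edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx()) for b in mol.GetBonds()]
        return atoms, edges

    atoms, edges = smiles_to_graph("CCO")  # ethanol
    print(atoms)   # ['C', 'C', 'O']
    print(edges)   # [(0, 1), (1, 2)]
    ```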

  • REVIEW ARTICLE
    Kaiyuan TIAN, Linbo QIAO, Baihui LIU, Gongqingjian JIANG, Shanshan LI, Dongsheng LI

    Scientific research faces high costs and inefficiencies with traditional methods, but the rise of deep learning and large language models (LLMs) offers innovative solutions. This survey reviews transformer-based LLM applications across scientific fields such as biology, medicine, chemistry, and meteorology, underscoring their role in advancing research. However, the continuous expansion of model size has led to significant memory demands, hindering further development and application of LLMs for science. This survey systematically reviews and categorizes memory-efficient pre-training techniques for large-scale transformers, including algorithm-level, system-level, and hardware-software co-optimization. Taking AlphaFold 2 as an example, we demonstrate how tailored memory optimization methods reduce storage needs while preserving prediction accuracy. By bridging model efficiency and scientific application needs, we hope to provide insights for scalable and cost-effective LLM training in AI for science.
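
    One representative algorithm-level technique in this space is activation (gradient) checkpointing, which discards intermediate activations during the forward pass and recomputes them during backpropagation, trading compute for peak memory. A minimal PyTorch sketch; the toy model and tensor sizes are illustrative assumptions, not taken from the survey:

    ```python
    # Minimal sketch of activation checkpointing: activations inside each
    # checkpointed block are not stored; they are recomputed in the backward
    # pass, reducing peak memory at the cost of extra compute.
    import torch
    import torch.nn as nn
    from torch.utils.checkpoint import checkpoint

    blocks = nn.ModuleList([nn.Sequential(nn.Linear(1024, 1024), nn.GELU())
                            for _ in range(8)])

    def forward(x):
        for block in blocks:
            # use_reentrant=False is the recommended mode in recent PyTorch
            x = checkpoint(block, x, use_reentrant=False)
        return x

    x = torch.randn(32, 1024, requires_grad=True)
    loss = forward(x).sum()
    loss.backward()  # activations are recomputed block by block here
    ```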

  • REVIEW ARTICLE
    Dong HAN, Cheng-Ye SU, Fan-Yi ZENG, Fang-Lue ZHANG, Miao WANG

    With the development of deep neural networks and differentiable rendering techniques, neural rendering methods, represented by Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), have made significant progress. NeRF represents a 3D scene by encoding its appearance and geometry through neural networks conditioned on both position and viewpoint. In contrast, 3DGS models the scene with a set of Gaussian ellipsoids, allowing for efficient rendering by rasterizing these ellipsoids into images. However, both methods are limited to representing static scenes. The rendering and reconstruction of dynamic scenes are critical in virtual reality and computer graphics, so extending neural rendering methods from static to dynamic scenes has become an important area of research. This survey organizes dynamic scene rendering methods based on NeRF and 3DGS and categorizes them according to different motion representations. Furthermore, it highlights relevant applications of dynamic scene rendering, such as autonomous driving, digital humans, and 4D generation. Finally, we summarize the development of dynamic scene rendering and discuss the remaining limitations and open challenges.
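
    The NeRF formulation described above can be made concrete: a small MLP maps a 3D position and a viewing direction to a volume density and an RGB color. A minimal PyTorch sketch; the layer sizes are arbitrary, and positional encoding and volume rendering are omitted for brevity:

    ```python
    # Minimal NeRF-style network sketch: density depends on position only,
    # color on both position features and viewing direction. Positional
    # encoding and hierarchical sampling are omitted for brevity.
    import torch
    import torch.nn as nn

    class TinyNeRF(nn.Module):
        def __init__(self, hidden: int = 128):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden), nn.ReLU())
            self.sigma = nn.Linear(hidden, 1)  # volume density head
            self.rgb = nn.Sequential(nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
                                     nn.Linear(hidden // 2, 3), nn.Sigmoid())

        def forward(self, xyz, view_dir):
            h = self.trunk(xyz)
            sigma = torch.relu(self.sigma(h))  # keep density non-negative
            color = self.rgb(torch.cat([h, view_dir], dim=-1))
            return sigma, color

    model = TinyNeRF()
    sigma, color = model(torch.randn(4, 3), torch.randn(4, 3))
    ```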

  • REVIEW ARTICLE
    Kaiyuan LIAO, Xiwei XUAN, Kwan-Liu MA

    Time series forecasting plays a critical role in numerous real-world applications, such as finance, healthcare, transportation, and scientific computing. In recent years, deep learning has become a powerful tool for modeling complex temporal patterns and improving forecasting accuracy. This survey provides an overview of recent deep learning approaches to time series forecasting, covering architectures including RNNs, CNNs, GNNs, transformers, large language models, MLP-based models, and diffusion models. We first identify key challenges in the field, such as temporal dependency, efficiency, and cross-variable dependency, which drive the development of forecasting techniques. Then, the general advantages and limitations of each architecture are discussed to contextualize their adaptation to time series forecasting. Furthermore, we highlight promising design trends, such as multi-scale modeling, decomposition, and frequency-domain techniques, which are shaping the future of the field. This paper serves as a compact reference for researchers and practitioners seeking to understand the current landscape and future trajectory of deep learning in time series forecasting.
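
    Decomposition, one of the design trends highlighted above, is straightforward to illustrate: split a series into a slow-moving trend and a seasonal/irregular remainder, then model the parts separately. A minimal NumPy sketch; the window length and synthetic series are illustrative assumptions:

    ```python
    # Minimal sketch of trend/remainder decomposition for forecasting:
    # a moving average extracts the trend, and the remainder carries the
    # seasonal and irregular signal, as in decomposition-based forecasters.
    import numpy as np

    def decompose(series: np.ndarray, window: int = 24):
        """Split a 1-D series into (trend, remainder) via a moving average."""
        kernel = np.ones(window) / window
        # mode="same" preserves length; edges are approximate
        trend = np.convolve(series, kernel, mode="same")
        return trend, series - trend

    t = np.arange(500)
    series = 0.01 * t + np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(500)
    trend, remainder = decompose(series)
    ```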

  • LETTER
    Yu ZHANG, Jing CHEN, Jiwei JIN, Feifei MA
  • REVIEW ARTICLE
    Xinyi GAO, Dongting XIE, Yihang ZHANG, Zhengren WANG, Chong CHEN, Conghui HE, Hongzhi YIN, Wentao ZHANG

    With the expansion of data availability, machine learning (ML) has achieved remarkable breakthroughs in both academia and industry. However, imbalanced data distributions are prevalent in various types of raw data and severely hinder ML performance by biasing decision-making processes. To deepen the understanding of imbalanced data and facilitate related research and applications, this survey systematically analyzes various real-world data formats and organizes existing research on each format into four distinct categories: data re-balancing, feature representation, training strategy, and ensemble learning. This structured analysis helps researchers comprehensively understand the pervasive nature of imbalance across diverse data formats, thereby paving a clearer path toward specific research goals. We provide an overview of relevant open-source libraries, spotlight current challenges, and offer novel insights aimed at fostering future advancements in this critical area of study.
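
    Among the four categories, training-strategy adjustments are perhaps the simplest to illustrate: weight each class inversely to its frequency so that minority-class errors contribute more to the loss. A minimal PyTorch sketch; the 95:5 split and inverse-frequency weighting are illustrative choices, not prescriptions from the survey:

    ```python
    # Minimal sketch of a training-strategy fix for class imbalance:
    # weight each class inversely to its frequency so minority-class
    # errors contribute more to the loss.
    import torch
    import torch.nn as nn

    labels = torch.tensor([0] * 95 + [1] * 5)        # 95:5 imbalance
    counts = torch.bincount(labels).float()
    weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights

    criterion = nn.CrossEntropyLoss(weight=weights)
    logits = torch.randn(100, 2, requires_grad=True) # stand-in model outputs
    loss = criterion(logits, labels)
    loss.backward()
    ```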

  • LETTER
    Xueping WANG, Xueni GUO, Feihu YAN, Bing LI, Guangzhe ZHAO
  • REVIEW ARTICLE
    Mengyi LIU, Xieyang WANG, Jianqiu XU, Weijia YI, Ouri WOLFSON

    With the growing prevalence of data-driven decision-making, Text-to-SQL has emerged as a promising solution to lower the barrier to data access by translating natural language queries into executable SQL statements, thereby enhancing user interaction with databases. Despite notable progress driven by deep learning and large language models, significant challenges persist in handling complex queries. This paper presents a comprehensive review of the Text-to-SQL task, structured around two core stages: natural language understanding and natural language translation. Methods are categorized along the technical evolution trajectory into four types: rule-based, machine learning-based, pre-trained language model-based, and large language model-based approaches. Unlike previous surveys, which focus on specific techniques or partial aspects of Text-to-SQL, our work offers a two-stage analytical framework, highlights the impact of large models, and provides a comparative analysis of limitations and trade-offs. Through a detailed examination of accuracy, generalization, expressiveness, and computational cost, this survey presents insights into the advantages and disadvantages of each paradigm. Furthermore, the paper summarizes key benchmark datasets and evaluation metrics, and discusses directions for improving the robustness, security, and effectiveness of existing Text-to-SQL systems.
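
    In the large language model-based paradigm discussed above, the core step is typically schema-aware prompt construction. A minimal sketch; the schema, question, and prompt wording are illustrative assumptions and do not reflect any specific system from the survey:

    ```python
    # Minimal sketch of LLM-based Text-to-SQL prompting: serialize the
    # schema, attach the question, and ask the model for SQL only.
    # The schema and question are made up for illustration.
    SCHEMA = {
        "employees": ["id", "name", "department_id", "salary"],
        "departments": ["id", "name"],
    }

    def build_prompt(question: str) -> str:
        tables = "\n".join(f"CREATE TABLE {t} ({', '.join(cols)});"
                           for t, cols in SCHEMA.items())
        return (f"Given the database schema:\n{tables}\n\n"
                f"Translate the question into a single SQL query.\n"
                f"Question: {question}\nSQL:")

    print(build_prompt("What is the average salary per department?"))
    # The resulting string is sent to an LLM; the response should be
    # validated (e.g., parsed or executed) before being trusted.
    ```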

  • REVIEW ARTICLE
    Hongxing FAN, Haohua CHEN, Zehuan HUANG, Ziwei LIU, Lu SHENG

    Generating high-quality 3D assets is a fundamental challenge in computer vision and graphics. While the field has progressed significantly from early VAE/GAN approaches through diffusion models and large reconstruction models, persistent limitations hinder widespread application. Specifically, achieving high geometric and appearance fidelity, intuitive user control, versatile multi-modal conditioning, and directly usable outputs (e.g., structured meshes) remains challenging for established paradigms. This paper surveys the evolution of deep generative models for 3D content creation, with a primary focus on two emerging paradigms poised to address these shortcomings: autoregressive (AR) generation and agent-driven approaches. AR models generate assets sequentially (e.g., token-by-token or part-by-part), offering inherent potential for finer control, structured outputs, and the integration of user guidance during the step-by-step process. Agent-driven methods, conversely, leverage the reasoning and linguistic capabilities of Large Language Models (LLMs), enabling intuitive and flexible 3D creation by decomposing complex tasks and utilizing external tools through multi-agent systems. We provide a comprehensive overview of these novel techniques, discuss their potential advantages over current methods, and outline key challenges and future directions towards more capable and intelligent 3D generation systems.
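
    The token-by-token generation described above follows the standard autoregressive recipe: repeatedly sample the next shape token conditioned on all tokens generated so far. A minimal sketch with a random stand-in for the trained model; the vocabulary size, sequence length, temperature, and begin-of-shape token are all illustrative assumptions:

    ```python
    # Minimal sketch of autoregressive 3D-token generation: each step
    # conditions on the tokens generated so far. The "model" here is a
    # stand-in; real systems use a transformer over quantized shape tokens.
    import torch

    VOCAB, MAX_LEN = 1024, 64

    def next_token_logits(tokens: torch.Tensor) -> torch.Tensor:
        return torch.randn(VOCAB)  # placeholder for a trained transformer

    tokens = torch.tensor([0])  # assumed begin-of-shape token
    for _ in range(MAX_LEN - 1):
        logits = next_token_logits(tokens)
        probs = torch.softmax(logits / 0.8, dim=-1)  # temperature sampling
        nxt = torch.multinomial(probs, 1)
        tokens = torch.cat([tokens, nxt])
    # 'tokens' would then be decoded (e.g., by a VQ decoder) into a mesh.
    ```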

  • LETTER
    Changchun WU, Xueqin XIE, Ziru HUANG, Hao LIN, Jian HUANG
  • LETTER
    Naming LIU, Mingzhi WANG, Xihuai WANG, Weinan ZHANG, Yaodong YANG, Youzhi ZHANG, Bo AN, Ying WEN
  • REVIEW ARTICLE
    Libo QIN, Qiguang CHEN, Xiachong FENG, Yang WU, Yongheng ZHANG, Yinghui LI, Min LI, Wanxiang CHE, Philip S. YU

    While large language models (LLMs) like ChatGPT have shown impressive capabilities in Natural Language Processing (NLP) tasks, their potential in this field has yet to be systematically investigated. This study aims to address this gap by exploring the following questions. (1) How are LLMs currently applied to NLP tasks in the literature? (2) Have traditional NLP tasks already been solved with LLMs? (3) What is the future of LLMs for NLP? To answer these questions, we take the first step toward providing a comprehensive overview of LLMs in NLP. Specifically, we first introduce a unified taxonomy covering (1) the parameter-frozen paradigm and (2) the parameter-tuning paradigm, offering a unified perspective for understanding the current progress of LLMs in NLP. Furthermore, we summarize the new frontiers and the corresponding challenges, aiming to inspire further groundbreaking advancements. We hope this work offers valuable insights into the potential and limitations of LLMs, while also serving as a practical guide for building effective LLMs in NLP.
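
    The two paradigms in the taxonomy can be contrasted directly: the parameter-frozen paradigm adapts a model through the prompt alone (e.g., in-context learning), while the parameter-tuning paradigm updates the weights. A minimal sketch using the Hugging Face transformers library; GPT-2 and the sentiment task are illustrative stand-ins, not choices from the article:

    ```python
    # Minimal sketch contrasting the two paradigms in the taxonomy.
    # The model and the sentiment task are illustrative assumptions.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # (1) Parameter-frozen paradigm: in-context learning, no weight updates.
    prompt = ("Review: great movie -> positive\n"
              "Review: terrible plot -> negative\n"
              "Review: loved every minute -> ")
    out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=2)
    print(tok.decode(out[0][-2:]))

    # (2) Parameter-tuning paradigm: gradients flow into the weights.
    model.train()
    batch = tok("Review: loved every minute -> positive", return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()  # followed by an optimizer step in real fine-tuning
    ```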

Publishing model: Monthly
Journal Impact Factor (JCR): 4.6 (2024)
Submission to first decision: 40 days
Editorial contact: zhangdf@hep.com.cn

ISSN 2095-2228 (Print)
ISSN 2095-2236 (Online)
CN 10-1014/TP