2026-11-15, Volume 20 Issue 11

  • REVIEW ARTICLE
    Hu DING , Pengxiang HUA , Zhen HUANG

    The development of artificial intelligence (AI) techniques has brought revolutionary changes across many fields. In particular, the use of AI-assisted methods to accelerate chemical research has become a popular and rapidly growing trend, leading to numerous groundbreaking works. In this paper, we provide a comprehensive review of current AI techniques in chemistry from a computational perspective, considering various aspects of method design. We begin by discussing the characteristics of data from diverse sources, followed by an overview of various representation methods. Next, we review existing models for several topical tasks in the field, and conclude by highlighting some key challenges that warrant further attention.
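
    As a generic illustration of the kind of representation methods such reviews cover (not code from this article), the sketch below encodes a molecule given as a SMILES string into a fixed-length Morgan fingerprint using RDKit; the example molecule, radius, and bit length are arbitrary assumptions.

    # Illustrative sketch only: SMILES string -> Morgan fingerprint feature vector.
    # Generic RDKit usage; the molecule, radius, and bit length are assumptions.
    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import AllChem

    smiles = "CCO"  # ethanol, used purely as an example input
    mol = Chem.MolFromSmiles(smiles)  # parse the SMILES string into an RDKit molecule
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    features = np.array(list(fp))  # 2048-dimensional binary feature vector
    print(features.shape, int(features.sum()))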

  • REVIEW ARTICLE
    Kaiyuan TIAN , Linbo QIAO , Baihui LIU , Gongqingjian JIANG , Shanshan LI , Dongsheng LI

    Scientific research faces high costs and inefficiencies with traditional methods, but the rise of deep learning and large language models (LLMs) offers innovative solutions. This survey reviews transformer-based LLM applications across scientific fields such as biology, medicine, chemistry, and meteorology, underscoring their role in advancing research. However, the continuous expansion of model size has led to significant memory demands, hindering further development and application of LLMs for science. This survey systematically reviews and categorizes memory-efficient pre-training techniques for large-scale transformers, including algorithm-level, system-level, and hardware-software co-optimization. Taking AlphaFold 2 as an example, we demonstrate how tailored memory optimization methods reduce storage needs while preserving prediction accuracy. By bridging model efficiency and scientific application needs, we hope to provide insights for scalable and cost-effective LLM training in AI for science.
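
    As one concrete instance of the algorithm-level memory savings discussed in such surveys (a minimal sketch in plain PyTorch, not the article's own code; model and batch sizes are arbitrary assumptions), activation checkpointing discards intermediate activations in the forward pass and recomputes them during backpropagation, trading extra compute for lower peak memory.

    # Minimal sketch: activation (gradient) checkpointing in PyTorch.
    # Layer sizes and batch shapes below are arbitrary assumptions.
    import torch
    from torch import nn
    from torch.utils.checkpoint import checkpoint

    class Block(nn.Module):
        def __init__(self, dim=512):
            super().__init__()
            self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

        def forward(self, x):
            return x + self.ff(x)

    blocks = nn.ModuleList(Block() for _ in range(8))
    x = torch.randn(16, 128, 512, requires_grad=True)

    h = x
    for blk in blocks:
        # Recompute this block's activations during backward instead of storing them.
        h = checkpoint(blk, h, use_reentrant=False)
    h.mean().backward()  # gradients flow as usual, with reduced peak activation memory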

  • REVIEW ARTICLE
    Kaiyuan LIAO , Xiwei XUAN , Kwan-Liu MA

    Time series forecasting plays a critical role in numerous real-world applications, such as finance, healthcare, transportation, and scientific computing. In recent years, deep learning has become a powerful tool for modeling complex temporal patterns and improving forecasting accuracy. This survey provides an overview of recent deep learning approaches for time series forecasting, covering a range of architectures including RNNs, CNNs, GNNs, transformers, large language models, MLP-based models, and diffusion models. We first identify key challenges in the field, such as temporal dependency, efficiency, and cross-variable dependency, which drive the development of forecasting techniques. Then, the general advantages and limitations of each architecture are discussed to contextualize their adaptation in time series forecasting. Furthermore, we highlight promising design trends such as multi-scale modeling, decomposition, and frequency-domain techniques, which are shaping the future of the field. This paper serves as a compact reference for researchers and practitioners seeking to understand the current landscape and future trajectory of deep learning in time series forecasting.
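
    To make the last two design trends concrete, the short sketch below (generic NumPy code, not from the survey; the synthetic series and window length are assumptions) separates a series into trend and residual parts with a moving average and then recovers the seasonal period from the FFT of the detrended signal.

    # Minimal sketch: decomposition + frequency-domain analysis on a synthetic series.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(400)
    series = 0.05 * t + np.sin(2 * np.pi * t / 24) + 0.3 * rng.standard_normal(400)

    # Decomposition: a centred moving average estimates the trend;
    # the residual carries the seasonal and noise components.
    window = 24
    trend = np.convolve(series, np.ones(window) / window, mode="same")
    detrended = series - trend

    # Frequency domain: the dominant FFT bin recovers the seasonal period.
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0)
    dominant = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC component
    print(f"estimated period ~ {1.0 / dominant:.1f} steps")  # close to the true period of 24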

  • LETTER
    Yu ZHANG , Jing CHEN , Jiwei JIN , Feifei MA
  • REVIEW ARTICLE
    Libo QIN , Qiguang CHEN , Xiachong FENG , Yang WU , Yongheng ZHANG , Yinghui LI , Min LI , Wanxiang CHE , Philip S. YU

    While large language models (LLMs) like ChatGPT have shown impressive capabilities in Natural Language Processing (NLP) tasks, a systematic investigation of their potential in this field is still largely lacking. This study aims to address this gap by exploring the following questions. (1) How are LLMs currently applied to NLP tasks in the literature? (2) Have traditional NLP tasks already been solved with LLMs? (3) What is the future of LLMs for NLP? To answer these questions, we take the first step to provide a comprehensive overview of LLMs in NLP. Specifically, we first introduce a unified taxonomy including (1) the parameter-frozen paradigm and (2) the parameter-tuning paradigm to offer a unified perspective for understanding the current progress of LLMs in NLP. Furthermore, we summarize the new frontiers and the corresponding challenges, aiming to inspire further groundbreaking advancements. We hope this work offers valuable insights into the potential and limitations of LLMs, while also serving as a practical guide for building effective LLMs in NLP.
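
    The distinction between the two paradigms can be illustrated with a toy PyTorch example (a small stand-in encoder, not the survey's code; architecture and learning rates are assumptions): the parameter-frozen paradigm keeps the backbone fixed and trains only a lightweight task head, while the parameter-tuning paradigm updates the backbone weights as well.

    # Toy sketch: parameter-frozen vs. parameter-tuning paradigms.
    # The small transformer encoder stands in for an LLM; all sizes are assumptions.
    import torch
    from torch import nn

    layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    backbone = nn.TransformerEncoder(layer, num_layers=2)
    head = nn.Linear(64, 2)  # small task-specific classifier

    # Parameter-frozen paradigm: the backbone stays fixed, only the head is trained.
    for p in backbone.parameters():
        p.requires_grad = False
    frozen_opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

    # Parameter-tuning paradigm: backbone weights are updated as well.
    for p in backbone.parameters():
        p.requires_grad = True
    tuning_opt = torch.optim.AdamW(list(backbone.parameters()) + list(head.parameters()), lr=1e-5)

    x = torch.randn(8, 16, 64)              # dummy batch of token embeddings
    logits = head(backbone(x).mean(dim=1))  # pooled representation -> class logits
    print(logits.shape)                     # torch.Size([8, 2])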

Monthly

ISSN 2095-2228 (Print)
ISSN 2095-2236 (Online)
CN 10-1014/TP