Dec 2025, Volume 19 Issue 12
    

    Excellent Young Computer Scientists Forum
  • REVIEW ARTICLE
    Zhi-Min WANG, Mao-Hang RAO, Shang-Hua YE, Wei-Tao SONG, Feng LU

    With the widespread adoption of Extended Reality (XR) headsets, spatial computing technologies are gaining increasing attention. Spatial computing enables interaction with virtual elements through natural input methods such as eye tracking, hand gestures, and voice commands, thus placing natural human-computer interaction at its core. While previous surveys have reviewed conventional XR interaction techniques, recent advancements in natural interaction, particularly driven by artificial intelligence (AI) and large language models (LLMs), have introduced new paradigms and technologies. In this paper, we review research on multimodal natural interaction for wearable XR, focusing on papers published since 2022 in six top venues: ACM CHI, UIST, IMWUT (Ubicomp), IEEE VR, ISMAR, and TVCG. We classify and analyze these studies based on application scenarios, operation types, and interaction modalities. This analysis provides a structured framework for understanding how researchers are designing advanced natural interaction techniques in XR. Based on these findings, we discuss the challenges in natural interaction techniques and suggest potential directions for future research. This review provides valuable insights for researchers aiming to design natural and efficient interaction systems for XR, ultimately contributing to the advancement of spatial computing.
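
    The three-axis classification used in this review (application scenario, operation type, and interaction modality) can be pictured as a small data structure. The sketch below is a hypothetical Python illustration; the enum members are placeholder categories, not the paper's exact taxonomy.

```python
# Hypothetical sketch of the three-axis classification: each interaction technique
# is indexed by application scenario, operation type, and the set of modalities it combines.
from dataclasses import dataclass
from enum import Enum, auto

class Scenario(Enum):
    TEXT_ENTRY = auto(); OBJECT_MANIPULATION = auto(); MENU_CONTROL = auto()

class Operation(Enum):
    SELECT = auto(); TRANSLATE = auto(); CONFIRM = auto()

class Modality(Enum):
    EYE_GAZE = auto(); HAND_GESTURE = auto(); VOICE = auto()

@dataclass(frozen=True)
class InteractionTechnique:
    name: str
    scenario: Scenario
    operation: Operation
    modalities: frozenset[Modality]

# Example entry: gaze points at a target, a pinch gesture confirms the selection.
gaze_pinch = InteractionTechnique(
    name="gaze + pinch selection",
    scenario=Scenario.OBJECT_MANIPULATION,
    operation=Operation.SELECT,
    modalities=frozenset({Modality.EYE_GAZE, Modality.HAND_GESTURE}),
)

if __name__ == "__main__":
    print(gaze_pinch)
```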

  • Artificial Intelligence
  • RESEARCH ARTICLE
    Zhiwei SUN, Jun BAI, Zhuofan CHEN, Chen LI, Wenge RONG, Zhang XIONG

    Text classification is a pivotal task in natural language understanding, and its performance has seen remarkable advancements with the rise of Pre-trained Language Models (PLMs). Recently, the proliferation of PLMs has made it increasingly challenging to choose the most suitable model for a given dataset. Since fine-tuning such a large number of models is impractical, Transferability Estimation (TE) has become a promising solution for efficient model selection. Unlike current TE methods that rely solely on fixed, hard class assignments to evaluate the quality of model-encoded features, our approach also accounts for the inter-sample and inter-model variations captured by soft class assignments. We achieve this by using class embeddings to predict posterior class assignments, with the logarithm of the maximum posterior evidence serving as the transferability score. Moreover, we find that an informative sub-space of the dataset leads to more accurate computation of soft class assignments, and we annotate informative samples efficiently by eliciting the judging ability of large language models. The resulting posterior evidence over the informative sub-space, LogIPE, captures subtle differences between models and improves the accuracy of model selection, as validated by extensive experiments on a wide range of text classification datasets and candidate PLMs.
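
    To make the scoring idea concrete, the sketch below is a rough, hypothetical approximation of an evidence-style transferability score, not the actual LogIPE implementation: class embeddings are taken as per-class feature means, soft class assignments come from a softmax over cosine similarities, and the score averages the log of each sample's maximum posterior.

```python
# Illustrative evidence-style transferability score over PLM-encoded features.
# The choice of class means as class embeddings and the softmax temperature are
# assumptions for this sketch, not the paper's method.
import numpy as np

def transferability_score(features: np.ndarray, labels: np.ndarray, temperature: float = 0.1) -> float:
    """features: (n_samples, dim) model-encoded features; labels: (n_samples,) integer class ids."""
    # L2-normalise features so cosine similarity becomes a dot product.
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    classes = np.unique(labels)
    # Class embeddings: mean feature vector of each class (hypothetical choice).
    class_emb = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    class_emb = class_emb / (np.linalg.norm(class_emb, axis=1, keepdims=True) + 1e-12)
    # Soft class assignments: softmax over similarities to every class embedding.
    logits = feats @ class_emb.T / temperature
    logits -= logits.max(axis=1, keepdims=True)
    posteriors = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # Higher average log maximum posterior -> the features separate the classes better.
    return float(np.log(posteriors.max(axis=1) + 1e-12).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 4, size=200)
    means = rng.normal(size=(4, 32)) * 3.0
    x = means[y] + rng.normal(size=(200, 32))          # well-separated synthetic features
    print("informative labels:", transferability_score(x, y))
    print("shuffled labels   :", transferability_score(x, rng.permutation(y)))
```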

  • LETTER
    Yan ZHUANG, Huiwen WANG, Shuai MA, Yang LIU
  • LETTER
    Bin-Bin JIA, Jun-Ying LIU, Min-Ling ZHANG
  • LETTER
    Cong WANG, Zhilong MI, Ziqiao YIN, Binghui GUO
  • LETTER
    Qiang WANG, Kele XU, Dawei FENG, Bo DING, Huaimin WANG
  • LETTER
    Zhiqing CUI, Fan MENG, Jingjia LUO
  • Theoretical Computer Science
  • RESEARCH ARTICLE
    Ke XU, Guangyan ZHOU

    In this paper, we identify the distinction between non-brute-force computation and brute-force computation as the most fundamental problem in computer science. We then prove, by the diagonalization method, that constructed self-referential CSPs cannot be solved by non-brute-force computation, a statement stronger than P ≠ NP. This constructive method for proving impossibility results differs markedly from (and is missing in) existing approaches in computational complexity theory, but aligns with Gödel's technique for proving logical impossibility. Just as Gödel showed that proving formal unprovability is feasible in mathematics, our results show that proving computational hardness is not hard in mathematics. Specifically, proving lower bounds for many problems, such as 3-SAT, can be challenging because these problems admit various effective strategies for avoiding exhaustive search. For self-referential instances that are extremely hard, however, exhaustive search becomes unavoidable, and its necessity is therefore easier to prove. Consequently, separating non-brute-force computation from brute-force computation is much simpler than separating P from NP. Finally, our results are akin to Gödel's incompleteness theorem in that they reveal the limits of reasoning and highlight the intrinsic distinction between syntax and semantics.
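
    For readers unfamiliar with the technique, the schematic below recalls the classic diagonalization template the abstract appeals to; it is a generic illustration of self-reference, not the paper's actual CSP construction or its exhaustive-search lower bound.

```latex
% Generic diagonalization schema (illustrative only; not the paper's construction).
% Enumerate the candidate non-brute-force solvers and defeat each one by self-reference.
Let $A_1, A_2, \dots$ enumerate the candidate non-brute-force algorithms.
For each $k$, construct a self-referential instance $I_k$ that encodes the
behaviour of $A_k$ on $I_k$ itself, so that
\[
  I_k \text{ is satisfiable} \iff A_k(I_k) = \text{``unsatisfiable''}.
\]
Hence no $A_k$ decides every $I_k$ correctly: any algorithm that does must lie
outside the enumeration, mirroring Cantor's and G\"odel's diagonal arguments.
```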

  • Information Systems
  • REVIEW ARTICLE
    Shuyue WEI, Yongxin TONG, Zimu ZHOU, Yi XU, Jingkai GAO, Tongyu WEI, Tianran HE, Weifeng LV

    Reasoning has long been regarded as a distinctive hallmark of human cognition, and recent advances in the artificial intelligence community have increasingly focused on reasoning large language models (rLLMs). However, due to strict privacy regulations, domain-specific reasoning knowledge is often distributed across multiple data owners, limiting the ability of rLLMs to fully leverage these valuable resources. In this context, federated learning (FL) has gained increasing attention in both academia and industry as a promising privacy-preserving paradigm for the data-efficient training of rLLMs.

    In this paper, we conduct a comprehensive survey of federated rLLMs and propose a novel taxonomy based on training signals: those derived from raw data, from learned representations, and from preference feedback. For each category, we highlight emerging trends in how FL is used to enhance the reasoning capabilities of rLLMs, considering model effectiveness, communication cost, and privacy preservation. Finally, we envision future research directions and challenges based on insights from existing studies.
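
    As background for the FL paradigm the survey builds on, the sketch below shows federated averaging (FedAvg) in its simplest form: each data owner trains locally, and only model parameters, never raw data, are shared with the server. It is a generic illustration on a toy linear model, not a method taken from the surveyed work.

```python
# Minimal federated averaging (FedAvg) sketch on a linear least-squares model.
# Clients hold private data shards; the server only sees their parameter updates.
import numpy as np

def local_update(weights: np.ndarray, data: np.ndarray, targets: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One client's local training: plain gradient descent on its private shard."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - targets) / len(data)
        w -= lr * grad
    return w

def fedavg_round(global_w: np.ndarray, clients: list[tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
    """Server aggregates client models weighted by local dataset size."""
    sizes = np.array([len(x) for x, _ in clients], dtype=float)
    local_ws = [local_update(global_w, x, y) for x, y in clients]
    return np.average(local_ws, axis=0, weights=sizes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = rng.normal(size=8)
    clients = []
    for _ in range(4):                      # four data owners, each with a private shard
        x = rng.normal(size=(64, 8))
        clients.append((x, x @ true_w + 0.01 * rng.normal(size=64)))
    w = np.zeros(8)
    for _ in range(20):                     # communication rounds
        w = fedavg_round(w, clients)
    print("distance to true weights:", float(np.linalg.norm(w - true_w)))
```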

  • REVIEW ARTICLE
    Shao-Jie QIAO, Han-Lin FAN, Nan HAN, Lan DU, Yu-Han PENG, Rong-Min TANG, Xiao QIN

    Artificial intelligence-enabled database technology, known as AI4DB (Artificial Intelligence for Databases), is an active research area attracting significant attention and innovation. This survey first introduces the background of learning-based database techniques. It then reviews advanced query optimization methods for learning-based databases, focusing on four popular directions: cardinality/cost estimation, learning-based join order selection, learning-based end-to-end optimizers, and text-to-SQL models. Cardinality/cost estimation methods are classified as supervised or unsupervised according to the underlying learning models, with illustrative examples explaining their working mechanisms. Detailed descriptions of various query optimizers are also given to elucidate how each component of a learning-based query optimizer works. We then discuss the challenges and development opportunities of learning-based query optimizers. The survey further explores text-to-SQL models, a new research area within AI4DB. Finally, we consider the future development prospects of learning-based databases.
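
    As a concrete (and deliberately toy) picture of the supervised branch of learned cardinality estimation, the sketch below labels random range queries with their true counts and fits a regressor that predicts log-cardinality from a simple predicate featurization; both the featurization and the model are hypothetical illustrative choices, not a specific system covered in this survey.

```python
# Toy supervised cardinality estimation: featurise a range query and regress its
# log-cardinality from examples labelled by true counts on a synthetic table.
import numpy as np

rng = np.random.default_rng(0)
table = rng.uniform(0.0, 1.0, size=(10_000, 2))           # synthetic two-column table

def featurize(lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Query features: per-column log predicate width, plus a bias term."""
    return np.concatenate([np.log(hi - lo), [1.0]])

def true_cardinality(lo: np.ndarray, hi: np.ndarray) -> int:
    return int(np.all((table >= lo) & (table <= hi), axis=1).sum())

# Label random range queries with their true cardinalities (the supervised signal).
X, y = [], []
for _ in range(1000):
    lo = rng.uniform(0.0, 0.7, size=2)
    hi = lo + rng.uniform(0.05, 0.3, size=2)
    X.append(featurize(lo, hi))
    y.append(np.log1p(true_cardinality(lo, hi)))           # learn in log space for stable scale
X, y = np.array(X), np.array(y)

# Fit the estimator by least squares and use it on an unseen query.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
lo, hi = np.array([0.2, 0.3]), np.array([0.4, 0.55])
est = np.expm1(featurize(lo, hi) @ w)
print(f"estimated cardinality: {est:.0f}, true: {true_cardinality(lo, hi)}")
```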