2026-04-15, Volume 27 Issue 4
  • Review
    Hongqi MIN , Dingbang YANG , Chenhao QI , Yong ZENG

    With the rapid development of the low-altitude economy, low-altitude unmanned aerial vehicle (UAV) swarms are emerging as important components of sixth-generation (6G) mobile communication networks, facilitating “full coverage” and “Internet of Intelligence.” Integrated sensing and communication (ISAC) deeply integrates sensing functionality into wireless communication networks by sharing wireless infrastructures and resources such as base stations, antennas, radio frequency chains, and signal waveforms, thereby significantly improving the performance of low-altitude UAV swarms. This paper reviews the research status of low-altitude UAV swarm ISAC systems, analyzes the new challenges arising from key features of UAV swarms, including low-slow-small characteristics, high density, large quantity, complex low-altitude environments, and high swarm coordination requirements, presents a vision for future deployment, and proposes the so-called “Ten Ones” performance metrics tailored to low-altitude UAV swarm ISAC. To realize these ambitious key performance indicators for future UAV swarm ISAC, several promising technologies are discussed, such as new array architectures, including extremely-large multiple-input multiple-output (XL-MIMO), sparse XL-MIMO, and reconfigurable antenna arrays, sparse time-frequency resource allocation, and channel knowledge maps. Furthermore, the potential of exploiting UAV swarms as airborne ISAC platforms is discussed. Finally, future research directions are outlined, offering a guideline for the design and development of low-altitude UAV swarm ISAC systems.

  • Research Article
    Jiaxuan DU , Hao WU , Qing MA , Guohui TIAN , Zhixian ZHAO , Shuwen LENG

    The location where a robot grasps an object is closely related to the task type. For the same object, different user requirements may necessitate different grasping strategies. Visual affordance serves as a reliable source of prior knowledge for manipulation. Existing methods learn affordance from images or videos, but planar affordance lacks the spatial information required for 6-degree-of-freedom (6-DoF) manipulation. Furthermore, current approaches are limited to affordances associated with predefined categories and cannot directly infer affordances from user instructions. To address these limitations, we propose a novel task: instruction-driven three-dimensional (3D) object affordance segmentation. To support this research, we introduce an instruction-affordance dataset (IAD), a challenging dataset consisting of 7190 object instances across 20 common object categories, paired with 624 manipulation instructions that specify the corresponding affordances. To evaluate generalization to novel commands, our dataset includes both seen and unseen settings. Building on this, we design an instruction-driven 3D affordance segmentation (IDAS) network, which extracts point cloud features and integrates instruction features layer by layer. Given a user instruction, our method segments suggested manipulation regions on the object’s point cloud, thereby guiding the selection of optimal grasp poses. Experimental results show that our method outperforms other related approaches under both seen and unseen settings, demonstrating its ability to generalize to diverse user commands and previously unseen affordances.

  • Research Article
    Binglong LI , Shilong YU , Yong ZHAO , Yifeng SUN , Chaowen CHANG , Qingxian WANG

    The recovery of evidence from fragmented image files is a prominent research focus in the field of file carving. To address image fragment reassembly, this paper analyzes the Joint Photographic Experts Group (JPEG) image structure and proposes a fragment connection weighting algorithm based on discrete cosine transform (DCT) semantic features, along with a weight adjustment factor that leverages image compression characteristics. By integrating these components, the algorithm effectively determines the fragment sequence in JPEG files, and a practical carving algorithm is designed. Experiments conducted on disk and memory demonstrate that the adjustment factor-based algorithm outperforms the DCT-only method in identifying true best matches (reducing false positives). Disk experiments achieve an average carving precision of 94.4%, surpassing existing methods, while memory experiments validate the feasibility of the approach; a theoretical analysis further explains failure scenarios caused by interference from software such as Windows Photo Viewer.
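The paper's exact weighting algorithm and adjustment factor are not reproduced in the abstract; the sketch below (all names and the similarity measure are illustrative, not the authors' method) shows only the underlying idea of scoring candidate fragment joins by DCT-coefficient continuity across the cut boundary and picking the best-matching successor.

```python
import numpy as np

def boundary_similarity(frag_a, frag_b):
    """Score how plausibly frag_b continues frag_a by comparing the
    DC coefficients of the 8x8 DCT blocks adjacent to the cut boundary.
    Each fragment is modeled as a list of 8x8 DCT-coefficient blocks;
    this toy version compares only the DC term (block-average intensity),
    since natural images vary smoothly across neighboring blocks."""
    dc_a = frag_a[-1][0, 0]   # DC coefficient of the last block in fragment A
    dc_b = frag_b[0][0, 0]    # DC coefficient of the first block in fragment B
    return -abs(dc_a - dc_b)  # smaller jump -> higher (less negative) score

def best_successor(frag, candidates):
    """Among candidate fragments, pick the one whose boundary blocks
    match the end of `frag` best."""
    return max(candidates, key=lambda c: boundary_similarity(frag, c))
```

A real carver would also exploit AC coefficients, entropy-coding state, and the compression-based adjustment factor described in the paper.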

  • Research Article
    Xinyue ZHANG , Kunyi LAI , Xin TANG

    Reversible data hiding in the encrypted domain (RDH-ED) based on homomorphic encryption provides a promising approach for privacy-preserving data sharing, yet existing methods based on the Nth-degree truncated polynomial ring unit (NTRU) face a fundamental conflict between embedding capacity and reversibility, often requiring preprocessing of plaintext, which in turn compromises the randomness of the resulting ciphertext. To address these issues, a novel RDH-ED scheme integrating the Chinese remainder theorem (CRT) with the NTRU cryptosystem is proposed in this study. The proposed scheme operates without any preprocessing of the plaintext and constructs multichannel redundancy in the ciphertext domain, thereby fully preserving the original polynomial structure of the plaintext. By employing a CRT-based encoding, multiple bits of information are enabled to be carried by a single polynomial coefficient, achieving an embedding capacity of 503 bits per polynomial with moderate-sized parameters. Moreover, the embedded data can be extracted before decryption via pre-negotiated coprime parameters, offering greater operational flexibility. Rigorous mathematical constraints ensure that the redundancy term is automatically eliminated during decryption, thereby guaranteeing lossless recovery of the original content. Experimental results demonstrate that the proposed scheme achieves a substantially higher embedding capacity compared to predominant RDH-ED methods based on NTRU, Paillier, and ElGamal cryptosystems, without compromising security or efficiency.
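As a rough illustration of how a CRT encoding lets a single value carry several channels of data, the stand-alone sketch below embeds per-channel residues under pre-negotiated pairwise-coprime moduli and extracts them without any decryption key. This is not the paper's NTRU-coupled construction; the function names and moduli are illustrative only.

```python
from math import prod

def crt_embed(residues, moduli):
    """Combine per-channel residues into one integer via the Chinese
    remainder theorem. The moduli must be pairwise coprime; the result is
    the unique x < prod(moduli) with x % m_i == residues[i] per channel."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return x % M

def crt_extract(x, moduli):
    """Recover the embedded residues from the combined value alone,
    mirroring extraction before decryption via the shared coprime moduli."""
    return [x % m for m in moduli]
```

For example, with moduli (5, 7, 9) the residues (3, 2, 8) combine into 233, and reducing 233 modulo each parameter returns them exactly.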

  • Research Article
    Wenbo ZHANG , Bo DING , Shuai WEI , Qinrang LIU , Hong YU , Ke SONG , Wei GUO , Bo MEI , Rui ZHENG

    In recent years, mature advanced packaging technologies have increasingly enabled the integration of multiple small dies into larger chips, while retaining chip-scale density and high-bandwidth interconnects. To address the inefficiencies of manual design and the challenges of heterogeneous optimization in wafer-scale chip (WSC) development, we systematically explore key factors in WSC architecture design. We integrate chip layout, operator mapping, and hardware-software codesign, and formulate the WSC architecture exploration problem as a multi-objective optimization task. First, we establish a hierarchical architecture model for WSCs, unifying the quantification of core constraints and interconnect topology constraints; second, we propose a hierarchical multi-objective collaborative optimization framework to jointly optimize physical constraints and task mapping communication patterns; finally, we develop a WSC optimizer toolchain that supports mixed-granularity simulation and generates optimal configurations for representative workloads. Experimental results demonstrate that compared with traditional computer architectures, the optimized architectures generated by our WSC optimizer achieve up to a 22× throughput improvement and a 5× latency reduction in application domains such as cryptographic decryption and signal processing.
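The multi-objective formulation can be illustrated with the standard notion of Pareto dominance over candidate configurations. The sketch below is illustrative only (the paper's optimizer handles many more constraints); it assumes two metrics, throughput to maximize and latency to minimize, and filters a candidate set down to its non-dominated front.

```python
def dominates(a, b):
    """Config a dominates config b if it is no worse on every metric and
    strictly better on at least one (throughput up, latency down)."""
    return (a["throughput"] >= b["throughput"] and a["latency"] <= b["latency"]
            and (a["throughput"] > b["throughput"] or a["latency"] < b["latency"]))

def pareto_front(configs):
    """Keep only the non-dominated candidate configurations; the final
    choice among them then reflects workload-specific trade-offs."""
    return [c for c in configs
            if not any(dominates(o, c) for o in configs if o is not c)]
```

A real exploration loop would generate candidates under layout and interconnect-topology constraints and re-evaluate them via simulation, but the selection step reduces to exactly this dominance test.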

  • Research Article
    Jiajia JIAO , Yixu YU

    Large language models (LLMs) have exhibited outstanding performance across a wide range of natural language processing (NLP) tasks. However, the rising prevalence of hardware transient faults has made silent data corruptions (SDCs) in LLMs increasingly problematic, severely degrading output quality and user experience. State-of-the-art protection schemes primarily rely on hardware-assisted algorithm-based fault tolerance (ABFT) or boundary-setting-driven online fault tolerance (FT2) for selective layers, yet these solutions suffer from strict hardware dependencies, substantial overhead, or incomplete coverage. To address these limitations, we propose RetryTrigger, a novel hardware-free fault-aware inference methodology capable of handling all potential faults. During LLM inference, RetryTrigger dynamically collects runtime output features (e.g., maximum probability, top-k probability gaps, output entropy, logits statistics, and inference latency), which are used to train a LightGBM meta-model. This meta-model accurately predicts whether duplicate inference should be performed, thereby effectively mitigating faults while preserving efficiency without additional hardware dependence. Extensive experiments on seven representative LLMs (including T5-Small, RoBERTa, BioMedBERT, Qwen2.5-Coder-0.5B/7B, MiniMind, and Opt) demonstrate that RetryTrigger reduces SDC rates by up to 95.33% (92.97% on average) and incurs a performance overhead as low as 2.4012% (4.1167% on average), offering a superior balance between reliability and efficiency compared to state-of-the-art solutions.
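The runtime features RetryTrigger collects can be derived directly from a decoding step's logits. The sketch below is a simplified stand-in: it computes three of the listed features, and a fixed threshold rule (with illustrative parameter values) replaces the trained LightGBM meta-model that makes the actual retry decision.

```python
import math

def runtime_features(logits):
    """Derive fault-detection features from one inference step's logits:
    softmax maximum probability, top-2 probability gap, and output entropy.
    These are among the signals fed to the retry meta-model."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]            # numerically stable softmax
    total = sum(exps)
    probs = sorted((e / total for e in exps), reverse=True)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return {"max_prob": probs[0],
            "top2_gap": probs[0] - probs[1],
            "entropy": entropy}

def should_retry(features, max_prob_floor=0.5, entropy_ceiling=1.5):
    """Toy decision rule standing in for the trained meta-model:
    low confidence or high output entropy triggers duplicate inference."""
    return (features["max_prob"] < max_prob_floor
            or features["entropy"] > entropy_ceiling)
```

A confident step (one logit dominating) yields a high max probability and near-zero entropy, so no retry fires; a flat, uncertain distribution trips the rule and would trigger a duplicate inference.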