2026-04-15, Volume 20 Issue 4

  • LETTER
    Keji HAN, Yao GE, Yun LI
  • LETTER
    Hao ZHOU, Yexuan SHI, Yuxiang ZENG, Yongxin TONG
  • LETTER
    Haowei LIU, Ye WANG, Xiaotong JIANG, Zhongqing WANG, Guodong ZHOU
  • REVIEW ARTICLE
    Chen-Yang ZHU, Xin-Yao LIU, Kai XU, Ren-Jiao YI

    In recent years, 3D editing has become a significant research topic, primarily due to its ability to manipulate 3D assets in ways that fulfill the growing demand for personalized customization. The advent of radiance field-based methods, exemplified by pioneering frameworks such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), represents a pivotal innovation in scene representation and novel view synthesis, greatly enhancing the effectiveness and efficiency of 3D editing. This survey provides a comprehensive overview of current advancements in 3D editing based on NeRF and 3DGS, systematically categorizing existing methods according to specific editing tasks while analyzing current challenges and potential research directions. Through this survey, we aim to offer a valuable resource for researchers in the field and to encourage innovative ideas that drive further progress in 3D editing.

  • RESEARCH ARTICLE
    Dan HAN, Jie ZHANG, Shiguang SHAN

    Body height and weight estimation from a single non-frontal face image suffers from poor performance due to large face-pose variation and a lack of labeled data. In this paper, we propose a face-based body height and weight estimation method that leverages auxiliary tasks and pose disentanglement to address these issues. Specifically, motivated by the relatedness of the gender, age, height, and weight estimation tasks, we employ gender and age estimation as auxiliary tasks to improve the performance of the primary tasks, i.e., height and weight estimation. In addition, we remove pose-relevant features from the input to further boost the performance of both the primary and auxiliary tasks. Extensive experiments on both small- and large-pose datasets demonstrate the superiority of the proposed method.
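    As general background only (not the architecture from this paper), the PyTorch sketch below shows one common way to combine primary regression heads, auxiliary classification heads, and adversarial pose removal via a gradient-reversal layer; the backbone, layer sizes, and the pose branch are all placeholder assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass
    (a common trick for adversarially removing pose information)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class MultiTaskHeightWeightNet(nn.Module):
    def __init__(self, feat_dim=512, num_age_bins=8):
        super().__init__()
        # Shared face-feature backbone (a tiny placeholder CNN; any face encoder fits here).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Primary regression heads.
        self.height_head = nn.Linear(feat_dim, 1)
        self.weight_head = nn.Linear(feat_dim, 1)
        # Auxiliary classification heads.
        self.gender_head = nn.Linear(feat_dim, 2)
        self.age_head = nn.Linear(feat_dim, num_age_bins)
        # Adversarial pose head: trained to predict yaw/pitch/roll while the
        # gradient-reversal layer pushes the backbone to discard pose cues.
        self.pose_head = nn.Linear(feat_dim, 3)

    def forward(self, x, grl_lambda=1.0):
        f = self.backbone(x)
        pose = self.pose_head(GradReverse.apply(f, grl_lambda))
        return {
            "height": self.height_head(f),
            "weight": self.weight_head(f),
            "gender": self.gender_head(f),
            "age": self.age_head(f),
            "pose": pose,
        }
```

    A training loop would then combine regression losses on height and weight with cross-entropy losses on gender, age, and the reversed pose branch; the exact losses and weights used by the authors are not specified in the abstract.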

  • RESEARCH ARTICLE
    Kangkang SHI, Jiongjiong REN, Shaozhen CHEN

    As a family of tweakable block ciphers, HALFLOOP is standardized in the interoperability and performance standards for medium- and high-frequency radio systems published by the United States Department of Defense. Although HALFLOOP-24 has already been broken by practical real-world attacks, whether stronger attacks exploiting the cipher's structure can be mounted against the two larger HALFLOOP variants remains to be explored. Because HALFLOOP's internal state is smaller than its master key, diffusion in the key schedule is low. Since related-key boomerang attacks are particularly effective against such ciphers and can even reach full-round attacks, we evaluate the resistance of the two larger HALFLOOP variants against related-key boomerang attacks in this paper. First, we propose a more efficient model for searching sandwich distinguishers of ciphers with non-linear key schedules. Specifically, we derive additional constraints beyond simple relationships in the internal linear layer to restrict candidate distinguishers to a smaller space. In addition, we utilize the ladder-switch effect in the related-key model to guarantee a probability-one differential transition among the master-key quartet, thereby avoiding possible weak-key attacks or invalid trails. Second, applying the model to HALFLOOP, we present a full-round related-key boomerang attack on HALFLOOP-48 and nearly full-round related-key attacks on HALFLOOP-96. These results demonstrate that the security of the two larger HALFLOOP variants is weak in the related-key scenario. Therefore, in addition to the serious flaw introduced by the tweak, the low diffusion of the key schedule algorithm also deserves attention.
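    As standard background on boomerang and sandwich distinguishers in general (not figures or trails from this paper), the distinguisher probability is usually estimated as follows, where the attacker splits the cipher into sub-ciphers and combines an upper and a lower differential:

```latex
% Textbook boomerang/sandwich estimate; not HALFLOOP-specific values.
% Split the cipher as E = E_1 \circ E_m \circ E_0, with an upper differential
% \alpha \to \beta over E_0 holding with probability p and a lower differential
% \gamma \to \delta over E_1 holding with probability q.
\Pr[\text{boomerang returns}] \approx p^{2} q^{2} r,
\qquad
r = \Pr\bigl[\text{the two trails connect through } E_m\bigr],
% where r is evaluated via the boomerang connectivity table (BCT) or switching
% effects such as the ladder switch mentioned in the abstract.
```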

  • RESEARCH ARTICLE
    Min HAN, Peng XU, Willy SUSILO, Wei WANG

    Public-key encryption with keyword search (PEKS) is a well-known method for privacy-preserving keyword search in encrypted email systems due to its public-key characteristics. However, we have observed that even without a keyword-search trapdoor, traditional PEKS allows the server to distinguish ciphertexts effectively, compromising semantic security. To address this limitation, we introduce dynamic searchable public-key encryption (DSPE), a concept that conceals relationships between searchable ciphertexts and their corresponding encrypted files, ensuring semantic security in both theory and practice. DSPE also enables the server to delete specific ciphertexts as requested by the receiver. We present a DSPE instance with provable semantic security in the random oracle model, which offers the advantage of sublinear complexity in identifying matching ciphertexts and deleting intended ones. Through experimental validation, we demonstrate the feasibility of this instance. Furthermore, we construct a DSPE-based cloud email system in the double-cloud model and evaluate its performance.
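    To make the functionality concrete, the following is a hypothetical Python interface mirroring the operations named in the abstract (searchable encryption, trapdoor generation, sublinear search, and ciphertext deletion); the method names, types, and signatures are illustrative assumptions, not the paper's actual algorithms or security proofs.

```python
from abc import ABC, abstractmethod
from collections.abc import Iterable

class DSPE(ABC):
    """Hypothetical interface for a dynamic searchable public-key encryption scheme,
    reflecting only the operations described in the abstract above."""

    @abstractmethod
    def keygen(self) -> tuple[bytes, bytes]:
        """Return (public_key, secret_key) for the receiver."""

    @abstractmethod
    def encrypt(self, public_key: bytes, keyword: str, file_id: bytes) -> bytes:
        """Produce a searchable ciphertext whose link to file_id stays hidden from the server."""

    @abstractmethod
    def trapdoor(self, secret_key: bytes, keyword: str) -> bytes:
        """Receiver-side trapdoor authorizing a search for one keyword."""

    @abstractmethod
    def search(self, trapdoor: bytes, ciphertexts: Iterable[bytes]) -> list[bytes]:
        """Server-side matching of ciphertexts against the trapdoor (claimed sublinear)."""

    @abstractmethod
    def delete(self, secret_key: bytes, ciphertext_ids: Iterable[bytes]) -> bytes:
        """Deletion token letting the server remove the specified ciphertexts."""
```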

  • RESEARCH ARTICLE
    Li LIU, Puwen WEI, Shuchang LIU, Zirui WANG, Da HU, Zengjie KOU

    With the rising volume of transactions on blockchains, signature verification has become a critical efficiency bottleneck, hindering scalability and performance. This paper presents a general approach to batch verification of arbitrary signatures on blockchains. By leveraging the memory-friendliness of incremental verifiable computation (IVC) and optimizing for blockchain environments, the proposed scheme enhances scalability, reduces memory consumption, and ensures compatibility with common devices while supporting an arbitrary number of signature verifications. The approach allows IVC proofs to be generated concurrently while signatures are still being received from other nodes, making it particularly well-suited for low-latency blockchain applications. As a concrete instantiation of our approach, we introduce BEATS (Batch ECDSA Transaction verification Scheme), where the underlying SNARK is instantiated by Spartan with the Bulletproof commitment. Our implementation, evaluated on a virtual machine with 8 cores and 16 GB of RAM, shows significant performance gains over SpartanBP, the direct construction that uses Spartan with the Bulletproof commitment to verify a batch of ECDSA signatures. BEATS speeds up the prover by 3–7 times and the verifier by 48–240 times when handling up to 2^11 ECDSA signatures, the maximum batch size supported by SpartanBP. For larger batches exceeding 2^10, our scheme also outperforms the baseline approach that verifies ECDSA signatures one by one without any proof system: our verifier achieves a speedup of 21–174 times over this baseline as the batch size grows to 2^20. Furthermore, BEATS exhibits a remarkably low memory footprint, with peak memory usage remaining below 1 GB.
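    The toy Python sketch below illustrates only the incremental structure that makes IVC attractive here: verification work is folded step by step into a running accumulator while signatures keep arriving. It uses a plain hash chain and a caller-supplied `verify` function instead of a real SNARK/IVC prover, so it is a conceptual illustration of the control flow, not BEATS itself.

```python
import hashlib
from collections.abc import Callable, Iterable

# (message, signature, public_key) as opaque byte strings; encoding left unspecified.
Sig = tuple[bytes, bytes, bytes]

def incremental_batch_digest(
    sigs: Iterable[Sig],
    verify: Callable[[bytes, bytes, bytes], bool],
) -> bytes:
    """Fold per-signature verification into a running digest, one step at a time.

    In a real IVC scheme, each step would instead produce a succinct proof of
    "signature i verifies AND the proof for step i-1 verifies", so the final
    proof attests to the whole batch.
    """
    acc = hashlib.sha256(b"illustrative accumulator, not a SNARK").digest()
    for i, (msg, sig, pk) in enumerate(sigs):
        if not verify(msg, sig, pk):
            raise ValueError(f"signature {i} failed verification")
        acc = hashlib.sha256(acc + msg + sig + pk).digest()  # extend the chain
    return acc
```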

  • RESEARCH ARTICLE
    Jizhe JIA, Meng SHEN, Qingjun YUAN, Yong LIU, Jing WANG, Jian KONG, Liang HUANG, Haotian HE, Liehuang ZHU

    Network traffic encryption techniques are widely adopted to protect data confidentiality and prevent privacy leakage during data transmission. However, malware often leverages these encryption techniques to conceal malicious activities. Recent research has demonstrated the effectiveness of machine learning and deep learning based malware traffic detection methods. However, these methods rely on a sufficient amount of labeled data being readily available for model training, which limits their ability to transfer to the detection of new malware.

    In this paper, we propose Malcom, an adaptive encrypted malware traffic detection method based on fully convolutional masked autoencoders that detects malware traffic hidden in encrypted traffic. We first propose a novel traffic representation named Header-Payload Matrix (HPM) to extract discriminative features that distinguish malware traffic from benign traffic. We then develop a hierarchical ConvNeXt traffic encoder and a lightweight ConvNeXt traffic decoder to learn high-level features from a large amount of unlabeled data. The masked autoencoder framework enables the model to adapt to new malware detection by fine-tuning with only a few labeled samples. We conduct extensive experiments on real-world datasets to evaluate Malcom. The results demonstrate that Malcom outperforms state-of-the-art (SOTA) methods in two typical scenarios. In particular, in the few-shot learning scenario, Malcom achieves an average F1 score of 97.35%, an improvement of 8.24% over the SOTA method, by fine-tuning with only 10 samples per malware type.
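    The exact HPM layout is defined in the paper; purely as an illustration of the general idea (a fixed-size byte matrix per flow that a convolutional encoder can consume), here is a toy NumPy sketch, with all dimensions (`n_packets`, `header_len`, `payload_len`) chosen arbitrarily.

```python
import numpy as np

def toy_header_payload_matrix(packets, n_packets=16, header_len=40, payload_len=64):
    """Stack truncated/padded header and payload bytes of one flow into a 2-D matrix.

    `packets` is a list of (header_bytes, payload_bytes) pairs for a single flow.
    This only illustrates the notion of a fixed-size byte matrix; the real HPM
    construction in the paper may differ in layout, padding, and normalization.
    """
    rows = []
    for header, payload in packets[:n_packets]:
        h = np.frombuffer(header[:header_len].ljust(header_len, b"\x00"), dtype=np.uint8)
        p = np.frombuffer(payload[:payload_len].ljust(payload_len, b"\x00"), dtype=np.uint8)
        rows.append(np.concatenate([h, p]))
    while len(rows) < n_packets:  # pad short flows with all-zero rows
        rows.append(np.zeros(header_len + payload_len, dtype=np.uint8))
    return np.stack(rows).astype(np.float32) / 255.0  # normalize bytes to [0, 1]
```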

  • RESEARCH ARTICLE
    Yan-Qing YAO, Yun-Jia ZHANG, Zhi-Yi LIU, Yu-Xuan WANG, Xin-Yu TAN, Zhengde ZHAI

    Nowadays, vast and rapidly growing information acts as a digital record of social activities and is widely collected and stored as an economic asset. To significantly reduce the difficulty and cost of local data management, cloud storage services provide a highly available, high-performance, and low-cost solution for hosting user data, enabling remote access, backup, and sharing of data stored in the cloud. However, this service model is not without security risks, including exposure of user privacy, low trustworthiness of data, and unauthorized access. To address these concerns, attribute-based encryption (ABE) schemes allow fine-grained access policies to be enforced while ensuring the confidentiality and availability of data stored in the cloud environment. The issues of collusion among authorities, excessive decryption overhead, and high complexity of attribute revocation have attracted considerable research attention, and many works have emerged. However, extending the functionality of ABE schemes to satisfy multiple requirements and improving their existing functionality remain urgent problems. Motivated by these problems, we propose a novel multi-functional multi-authority ABE scheme that incorporates multi-authority key generation, outsourced decryption, malicious user tracking, flexible attribute revocation, and real-time policy updates, thereby providing fine-grained access control as well as confidentiality for data stored in cloud environments. Similar to prior works, we analyze the static security, forward security, and collusion resistance of the proposed scheme for completeness. Storage and computational efficiency evaluations show that our scheme achieves lower storage costs and computational overhead than existing schemes with similar functionalities.
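    As a generic illustration of what a fine-grained access policy expresses (not the cryptographic enforcement used by the proposed ABE scheme), the toy snippet below evaluates a monotone threshold policy over a user's attribute set; the attribute names and the example policy are made up.

```python
from dataclasses import dataclass
from typing import List, Set, Union

@dataclass
class Threshold:
    """A k-of-n gate over attribute names or nested gates (monotone access structure)."""
    k: int
    children: List[Union[str, "Threshold"]]

def satisfies(node: Union[str, Threshold], attrs: Set[str]) -> bool:
    """Plaintext policy check; real ABE enforces this cryptographically at decryption."""
    if isinstance(node, str):
        return node in attrs
    return sum(satisfies(c, attrs) for c in node.children) >= node.k

# Example policy: (doctor AND cardiology) OR hospital_admin
policy = Threshold(1, [Threshold(2, ["doctor", "cardiology"]), "hospital_admin"])
print(satisfies(policy, {"doctor", "cardiology"}))  # True
print(satisfies(policy, {"doctor"}))                # False
```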

  • REVIEW ARTICLE
    Yu PENG, Qi FENG, De-Biao HE, Min LUO

    Threshold signature, as a privacy-preserving distributed signature primitive, has become an underlying technology in various fields over the last decade. It protects against single points of failure and effectively ensures key security. In recent years, many digital signatures have been thresholded, and many new techniques, algorithms, and protocols have been proposed. This paper introduces the mainstream threshold signature schemes built on several standardized signatures. We comprehensively investigate various aspects of these threshold signature schemes for comparison and evaluation, and discuss relevant applications and potential future directions for threshold signatures.
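    As textbook background on the threshold idea itself (none of the surveyed protocols is reproduced here), the toy Python sketch below implements (t, n) Shamir secret sharing over a prime field: key material is split so that any t of n shareholders can act, which is the property threshold signatures build on, typically without ever reconstructing the key in one place.

```python
import random

P = 2**61 - 1  # a Mersenne prime used as a toy field modulus

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares such that any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over exactly t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```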

  • RESEARCH ARTICLE
    Xuguo WANG, Diming ZHANG, Chenglin LI, Xuan JIANG, Ligeng CHEN

    As malware techniques evolve, threat actors continuously refine their code with evasion and anti-analysis strategies, making sandbox-based cyber threat intelligence (CTI) data collection essential for analyzing malicious behaviors. However, no prior research has systematically examined the relationship between execution time and intelligence data completeness, nor its impact on intelligence data fidelity. Existing sandbox configurations typically rely on predefined execution time thresholds without empirical justification, potentially leading to premature termination of critical behaviors or excessive computational overhead. To address this gap, we analyze malware execution dynamics through system calls, code execution, and data entry access patterns mapped within the MITRE ATT&CK framework. Leveraging Extreme Value Theory (EVT), we model the probabilistic distribution of intelligence data extraction over time, enabling us to estimate the likelihood of acquiring additional intelligence data as execution progresses. Our analysis reveals that this probability decreases with time: at a 95% confidence level, the probability of acquiring additional intelligence data after three minutes is 0.092, and after five minutes it is 0.074, indicating a diminishing rate of intelligence extraction over extended execution periods. These findings indicate that extending execution time beyond a certain threshold yields limited additional intelligence data, highlighting the importance of determining an optimal execution time. By establishing an empirical framework for optimizing sandbox execution time in intelligence data extraction, we introduce a quantitative and principled execution model, providing a scientifically grounded methodology for malware analysis. Our findings lay a foundation for future research on adaptive threat intelligence data collection, enabling a data-driven approach to execution time selection in large-scale security operations.
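    The SciPy sketch below shows the generic peaks-over-threshold EVT recipe: fit a generalized Pareto distribution to excesses over a threshold, then read off tail probabilities. It runs on synthetic inter-event times; the paper's reported probabilities (0.092 at three minutes, 0.074 at five minutes) come from its own measurements, not from this toy data.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic placeholder data: time (in seconds) until the next new intelligence event.
rng = np.random.default_rng(0)
inter_event_seconds = rng.exponential(scale=20.0, size=5000)

# Peaks-over-threshold: keep excesses above a high quantile and fit a GPD to them.
threshold = np.quantile(inter_event_seconds, 0.90)
excesses = inter_event_seconds[inter_event_seconds > threshold] - threshold
shape, loc, scale = genpareto.fit(excesses, floc=0.0)  # location fixed at 0, as usual for POT

def tail_prob(t_seconds: float) -> float:
    """P(waiting time for a new event exceeds t_seconds), valid for t above the threshold."""
    p_exceed_threshold = (inter_event_seconds > threshold).mean()
    return p_exceed_threshold * genpareto.sf(t_seconds - threshold, shape, loc=0.0, scale=scale)

print(tail_prob(180.0), tail_prob(300.0))  # e.g., probabilities at 3 and 5 minutes
```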

  • LETTER
    Yixin XIANG, Keyu LIU, Leqi WANG, Jianyu ZHOU

ISSN 2095-2228 (Print)
ISSN 2095-2236 (Online)
CN 10-1014/TP