2025-12-01, Volume 5, Issue 4

  • research-article
    Lin Li, Shiye Wang, Changsheng Li, Ye Yuan, Guoren Wang

    Continual learning, characterized by the sequential acquisition of multiple tasks, has emerged as a prominent challenge in deep learning. During continual learning, deep neural networks experience a phenomenon known as catastrophic forgetting, wherein networks lose the knowledge acquired on previous tasks when training on new tasks. Recently, parameter-efficient fine-tuning (PEFT) methods have gained prominence in tackling the challenge of catastrophic forgetting. However, within the realm of domain incremental learning, a characteristic type of continual learning, there exists an additional, overlooked inductive bias that warrants attention beyond existing approaches. In this paper, we propose a novel PEFT method called Domain Correlation Low-Rank Adaptation for domain incremental learning. Our approach puts forward a domain-correlated loss, which encourages the weights of the LoRA modules for adjacent tasks to become more similar, thereby leveraging the correlation between different task domains. Furthermore, we consolidate the classifiers of different task domains to improve prediction performance by capitalizing on the knowledge acquired from diverse tasks. To validate the effectiveness of our method, we conduct comparative experiments and ablation studies on a publicly available domain incremental learning benchmark dataset. The experimental results demonstrate that our method outperforms state-of-the-art approaches.
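
    The abstract does not spell out the loss, so here is only a minimal PyTorch sketch of the idea it describes: penalizing the gap between the low-rank updates of adjacent task domains. The `LoRALayer` class, the Frobenius-norm penalty, and the `lambda_corr` weight are illustrative assumptions rather than the paper's formulation.

    ```python
    # Hypothetical sketch of a domain-correlated regularizer on LoRA weights;
    # it only illustrates pulling adjacent task domains' updates together.
    import torch
    import torch.nn as nn

    class LoRALayer(nn.Module):
        def __init__(self, d_in, d_out, rank=8):
            super().__init__()
            self.A = nn.Parameter(torch.randn(d_in, rank) * 0.01)  # low-rank factor A
            self.B = nn.Parameter(torch.zeros(rank, d_out))        # low-rank factor B

        def delta(self):
            # Effective low-rank weight update: delta_W = A @ B
            return self.A @ self.B

    def domain_correlated_loss(current: LoRALayer, previous: LoRALayer) -> torch.Tensor:
        # Pull the current domain's update towards the frozen update of the
        # adjacent previous domain.
        return torch.norm(current.delta() - previous.delta().detach(), p="fro") ** 2

    # Usage: total_loss = task_loss + lambda_corr * domain_correlated_loss(lora_t, lora_prev)
    ```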

  • research-article
    Qasim Zia, Saide Zhu, Haoxin Wang, Zafar Iqbal, Yingshu Li

    In recent research on the Digital Twin-based Vehicular Ad hoc Network (DT-VANET), Federated Learning (FL) has shown its ability to provide data privacy. However, FL struggles to adequately train a global model when confronted with data heterogeneity and data sparsity among vehicles, which results in suboptimal accuracy when making precise predictions for different vehicle types. To address these challenges, this paper applies Federated Transfer Learning (FTL) to cluster vehicles by vehicle type and proposes a novel Hierarchical Federated Transfer Learning (HFTL) scheme. We construct a framework for DT-VANET, along with two algorithms designed for cloud server model updates and intra-cluster federated transfer learning, to improve the accuracy of the global model. In addition, we develop a data quality score-based mechanism to prevent the global model from being affected by malicious vehicles. Lastly, detailed experiments on real-world datasets, covering different performance metrics, verify the effectiveness and efficiency of our algorithm.
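
    The data quality scoring rule itself is not described in the abstract; the snippet below is a hypothetical sketch of how such a score could gate and weight intra-cluster aggregation. The `scores` values, the `threshold`, and the weighted averaging are all assumptions for illustration.

    ```python
    # Hypothetical quality-score-gated aggregation for one vehicle cluster:
    # low-scoring (possibly malicious) updates are dropped, the rest are
    # averaged with weights proportional to their scores.
    from typing import List

    import numpy as np

    def aggregate_cluster(updates: List[np.ndarray], scores: List[float],
                          threshold: float = 0.5) -> np.ndarray:
        kept = [(u, s) for u, s in zip(updates, scores) if s >= threshold]
        if not kept:
            raise ValueError("no trustworthy updates in this cluster")
        weights = np.array([s for _, s in kept], dtype=float)
        weights /= weights.sum()
        return sum(w * u for (u, _), w in zip(kept, weights))
    ```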

  • research-article
    Na Li, Hangguan Shan, Meiyan Song, Yong Zhou, Zhongyuan Zhao, Howard H. Yang, Fen Hou

    Federated learning (FL) with synchronous model aggregation suffers from the straggler issue because of heterogeneous transmission and computation delays among different agents. In mobile wireless networks, this issue is exacerbated by time-varying network topology due to agent mobility. Although asynchronous FL can alleviate straggler issues, it still faces critical challenges in terms of algorithm design and convergence analysis because of dynamic information update delay (IU-Delay) and dynamic network topology. To tackle these challenges, we propose a decentralized FL framework based on gradient descent with momentum, named decentralized momentum federated learning (DMFL). We prove that DMFL is globally convergent on convex loss functions under the bounded time-varying IU-Delay, as long as the network topology is uniformly jointly strongly connected. Moreover, DMFL does not impose any restrictions on the data distribution over agents. Extensive experiments are conducted to verify DMFL’s performance superiority over the benchmarks and to reveal the effects of diverse parameters on the performance of the proposed algorithm.
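
    DMFL's exact update is not reproduced in the abstract; the sketch below only illustrates, under assumed doubly stochastic mixing weights over the current topology and standard heavy-ball momentum, what one decentralized momentum step could look like when neighbor models arrive with delay.

    ```python
    # Minimal sketch (not the paper's DMFL algorithm): mix the local model with
    # possibly delayed neighbor models, then take a momentum gradient step.
    def dmfl_step(x, m, neighbor_models, mixing_weights, grad, lr=0.1, beta=0.9):
        """x: local model array; m: momentum buffer; neighbor_models: delayed
        neighbor models; mixing_weights: weights for [x, *neighbor_models]
        summing to one; grad: callable returning the local gradient."""
        mixed = mixing_weights[0] * x
        for w, xj in zip(mixing_weights[1:], neighbor_models):
            mixed = mixed + w * xj          # consensus step over current neighbors
        m = beta * m + grad(mixed)          # heavy-ball momentum on the local gradient
        x = mixed - lr * m                  # descend from the consensus estimate
        return x, m
    ```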

  • research-article
    Ao Xiong, Chenbin Qiao, Wenjing Li, Dong Wang, Da Li, Bo Gao, Weixian Wang

    Anomaly detection in blockchain transactions faces several challenges, the most prominent being the imbalance between positive and negative samples. Most transaction data are normal, with only a small fraction of anomalous data. Additionally, blockchain transaction datasets tend to be small and often incomplete, which complicates the process of anomaly detection. When using simple AI models, selecting the appropriate model and tuning parameters becomes difficult, resulting in poor performance. To address these issues, this paper proposes GANAnomaly, an anomaly detection model based on Generative Adversarial Networks (GANs) and Autoencoders. The model consists of three components: a data generation model, an encoding model, and a detection model. Firstly, the Wasserstein GAN (WGAN) is employed as the data generation model. The generated data is then used to train an encoding model that performs feature extraction and dimensionality reduction. Finally, the trained encoder serves as the feature extractor for the detection model. This approach leverages GANs to mitigate the challenges of low data volume and data imbalance, while the encoder extracts relevant features and reduces dimensionality. Experimental results demonstrate that the proposed anomaly detection model outperforms traditional methods by more accurately identifying anomalous blockchain transactions, reducing the false positive rate, and improving both accuracy and efficiency.
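
    Only the autoencoder piece is sketched below (the WGAN generator and the downstream detector are omitted); the layer sizes, class name, and the reconstruction-error baseline in the closing comment are illustrative assumptions, not the paper's configuration.

    ```python
    # Autoencoder whose trained encoder later serves as the feature extractor
    # for the detection model, as described in the abstract.
    import torch
    import torch.nn as nn

    class TxAutoencoder(nn.Module):
        def __init__(self, n_features=32, latent_dim=8):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                         nn.Linear(16, latent_dim))
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                         nn.Linear(16, n_features))

        def forward(self, x):
            z = self.encoder(x)            # low-dimensional transaction features
            return self.decoder(z), z

    # After training on real plus WGAN-generated transactions, freeze the encoder
    # and feed z into the detection model; a simple baseline is to flag samples
    # whose reconstruction error exceeds a threshold chosen on validation data.
    ```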

  • research-article
    Quntao Zhu, Mengfan Li, Yuanjun Gao, Yao Wan, Xuanhua Shi, Hai Jin

    Knowledge graph (KG) representation learning aims to map entities and relations into a low-dimensional representation space, showing significant potential in many tasks. Existing approaches fall into two categories: (1) graph-based approaches encode KG elements into vectors using structural score functions; (2) text-based approaches embed text descriptions of entities and relations via pre-trained language models (PLMs), further fine-tuned with triples. We argue that graph-based approaches struggle with sparse data, while text-based approaches face challenges with complex relations. To address these limitations, we propose a unified Text-Augmented Attention-based Recurrent Network, bridging the gap between graphs and natural language. Specifically, we employ a graph attention network based on local influence weights to model local structural information and utilize PLM-based prompt learning to learn textual information, enhanced by a mask-reconstruction strategy based on global influence weights and textual contrastive learning for improved robustness and generalizability. In addition, to effectively model multi-hop relations, we propose a novel semantic-depth guided path extraction algorithm and integrate cross-attention layers into recurrent neural networks to facilitate learning long-term relation dependencies and to offer an adaptive attention mechanism for variable-length information. Extensive experiments demonstrate that our model exhibits superiority over existing models across KG completion and question-answering tasks.
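
    The sketch below illustrates only the "cross-attention inside a recurrent network" idea: at each step a GRU hidden state attends over the embeddings of an extracted relation path. The dimensions, the additive fusion of context and input, and the class name are assumptions, not the paper's architecture.

    ```python
    # Toy cross-attention recurrent encoder for a relation path.
    import torch
    import torch.nn as nn

    class CrossAttnGRU(nn.Module):
        def __init__(self, d_model=64, n_heads=4):
            super().__init__()
            self.cell = nn.GRUCell(d_model, d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

        def forward(self, path_emb):           # path_emb: (batch, path_len, d_model)
            b, length, d = path_emb.shape
            h = path_emb.new_zeros(b, d)
            for t in range(length):
                # Query with the current hidden state over the whole path.
                ctx, _ = self.attn(h.unsqueeze(1), path_emb, path_emb)
                h = self.cell(path_emb[:, t] + ctx.squeeze(1), h)
            return h                           # representation of the variable-length path
    ```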

  • research-article
    Yongdan Wang, Haibin Zhang, Baohan Huang, Zhijun Lin, Chuan Pang

    The stock market is a vital component of the financial sector. Due to the inherent uncertainty and volatility of the stock market, stock price prediction has always been both intriguing and challenging. To improve the accuracy of stock predictions, we construct a model that integrates investor sentiment with Long Short-Term Memory (LSTM) networks. By extracting sentiment data from the “Financial Post” and quantifying it with the VADER sentiment lexicon, we add a sentiment index to improve stock price forecasting. We combine sentiment factors with traditional trading indicators, making predictions more accurate. Furthermore, we deploy our system on the blockchain to enhance data security, reduce the risk of malicious attacks, and improve system robustness. This integration of sentiment analysis and blockchain offers a novel approach to stock market prediction, providing secure and reliable decision support for investors and financial institutions. We deploy the system and demonstrate that it is both efficient and practical: for 312 bytes of stock data, we achieve a latency of 434.42 ms with one node and 565.69 ms with five nodes; for 1700 bytes of sentiment data, we achieve a latency of 1405.25 ms with one node and 1750.25 ms with five nodes.
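
    The abstract names the VADER lexicon; the snippet below shows only one illustrative way to turn a day's headlines into a single sentiment index with the open-source vaderSentiment package before joining it with price features for the LSTM. The averaging rule is an assumption, not the paper's feature design.

    ```python
    # pip install vaderSentiment
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    analyzer = SentimentIntensityAnalyzer()

    def daily_sentiment_index(headlines):
        """Average VADER compound score over a day's headlines, in [-1, 1]."""
        if not headlines:
            return 0.0
        scores = [analyzer.polarity_scores(h)["compound"] for h in headlines]
        return sum(scores) / len(scores)

    # Example: daily_sentiment_index(["Shares rally on strong earnings",
    #                                 "Regulator opens probe into lender"])
    ```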

  • research-article
    Zulfiqar Ali Khan, Izzatdin Abdul Aziz

    Cloud computing has been the core infrastructure for providing services to the offloaded workloads from IoT devices. However, for time-sensitive tasks, reducing end-to-end delay is a major concern. With advancements in the IoT industry, the computation requirements of incoming tasks at the cloud are escalating, resulting in compromised quality of service. Fog computing emerged to alleviate such issues. However, the resources at the fog layer are limited and require efficient usage. The Whale Optimization Algorithm is a promising meta-heuristic algorithm extensively used to solve various optimization problems. However, being an exploitation-driven technique, its exploration potential is limited, resulting in reduced solution diversity, local optima, and poor convergence. To address these issues, this study proposes a dynamic opposition learning approach to enhance the Whale Optimization Algorithm to offload independent tasks. Opposition-Based Learning (OBL) has been extensively used to improve the exploration capability of the Whale Optimization Algorithm. However, it is computationally expensive and requires efficient utilization of appropriate OBL strategies to fully realize its advantages. Therefore, our proposed algorithm employs three OBL strategies at different stages to minimize end-to-end delay and improve load balancing during task offloading. First, basic OBL and quasi-OBL are employed during population initialization. Then, the proposed dynamic partial-opposition method enhances search space exploration using an information-based triggering mechanism that tracks the status of each agent. The results illustrate significant performance improvements by the proposed algorithm compared to SACO, PSOGA, IPSO, and oppoCWOA using the NASA Ames iPSC and HPC2N workload datasets.
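
    The sketch below covers only the initialization stage named in the abstract, using the standard definitions of basic OBL and quasi-OBL; the proposed dynamic partial-opposition mechanism and the whale update itself are not shown, and the bounds and fitness function are placeholders.

    ```python
    # Opposition-based population initialization (minimization assumed).
    import numpy as np

    rng = np.random.default_rng(0)

    def obl(x, lo, hi):
        """Basic opposition: reflect each coordinate across the search bounds."""
        return lo + hi - x

    def quasi_obl(x, lo, hi):
        """Quasi-opposition: random point between the bounds' midpoint and the opposite point."""
        centre = (lo + hi) / 2.0
        opposite = obl(x, lo, hi)
        return rng.uniform(np.minimum(centre, opposite), np.maximum(centre, opposite))

    def init_population(n, dim, lo, hi, fitness):
        pop = rng.uniform(lo, hi, size=(n, dim))
        candidates = np.vstack([pop, obl(pop, lo, hi), quasi_obl(pop, lo, hi)])
        order = np.argsort([fitness(c) for c in candidates])
        return candidates[order[:n]]    # keep the n fittest of originals and oppositions
    ```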

  • research-article
    Xiao Wang, Yanqi Zhao, Lingyue Zhang, Min Xie, Yong Yu, Huilin Li

    With the emergence of illegal behaviors such as money laundering and extortion, the regulation of privacy-preserving cryptocurrencies has become increasingly important. However, existing regulated privacy-preserving cryptocurrencies usually rely on a single regulator, which seriously threatens users’ privacy once the regulator is corrupted. To address this issue, we propose a linkable group signature scheme against malicious regulators (ALGS) for regulated privacy-preserving cryptocurrencies. Specifically, a set of regulators work together to regulate users’ behavior during cryptocurrency transactions. Even if a certain number of regulators are corrupted, our scheme still ensures the identity security of a legal user. Meanwhile, our scheme can prevent double-spending during cryptocurrency transactions. We first propose the model of ALGS and define its security properties. Then, we present a concrete construction of ALGS, which provides CCA-2 anonymity, traceability, non-frameability, and linkability. We finally evaluate our ALGS scheme and report its advantages by comparing it with other schemes. The implementation results show that the runtime of our signature algorithm is reduced by 17% compared to Emura et al. (2017) and 49% compared to KSS19 (Krenn et al. 2019), while the verification time is reduced by 31% compared to Emura et al. and 47% compared to KSS19.

  • research-article
    Yanqi Zhao, Jie Zhang, Xiaoyi Yang, Minghong Sun, Yuxin Zhang, Yong Yu, Huilin Li

    Monero uses ring signatures to protect users’ privacy. However, Monero’s anonymity can also shield illicit activities such as money laundering, since it becomes difficult to identify and punish malicious users. Therefore, it is necessary to regulate illegal transactions while protecting the privacy of legal users. We present a revocable linkable ring signature scheme (RLRS), which balances privacy and supervision for privacy-preserving blockchain transactions. By introducing the role of a revocation authority, we can trace malicious users and revoke them in time. We define the security model of the revocable linkable ring signature and give a concrete construction of RLRS. We employ an accumulator and ElGamal encryption to achieve the functionalities of revocation and tracing. In addition, we compress the ring signature size to the logarithmic level by using non-interactive sum arguments of knowledge (NISA). Then, we prove the security of RLRS, which satisfies anonymity, unforgeability, linkability, and non-frameability. Lastly, we compare RLRS with other ring signature schemes. RLRS is linkable, traceable, and revocable, with logarithmic communication complexity and less computational overhead. We also implement the RLRS scheme; the results show that its verification time is 1.5 s with 500 ring members.

  • research-article
    Hao Yu, Guijuan Wang, Anming Dong, Yubing Han, Yawei Wang, Jiguo Yu

    With the growth of the Internet of Things (IoT), millions of users, devices, and applications compose a complex and heterogeneous network, which increases the complexity of digital identity management. Traditional centralized digital identity management systems (DIMS) suffer from single points of failure and privacy leakage. The emergence of blockchain technology presents an opportunity for DIMS to handle the single point of failure problem associated with centralized architectures. However, the transparency inherent in blockchain technology still exposes DIMS to privacy leakage. In this paper, we propose the privacy-protected IoT DIMS (PPID), a novel blockchain-based distributed identity system that protects the privacy of on-chain identity data. The PPID achieves unlinkability across identity, credential, and verification. Specifically, the PPID adopts a Zero-Knowledge Proof (ZKP) algorithm and Shamir secret sharing (SSS) to safeguard privacy, resist replay attacks, and ensure data integrity. Finally, we evaluate the performance of ZKP computation in PPID, as well as the transaction fees of smart contracts on the Ethereum blockchain.
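
    Shamir secret sharing (SSS) is a standard primitive, so a minimal (t, n) split-and-reconstruct sketch over a prime field is given below; how PPID combines SSS with its ZKP component and on-chain data is not described in the abstract and is not reproduced here.

    ```python
    # Textbook Shamir secret sharing over a prime field.
    import random

    PRIME = 2**127 - 1  # Mersenne prime, large enough for toy secrets

    def split(secret, n, t):
        """Split `secret` into n shares; any t of them reconstruct it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation of the polynomial at x = 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = split(123456789, n=5, t=3)
    assert reconstruct(shares[:3]) == 123456789
    ```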

  • research-article
    Duaa S. Alqattan, Vaclav Snasel, Rajiv Ranjan, Varun Ojha

    Hierarchical Federated Learning (HFL) extends traditional Federated Learning (FL) by introducing multi-level aggregation in which model updates pass through clients, edge servers, and a global server. While this hierarchical structure enhances scalability, it also increases vulnerability to adversarial attacks (such as data poisoning and model poisoning) that disrupt learning by introducing discrepancies at the edge server level. These discrepancies propagate through aggregation, affecting model consistency and overall integrity. Existing studies on adversarial behaviour in FL primarily rely on single-metric approaches (such as cosine similarity or Euclidean distance) to assess model discrepancies and filter out anomalous updates. However, these methods fail to capture the diverse ways adversarial attacks influence model updates, particularly in highly heterogeneous data environments and hierarchical structures. Attackers can exploit the limitations of single-metric defences by crafting updates that seem benign under one metric while remaining anomalous under another. Moreover, prior studies have not systematically analysed how model discrepancies evolve over time, vary across regions, or affect clustering structures in HFL architectures. To address these limitations, we propose the Model Discrepancy Score (MDS), a multi-metric framework that integrates Dissimilarity, Distance, Uncorrelation, and Divergence to provide a comprehensive analysis of how adversarial activity affects model discrepancies. Through temporal, spatial, and clustering analyses, we examine how attacks affect model discrepancies at the edge server level in 3LHFL and 4LHFL architectures and evaluate MDS’s ability to distinguish between benign and malicious servers. Our results show that while 4LHFL effectively mitigates discrepancies in regional attack scenarios, it struggles with distributed attacks due to additional aggregation layers that obscure distinguishable discrepancy patterns over time, across regions, and within clustering structures. Factors influencing detection include data heterogeneity, attack sophistication, and hierarchical aggregation depth. These findings highlight the limitations of single-metric approaches and emphasize the need for multi-metric strategies such as MDS to enhance HFL security.
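
    The abstract lists four metric families but not their formulas or how MDS combines them; the snippet below shows one plausible per-pair computation (cosine dissimilarity, Euclidean distance, one minus Pearson correlation, and a KL divergence over softmax-normalized updates), all of which are assumptions rather than the paper's definitions.

    ```python
    # Four complementary discrepancy measures between two flattened model updates.
    import numpy as np

    def _softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def discrepancy_metrics(u: np.ndarray, v: np.ndarray) -> dict:
        p, q = _softmax(u), _softmax(v)
        return {
            "dissimilarity": 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)),
            "distance": float(np.linalg.norm(u - v)),
            "uncorrelation": 1.0 - np.corrcoef(u, v)[0, 1],
            "divergence": float(np.sum(p * np.log(p / q))),  # KL(p || q)
        }
    ```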

  • research-article
    Chuxiao Su, Jing Wu, Rui Zhang, Zi Kang, Hui Xia, Cheng Zhang

    Federated learning has emerged as a popular paradigm for distributed machine learning, enabling participants to collaborate on model training while preserving local data privacy. However, a key challenge in deploying federated learning in real-world applications arises from the substantial heterogeneity in local data distributions across participants. These differences can have negative consequences, such as degraded performance of aggregated models. To address this issue, we propose a novel approach that advocates decomposing the skewed original task into a series of relatively balanced subtasks. Decomposing the task allows us to derive unbiased feature extractors for the subtasks, which are then utilized to solve the original task. Based on this concept, we have developed the FedBS algorithm. Through comparative experiments on various datasets, we have demonstrated that FedBS outperforms traditional federated learning algorithms such as FedAvg and FedProx in terms of accuracy, convergence speed, and robustness. The main reason behind these improvements is that FedBS addresses the data heterogeneity problem in federated learning by decomposing the original task into smaller, more balanced subtasks, thereby more effectively mitigating imbalances during model training.
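
    FedBS's actual decomposition rule is not given in the abstract; the snippet below only illustrates the general idea of splitting a skewed label space into subtasks with roughly balanced sample counts, using a greedy assignment that is purely an assumption.

    ```python
    # Greedily group classes into subtasks so that each subtask's total sample
    # count is roughly balanced (illustration of the decomposition idea only).
    from collections import Counter

    def balanced_subtasks(labels, num_subtasks=3):
        counts = Counter(labels)                       # samples per class
        groups = [[] for _ in range(num_subtasks)]
        totals = [0] * num_subtasks
        for cls, cnt in sorted(counts.items(), key=lambda kv: -kv[1]):
            i = totals.index(min(totals))              # currently lightest subtask
            groups[i].append(cls)
            totals[i] += cnt
        return groups

    # balanced_subtasks([0, 0, 0, 0, 1, 1, 2, 2, 2, 3], num_subtasks=2)
    # -> [[0, 3], [2, 1]]
    ```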

  • research-article
    Taehoon Kim, Dahee Seo, Im-Yeong Lee, Su-Hyun Kim
    2025, 5(4): 100326-100326. https://doi.org/10.1016/j.hcc.2025.100326

    This paper proposes a novel scheme that enhances privacy and ensures accountability by mitigating signature-based correlation risks in decentralized identifiers (DIDs). Existing DIDs often rely on traditional digital signatures, making them vulnerable to attacks that link user identities across transactions. Our proposed scheme leverages attribute-based signatures (ABS) to provide anonymous authentication, preventing such correlation and protecting user privacy. To deter the abuse of anonymity, it incorporates a traceability mechanism, enabling authorized entities to trace a user’s DID when necessary. The scheme’s security, including anonymity and traceability, is formally proven under the random oracle model.