2025-06-15, Volume 11, Issue 3

  • Faiq Ahmad Khan , Zainab Umar , Alireza Jolfaei , Muhammad Tariq

    In the field of precision healthcare, where accurate decision-making is paramount, this study underscores the indispensability of eXplainable Artificial Intelligence (XAI) in the context of epilepsy management within the Internet of Medical Things (IoMT). The methodology entails meticulous preprocessing, involving the application of a band-pass filter and epoch segmentation to optimize the quality of Electroencephalogram (EEG) data. The subsequent extraction of statistical features facilitates the differentiation between seizure and non-seizure patterns. The classification phase integrates Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest classifiers. Notably, SVM attains an accuracy of 97.26%, excelling in precision, recall, specificity, and F1 score for identifying seizure and non-seizure instances. Conversely, KNN achieves an accuracy of 72.69%, accompanied by certain trade-offs. The Random Forest classifier stands out with a remarkable accuracy of 99.89%, coupled with exceptional precision (99.73%), recall (100%), specificity (99.80%), and F1 score (99.86%), surpassing both SVM and KNN. XAI techniques, namely Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), enhance the system's transparency. This combination of machine learning and XAI not only improves the reliability and accuracy of the seizure detection system but also enhances trust and interpretability. Healthcare professionals can leverage the identified important features and their dependencies to gain deeper insights into the decision-making process, aiding informed diagnosis and treatment decisions for patients with epilepsy.
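
    As a minimal illustrative sketch (not the authors' code; the sampling rate, cutoff frequencies, epoch length, and feature set are assumptions), the described pipeline of band-pass filtering, epoch segmentation, statistical feature extraction, and Random Forest classification can be outlined as follows:

```python
# Illustrative pipeline: band-pass filter -> epochs -> features -> classifier.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import RandomForestClassifier

def bandpass(eeg, fs=256, lo=0.5, hi=40.0, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg)

def epochs(eeg, fs=256, sec=2):
    n = fs * sec
    return [eeg[i:i + n] for i in range(0, len(eeg) - n + 1, n)]

def features(ep):
    return [ep.mean(), ep.std(), ep.min(), ep.max(),
            np.percentile(ep, 25), np.percentile(ep, 75)]

rng = np.random.default_rng(0)
signal = bandpass(rng.standard_normal(256 * 60))   # one minute of toy "EEG"
X = np.array([features(e) for e in epochs(signal)])
y = rng.integers(0, 2, len(X))                     # placeholder seizure labels
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# SHAP or LIME can then be applied to `clf` to explain per-epoch predictions.
```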

  • Shashank Srivastava , Kartikeya Kansal , Siva Sai , Vinay Chamola

    Millions of people throughout the world struggle with mental health disorders, but the widespread stigma associated with these issues often prevents them from seeking treatment. We propose a novel strategy that integrates the Internet of Medical Things (IoMT), DAG-based Hedera technology, and Artificial Intelligence (AI) to overcome these challenges. We also consider the costs of chronic diseases such as Parkinson's and Alzheimer's, which often require 24-hour care. Using smart monitoring tools coupled with AI algorithms that can detect early indicators of deterioration, our system aims to provide low-cost, continuous support. Since IoMT data is large in volume, we need a blockchain network with high transaction throughput that does not compromise the privacy of patient data. To address this concern, we propose to use Hedera technology to ensure the privacy and security of personal mental health information, along with scalability and a faster transaction confirmation rate. Overall, this research paper outlines a holistic approach to mental health monitoring that respects privacy, promotes accessibility, and harnesses the potential of emerging technologies. By combining IoMT, Hedera, and AI, we offer a solution that helps break down the barriers preventing individuals from seeking mental well-being support. Furthermore, comparative analysis shows that our best-performing ML models achieve an accuracy of around 98%, which is more than 30% better than traditional models such as logistic regression.

  • Peng Liu , Wenhua Qian , Huaguang Li , Jinde Cao

    In recent years, deep learning has made significant advancements in skin cancer diagnosis. However, most methods prioritize high prediction accuracy without considering the limitations of computational resources, making them impractical for wearable devices. In this context, knowledge distillation has emerged as an effective method, capable of significantly reducing a model's reliance on computational and storage resources. Nonetheless, previous research suffers from two limitations: 1) the student model can only passively receive knowledge from the teacher model, and 2) the teacher model does not effectively model sample relationships during training, potentially hindering the effective transfer of sample relationship-related knowledge during knowledge distillation. To address these issues, we employ two identical student models, each equipped with a sample relationship module. This design ensures that the student models can learn mutually while modeling sample relationships. We conducted extensive experiments on the ISIC 2019 dataset to validate the effectiveness of our method. The results demonstrate that our approach significantly improves the recognition of various types of skin diseases. Compared to state-of-the-art methods, our approach exhibits higher accuracy and better generalization capabilities.
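
    A minimal sketch of the mutual-learning idea, under stated assumptions (linear stand-ins for the two student CNNs, temperature 2.0; the paper's sample relationship module is omitted): each student minimizes cross-entropy plus a KL term toward the other student's softened predictions.

```python
# Mutual learning between two students: CE loss plus a distillation KL term.
import torch
import torch.nn.functional as F

def mutual_loss(logits_a, logits_b, labels, T=2.0):
    ce = F.cross_entropy(logits_a, labels)
    kl = F.kl_div(F.log_softmax(logits_a / T, dim=1),
                  F.softmax(logits_b / T, dim=1).detach(),  # peer as teacher
                  reduction="batchmean") * T * T
    return ce + kl

student_a = torch.nn.Linear(16, 8)   # stand-ins for the two student CNNs
student_b = torch.nn.Linear(16, 8)
x = torch.randn(32, 16)
y = torch.randint(0, 8, (32,))
loss_a = mutual_loss(student_a(x), student_b(x), y)
loss_b = mutual_loss(student_b(x), student_a(x), y)
(loss_a + loss_b).backward()         # each student also teaches the other
```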

  • Bo Fang , Zhaocheng Yu , Li-bo Zhang , Yue Teng , Junxin Chen

    Wearable signal analysis is an important technology for monitoring physiological signals without interfering with an individual's daily behavior. As detecting cardiovascular diseases can dramatically reduce mortality, arrhythmia recognition using ECG signals has attracted much attention. In this paper, we propose a single-channel convolutional neural network to detect Atrial Fibrillation (AF) based on ECG signals collected by wearable devices. It contains three primary modules. In the preprocessing module, all recordings are uniformly sized, normalized, and Butterworth low-pass filtered for noise removal. In the feature extraction module, the preprocessed ECG signals are fed into convolutional layers containing large kernels. In the classification module, a fully connected layer outputs class probabilities. During training, the output of the previous pooling layer is concatenated with the vectors of the convolutional layer as a new feature map to reduce feature loss. Numerous comparison and ablation experiments are performed on the 2017 PhysioNet/CinC Challenge dataset, demonstrating the superiority of the proposed method.
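
    A minimal sketch of the described preprocessing module, assuming the 300 Hz sampling rate of the 2017 PhysioNet/CinC recordings; the target length, cutoff frequency, and filter order are illustrative assumptions:

```python
# Preprocessing: uniform length, normalization, Butterworth low-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(ecg, fs=300, target_len=9000, cutoff=45.0, order=4):
    x = np.resize(np.asarray(ecg, dtype=float), target_len)  # uniform size
    x = (x - x.mean()) / (x.std() + 1e-8)                    # normalize
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, x)                                 # denoise

rng = np.random.default_rng(1)
clean = preprocess(rng.standard_normal(7500))  # ~25 s recording -> 30 s frame
```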

  • Patrick Vermander , Aitziber Mancisidor , Raffaele Gravina , Itziar Cabanes , Giancarlo Fortino

    Detecting sitting posture abnormalities in wheelchair users enables early identification of changes in their functional status. To date, this detection has relied on in-person observation by medical specialists. However, given the challenges health specialists face in carrying out continuous monitoring, the development of an intelligent anomaly detection system is proposed. Unlike previous works, which use supervised techniques, this work proposes unsupervised techniques due to the advantages they offer, including the absence of prior data labeling and the detection of previously uncontemplated anomalies. In the present work, an individualized methodology consisting of two phases is developed: characterizing the normal sitting pattern and determining abnormal samples. An analysis of different unsupervised techniques has been carried out to study which are more suitable for postural diagnosis. It can be concluded, among other aspects, that the utilization of dimensionality reduction techniques leads to improved results. Moreover, the normality characterization phase is deemed necessary for enhancing the system's learning capabilities. Additionally, employing an individualized approach to the model aids in capturing the particularities of the various pathologies present among subjects.
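
    The two-phase idea can be sketched as follows, with PCA and a One-Class SVM as illustrative (not the paper's) choices: the normality characterization phase fits on anomaly-free samples, and the detection phase flags abnormal ones.

```python
# Two-phase unsupervised anomaly detection with dimensionality reduction.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
normal = rng.normal(0.0, 1.0, size=(500, 64))     # phase 1: normal sitting
new = rng.normal(0.5, 1.5, size=(50, 64))         # phase 2: incoming samples

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),        # dimensionality reduction
                      OneClassSVM(nu=0.05, gamma="scale"))
model.fit(normal)                                  # learn the normal pattern
flags = model.predict(new)                         # -1 marks abnormal posture
```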

  • Adeel Akram , Muhammad Bilal Khan , Najah Abed Abu Ali , Qixing Zhang , Awais Ahmad , Muhammad Shahid Iqbal , Syed Atif Moqurrab

    The global increase in life expectancy poses challenges related to the safety and well-being of the elderly population, especially in relation to falls. While falls can lead to significant cognitive impairments, timely intervention can mitigate their adverse effects. In this context, the need for non-invasive, efficient monitoring systems becomes paramount. Although wearable sensors have gained traction for monitoring health activities, they may cause discomfort during prolonged use, especially for the elderly. To address this issue, we present an intelligent, non-invasive Software-Defined Radio Frequency (SDRF) sensing system, tailored for monitoring elderly people's falls during routine activities. Harnessing the power of deep learning and machine learning, our system processes the Wireless Channel State Information (WCSI) generated during regular and fall activities. By employing sophisticated signal processing techniques, the system captures unique patterns that distinguish falls from normal activities. In addition, we use statistical features to streamline data processing, thereby optimizing the computational efficiency of the system. Our experiments, conducted in a typical home environment using a treadmill, demonstrate the robustness of the system. The results show high classification accuracies of 92.5%, 95.1%, and 99.8% for three Artificial Intelligence (AI) algorithms. Notably, the SDRF-based approach offers flexibility, cost-effectiveness, and adaptability through software modifications, circumventing the need for hardware overhauls. This research attempts to bridge the gap in RF-based sensing for elderly fall monitoring, providing a solution that combines the benefits of non-invasiveness with the precision of deep learning and machine learning.

  • Md. Sakib Bin Alam , Aiman Lameesa , Senzuti Sharmin , Shaila Afrin , Shams Forruque Ahmed , Mohammad Reza Nikoo , Amir H. Gandomi

    Deep Learning (DL) offers promising solutions for analyzing wearable signals and gaining valuable insights into cognitive disorders. While previous review studies have explored various aspects of DL in cognitive healthcare, there remains a lack of comprehensive analysis that integrates wearable signals, data processing techniques, and the broader applications, benefits, and challenges of DL methods. Addressing this limitation, our study provides an extensive review of DL's role in cognitive healthcare, with a particular emphasis on wearables, data processing, and the inherent challenges in this field. This review also highlights the considerable promise of DL approaches in addressing a broad spectrum of cognitive issues. By enhancing the understanding and analysis of wearable signal modalities, DL models can achieve remarkable accuracy in cognitive healthcare. Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-term Memory (LSTM) networks have demonstrated improved performance and effectiveness in the early diagnosis and progression monitoring of neurological disorders. Beyond cognitive impairment detection, DL has been applied to emotion recognition, sleep analysis, stress monitoring, and neurofeedback. These applications lead to advanced diagnosis, personalized treatment, early intervention, assistive technologies, remote monitoring, and reduced healthcare costs. Nevertheless, the integration of DL and wearable technologies presents several challenges, such as data quality, privacy, interpretability, model generalizability, ethical concerns, and clinical adoption. These challenges emphasize the importance of conducting future research in areas such as multimodal signal analysis and explainable AI. The findings of this review aim to benefit clinicians, healthcare professionals, and society by facilitating better patient outcomes in cognitive healthcare.

  • Jie Huang , Cheng Yang , Fan Yang , Shilong Zhang , Amr Tolba , Alireza Jolfaei , Keping Yu

    With the development of the future Web of Healthcare Things (WoHT), there will be a trend toward densely deploying medical sensors with massive simultaneous online communication requirements. The dense deployment and simultaneous online communication of massive numbers of medical sensors will inevitably generate overlapping interference, making it extremely challenging to support data transmission at a medical-grade quality-of-service level. To handle this challenge, this paper proposes a hypergraph interference coordination-aided resource allocation method based on Deep Reinforcement Learning (DRL). Specifically, we build a novel hypergraph interference model for the considered WoHT by analyzing the impact of the overlapping interference. Due to the high complexity of directly solving the hypergraph interference model, the original resource allocation problem is converted into a sequential decision-making problem through the Markov Decision Process (MDP) modeling method. Then, a policy- and value-based resource allocation algorithm is proposed to solve this problem under simultaneous online communication and dense deployment. In addition, to enhance the agent's ability to explore the optimal allocation strategy, we propose a resource allocation algorithm with an asynchronous parallel architecture. Simulation results verify that the proposed algorithms achieve higher network throughput than existing algorithms in the considered WoHT scenario.
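
    As a highly simplified stand-in for the paper's deep RL method (tabular Q-learning on a toy interference model; all dynamics here are synthetic assumptions), the sequential decision view can be sketched as assigning one channel per sensor and rewarding low overlap:

```python
# Toy MDP: assign channels to sensors one at a time; overlap lowers reward.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_channels = 6, 3
Q = np.zeros((n_sensors, n_channels))

def reward(channel, assignment):
    interferers = sum(1 for c in assignment.values() if c == channel)
    return 1.0 / (1.0 + interferers)     # more overlap -> lower throughput

for episode in range(2000):
    assignment = {}
    for s in range(n_sensors):           # one allocation decision per step
        a = rng.integers(n_channels) if rng.random() < 0.1 else int(Q[s].argmax())
        r = reward(a, assignment)
        nxt = Q[s + 1].max() if s + 1 < n_sensors else 0.0
        Q[s, a] += 0.1 * (r + 0.9 * nxt - Q[s, a])   # Q-learning update
        assignment[s] = a

print("learned channel per sensor:", Q.argmax(axis=1))
```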

  • Xuejing Liu , Hongli Liu , Xingyu Liang , Yuchao Yao

    Taxation, the primary source of fiscal revenue, has profound implications for guiding resource allocation, promoting economic growth, adjusting social wealth distribution, and enhancing cultural influence. The development of e-taxation provides enhanced security for taxation, but it still faces the risks of inefficiency and tax data leakage. As a decentralized ledger, blockchain provides an effective solution for protecting tax data and avoiding tax-related errors and fraud. The introduction of blockchain into e-taxation protocols can ensure the public verification of taxes. However, balancing taxpayer identity privacy with regulation remains a challenge. In this paper, we propose a blockchain-based anonymous and regulatory e-taxation protocol. This protocol ensures the supervision and tracking of malicious taxpayers while maintaining honest taxpayers' identity privacy, reduces the storage needs for public key certificates in the public key infrastructure, and enables self-certification of taxpayers' public keys and addresses. We formalize the security model of unforgeability for transactions, anonymity for honest taxpayers, and traceability for malicious taxpayers. Security analysis shows that the proposed protocol satisfies unforgeability, anonymity, and traceability. Experimental results on time consumption show that the protocol is feasible in practical applications.

  • Yixing Zhu , Huilin Li , Mengze Li , Yong Yu

    The adaptor signature, a new primitive that alleviates the scalability issue of blockchain to some extent, has been widely adopted in off-chain payment channels and atomic swaps. As an extension of the standard digital signature, an adaptor signature can bind the release of a complete digital signature to the exchange of a secret value. Existing constructions of adaptor signatures are mainly based on the Schnorr or ECDSA signature algorithms, which suffer from low signing efficiency and long signature lengths. In this paper, to address these issues, we propose a new construction of adaptor signature using randomized EdDSA, which has a Schnorr-like structure with higher signing efficiency and a shorter signature length. We prove the required security properties, including unforgeability, witness extractability, and pre-signature adaptability, of the new adaptor signature scheme in the random oracle model. We conduct a comparative analysis with an ECDSA-based adaptor signature scheme to demonstrate the effectiveness and feasibility of our new proposal.
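
    An educational toy (insecure parameters; a Schnorr-style construction is shown, whereas the paper uses randomized EdDSA) illustrating how an adaptor signature binds the completed signature to the release of a secret value t:

```python
# Toy Schnorr-style adaptor signature over a tiny prime-order group (insecure).
import hashlib

p, q, g = 23, 11, 4          # g has prime order q in Z_p*

def H(*parts):
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

x, X = 3, pow(g, 3, p)       # signer's key pair
t, T = 5, pow(g, 5, p)       # secret value and its public statement
k, R = 7, pow(g, 7, p)       # signing nonce
c = H(R * T % p, X, "msg")   # challenge binds the statement T

s_pre = (k + c * x) % q                                 # pre-signature
assert pow(g, s_pre, p) == R * pow(X, c, p) % p         # pre-verification

s = (s_pre + t) % q                                     # adapt with secret t
assert pow(g, s, p) == (R * T % p) * pow(X, c, p) % p   # standard verification

assert (s - s_pre) % q == t   # witness extraction: the secret is revealed
```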

  • Lingyue Zhang , Zongyang Zhang , Tianyu Li , Shancheng Zhang

    Intelligent blockchain is an emerging field that integrates Artificial Intelligence (AI) techniques with blockchain networks, with a particular emphasis on improving the performance of blockchain, especially in cryptocurrency exchanges. Meanwhile, arbitrage bots are widely deployed and increasingly prevalent in intelligent blockchains. These bots exploit the characteristics of cryptocurrency exchanges to engage in frontrunning, generating substantial profits at the expense of ordinary users. In this paper, we address this issue by proposing a more efficient asynchronous Byzantine ordered consensus protocol, which can be used to prevent arbitrage bots from reordering transactions for profit in intelligent blockchain-based cryptocurrencies. Specifically, we present two signal asynchronous common subset protocols, the better of which achieves constant time complexity. We implement both our protocol and the optimal existing solution, Chronos, in the Go language in the same environment. The experimental results indicate that our protocols achieve a threefold improvement over Chronos in consensus latency and nearly a tenfold increase in throughput.

  • Jin Sun , Lu Wang , Mengna Kang , Kexin Ye

    Multi-keyword sorted searchable encryption is a practical and secure data processing technique. However, most existing schemes require each data owner to calculate and store Inverse Document Frequency (IDF) values and then dynamically summarize them into a global IDF value. This not only hinders efficient sharing of massive data but may also cause privacy disclosure. Additionally, using a cloud server as a storage and computing center can compromise file integrity and create a single point of failure. To address these challenges, our proposal leverages the complex interactive environment and massive data scenarios of the supply chain to introduce a fast and accurate multi-keyword search scheme based on blockchain technology. Specifically, encrypted files are first stored in the InterPlanetary File System (IPFS), while secure indexes are stored in a blockchain to eliminate single points of failure. Moreover, we employ homomorphic encryption algorithms to design a blockchain-based index tree that enables dynamic adaptive calculation of IDF values, dynamic updates of indexes, and multi-keyword sorted search capabilities. Notably, we have specifically designed a two-round sorted search mode called “Match Sort + Score Sort” to achieve fast and accurate search performance. Furthermore, fair payment contracts have been implemented on the blockchain to incentivize data sharing activities. Through rigorous security analysis and comprehensive performance evaluation tests, our scheme has been proven effective as well as practical.
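
    A plaintext sketch of the two-round “Match Sort + Score Sort” idea (no encryption; the IDF-weighted score is an illustrative assumption): rank first by the number of matched query keywords, then break ties by relevance score.

```python
# Two-round ranking: match count first, IDF-weighted score second.
import math

docs = {"f1": ["ship", "port", "cargo"],
        "f2": ["ship", "cargo"],
        "f3": ["port", "invoice", "cargo"]}
query = ["ship", "cargo"]

n = len(docs)
idf = {w: math.log(n / sum(w in kws for kws in docs.values()))
       for kws in docs.values() for w in kws}

def match_count(kws):                  # round 1: match sort
    return sum(w in kws for w in query)

def score(kws):                        # round 2: score sort
    return sum(idf[w] for w in query if w in kws)

ranked = sorted(docs, key=lambda f: (match_count(docs[f]), score(docs[f])),
                reverse=True)
print(ranked)   # more matched keywords first, then higher relevance scores
```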

  • Kaiyin Zhu , Mingming Lu , Haifeng Li , Neal N. Xiong , Wenyong He

    As an effective strategy for addressing urban traffic congestion, traffic flow prediction has gained attention from Federated Learning (FL) researchers due to FL's ability to preserve data privacy. However, existing methods face challenges: some are too simplistic to capture complex traffic patterns effectively, while others are overly complex, leading to excessive communication overhead between cloud and edge devices. Moreover, the problem of single points of failure limits their robustness and reliability in real-world applications. To tackle these challenges, this paper proposes a new method, CMBA-FL, a Communication-Mitigated and Blockchain-Assisted Federated Learning model. First, CMBA-FL improves the client model's ability to capture temporal traffic patterns by employing an encoder-decoder framework on each edge device. Second, to reduce the communication overhead during federated learning, we introduce a verification method based on parameter update consistency, avoiding unnecessary parameter updates. Third, to mitigate the risk of a single point of failure, we integrate consensus mechanisms from blockchain technology. To validate the effectiveness of CMBA-FL, we assess its performance on two widely used traffic datasets. Our experimental results show that CMBA-FL reduces prediction error by 11.46%, significantly lowers communication overhead, and improves security.
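
    A minimal sketch of the communication-mitigation idea under stated assumptions (a cosine-similarity consistency rule with threshold 0.9, which may differ from the paper's verification method): an edge device uploads its update only when it is not already consistent with the last global update.

```python
# Skip uploads whose direction is already consistent with the global update.
import numpy as np

def should_upload(local_update, last_global_update, tau=0.9):
    a = local_update.ravel()
    b = last_global_update.ravel()
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return cos < tau          # consistent updates add little; skip them

rng = np.random.default_rng(4)
g = rng.standard_normal(1000)
print(should_upload(g + 0.05 * rng.standard_normal(1000), g))  # False: skip
print(should_upload(rng.standard_normal(1000), g))             # True: send
```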

  • Jialei Zhang , Zheng Yan , Huidong Dong , Peng Zhang

    Cross-domain routing in Integrated Heterogeneous Networks (Inte-HetNet) should ensure efficient and secure data transmission across different network domains by satisfying diverse routing requirements. However, current solutions face numerous challenges in continuously ensuring trustworthy routing, fulfilling diverse requirements, achieving reasonable resource allocation, and safeguarding against malicious behaviors of network operators. We propose CrowdRouting, a novel cross-domain routing scheme based on crowdsourcing, dedicated to establishing sustained trust in cross-domain routing and comprehensively considering and fulfilling various customized routing requirements, while ensuring reasonable resource allocation and effectively curbing malicious behavior of network operators. Concretely, CrowdRouting employs blockchain technology to verify the trustworthiness of border routers in different network domains, thereby establishing sustainable and trustworthy cross-domain routing based on sustained trust in these routers. In addition, CrowdRouting ingeniously integrates a crowdsourcing mechanism into the auction for routing, achieving fair and impartial allocation of routing rights by flexibly embedding various customized routing requirements into each auction phase. Moreover, CrowdRouting leverages incentive mechanisms and routing settlement to encourage network domains to actively participate in cross-domain routing, thereby promoting optimal resource allocation and efficient utilization. Furthermore, CrowdRouting introduces a supervisory agency (e.g., an undercover agent) to effectively suppress the malicious behavior of network operators through games and interactions between the agent and the network operators. Through comprehensive experimental evaluations and comparisons with existing works, we demonstrate that CrowdRouting excels in providing trustworthy and fine-grained customized routing services, stimulating active participation in cross-domain routing, inhibiting malicious operator behavior, and maintaining reasonable resource allocation, outperforming baseline schemes on all counts.

  • Xinzhe Huang , Yujue Wang , Yong Ding , Qianhong Wu , Changsong Yang , Hai Liang

    Immutability is a crucial property for blockchain applications; however, it also leads to problems such as the inability to revise illegal data on the blockchain or delete private data. Although redactable blockchains enable on-chain modification, they suffer from inefficiency and excessive centralization, and the majority of redactable blockchain schemes ignore the difficult problems of traceability and consistency checking. In this paper, we present a Dynamically Redactable Blockchain based on decentralized Chameleon hash (DRBC). Specifically, we propose an Identity-Based Decentralized Chameleon Hash (IDCH) and a Version-Based Transaction structure (VT) to realize the traceability of transaction modifications in a decentralized environment. Then, we propose an efficient block consistency check protocol based on the Bloom filter tree, which realizes the consistency check of transactions with extremely low time and space cost. Security analysis and experimental results demonstrate the reliability of DRBC and its significant advantages in a decentralized environment.
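
    A minimal Bloom filter sketch (sizes and hash scheme are assumptions) of the membership test underlying the Bloom filter tree: absence is answered with certainty, while presence may rarely be a false positive.

```python
# Space-efficient probabilistic membership test, as used per tree node.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def contains(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("tx_42")
print(bf.contains("tx_42"), bf.contains("tx_43"))  # True, (almost surely) False
```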

  • Ning Hui , Qian Sun , Lin Tian , Yuanyuan Wang , Yiqing Zhou

    In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To solve these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. First, it reviews current investigations on computation and wireless resource management and analyzes existing deficiencies and challenges. Then, focusing on these challenges, the work proposes a Mobile Edge Computing (MEC)-based computation resource management scheme and a mixed numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.

  • Junhui Zhao , Ruixing Ren , Yao Wu , Qingmiao Zhang , Wei Xu , Dongming Wang , Lisheng Fan

    To improve the accuracy and efficiency of time-varying channel estimation algorithms for millimeter Wave (mmWave) massive Multiple-Input Multiple-Output (MIMO) systems in Internet of Vehicles (IoV) scenarios, this paper proposes a Deep Learning (DL) algorithm, the Squeeze-and-Excitation Attention Residual Network (SEARNet), which integrates a Squeeze-and-Excitation Attention (SEAttention) mechanism and a residual module. Specifically, SEARNet treats the channel information as an image matrix and embeds an SEAttention module in the residual module to construct the SEAttention-Residual block. Through a data-driven approach, SEARNet can effectively extract key information from the channel matrix using the SEAttention mechanism, thereby reducing noise interference and estimating the channel accurately and efficiently. Simulation results show that, compared to two traditional and two DL channel estimation algorithms, the proposed SEARNet achieves maximum reductions in Normalized Mean Square Error (NMSE) of 97.66% and 84.49% at an SNR of -10 dB, 78.18% at an SNR of 5 dB, and 43.51% at an SNR of 10 dB.
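
    The reported gains are in NMSE; for reference, a minimal sketch of the metric for a channel estimate against the true channel:

```python
# NMSE between a true channel matrix H and an estimate H_hat.
import numpy as np

def nmse(H, H_hat):
    return np.sum(np.abs(H - H_hat) ** 2) / np.sum(np.abs(H) ** 2)

rng = np.random.default_rng(5)
H = rng.standard_normal((32, 64)) + 1j * rng.standard_normal((32, 64))
H_hat = H + 0.1 * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape))
print(10 * np.log10(nmse(H, H_hat)), "dB")   # NMSE is often reported in dB
```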

  • Yuanzhi He , Zhiqiang Li , Zheng Dou

    Given the scarcity of Satellite Frequency and Orbit (SFO) resources, it is of paramount importance to establish a comprehensive knowledge graph of the SFO field (SFO-KG) and employ knowledge reasoning technology to automatically mine available SFO resources. An essential aspect of constructing the SFO-KG is the extraction of Chinese entity relations. Unfortunately, there is currently no publicly available Chinese SFO entity Relation Extraction (RE) dataset. Moreover, publicly available SFO text data contain numerous NA (representing “No Answer”) relation category sentences that resemble sentences of other relation categories and pose challenges for accurate classification, resulting in low recall and precision for the NA relation category in entity RE. Consequently, this issue adversely affects both the accuracy of constructing the knowledge graph and the efficiency of RE processes. To address these challenges, this paper proposes a method for extracting Chinese SFO text entity relations based on dynamic integrated learning. This method includes the construction of a manually annotated Chinese SFO entity RE dataset and a classifier combining features of SFO resource data. The proposed approach combines integrated learning and pre-training models, specifically utilizing Bidirectional Encoder Representations from Transformers (BERT). In addition, it incorporates one-class classification, attention mechanisms, and dynamic feedback mechanisms to improve the performance of the RE model. Experimental results show that the proposed method outperforms traditional methods in terms of F1 score when extracting entity relations from both balanced and long-tailed datasets.

  • Xin Su , Xin Fang , Zhen Cheng , Ziyang Gong , Chang Choi

    Significant breakthroughs in the Internet of Things (IoT) and 5G technologies have driven several smart healthcare activities, leading to a flood of computationally intensive applications in smart healthcare networks. Mobile Edge Computing (MEC) is considered an efficient solution for providing powerful computing capabilities to latency- or energy-sensitive nodes. The low-latency and high-reliability requirements of healthcare application services can be met through optimal offloading and resource allocation for the computational tasks of the nodes. In this study, we establish a system model consisting of two types of nodes, considering nondivisible computational tasks that trade off latency against energy consumption. To minimize the processing cost of the system tasks, a Mixed-Integer Nonlinear Programming (MINLP) task offloading problem is formulated. Furthermore, this problem is decomposed into a task offloading decision problem and a resource allocation problem. The resource allocation problem is solved using traditional optimization algorithms, and the offloading decision problem is solved using a deep reinforcement learning algorithm. We propose an Online Offloading based on Deep Reinforcement Learning (OO-DRL) algorithm with parallel deep neural networks and a weight-sensitive experience replay mechanism. Simulation results show that, compared with several existing methods, our proposed algorithm can perform real-time task offloading in a smart healthcare network under dynamically varying environments and reduce the system task processing cost.

  • Baoping Cheng , Lei Luo , Ziyang He , Ce Zhu , Xiaoming Tao

    Perceptual quality assessment for point clouds is critical for an immersive metaverse experience, and it is a challenging task. First, a point cloud is formed by unstructured 3D points, which makes its topology complex. Second, quality impairment generally involves both geometric attributes and color properties, making the measurement of geometric distortion more complex. We propose a perceptual point cloud quality assessment model that follows the perceptual features of the Human Visual System (HVS) and the intrinsic characteristics of the point cloud. The point cloud is first pre-processed to extract geometric skeleton keypoints with graph filtering-based re-sampling, and local neighboring regions around the geometric skeleton keypoints are constructed by K-Nearest Neighbors (KNN) clustering. For geometric distortion, the Point Feature Histogram (PFH) is extracted as the feature descriptor, and the Earth Mover's Distance (EMD) between the PFHs of the corresponding local neighboring regions in the reference and distorted point clouds is calculated as the geometric quality measurement. For color distortion, the statistical moments between the corresponding local neighboring regions are computed as the color quality measurement. Finally, the global perceptual quality assessment model is obtained as a linear weighted aggregation of the geometric and color quality measurements. Experimental results on extensive datasets show that the proposed method achieves leading performance compared to state-of-the-art methods with less computing time. The experimental results also demonstrate the robustness of the proposed method across various distortion types. The source code is available at https://github.com/llsurreal919/PointCloudQualityAssessment
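
    A simplified sketch of the geometric comparison (1-D histograms of point-pair distances stand in for the multi-dimensional PFH; the bin count and region sizes are assumptions): corresponding local regions are compared by Earth Mover's Distance.

```python
# Compare local feature histograms of reference vs. distorted regions by EMD.
import numpy as np
from scipy.stats import wasserstein_distance

def local_descriptor(points, bins=16):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    hist, _ = np.histogram(d[np.triu_indices(len(points), k=1)],
                           bins=bins, range=(0, 2), density=True)
    return hist

rng = np.random.default_rng(6)
ref = rng.random((64, 3))                         # reference local region
dist = ref + 0.02 * rng.standard_normal((64, 3))  # distorted counterpart
emd = wasserstein_distance(np.arange(16), np.arange(16),
                           local_descriptor(ref), local_descriptor(dist))
print("geometric distortion score:", emd)
```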

  • Wenxin Ma , Weidong Gao , Jiaqi Liu , Kaisa Zhang , Xu Zhao , Bingfeng Cui , Shujuan Sun , Shurong Li

    5G-Advanced (5G-A), an evolutionary iteration of 5G, effectively enhances 5G services. The increasing complexity of downlink service scenarios underscores the need for research into integrating efficient communication with low-carbon solutions. Historically, the emphasis has been on reliability and precision at the expense of power consumption. Although energy-saving technologies such as Idle-mode Discontinuous Reception (IDRX) and Paging Early Indication (PEI) have been introduced to reduce power consumption in User Equipment (UE), they have not been fully tailored to the paging characteristics of 5G-A downlink services. In this paper, we take full account of the impact of paging message density on energy-saving measures and propose an enhanced paging technology, termed Predictive-PEI (PPEI), which is designed to reduce UE overhead while minimizing latency whenever possible. To this end, we design a dual-threshold decision framework founded on machine learning, involving two main steps. We first use an LSTM-FNN neural network to forecast the arrival times of upcoming paging messages based on past real information. Then, the output of this initial prediction serves as the input to the dual-threshold decision algorithm, which determines the optimal moment for transmitting the PEI. The restrictive factors, encompassing the average delay threshold and the cache capacity threshold, play a role in decisions regarding paging message caching and decoding. Compared to existing schemes, our PPEI scheme flexibly sends efficient PEI according to the actual paging characteristics by introducing machine learning, resulting in substantial power savings of up to 38.89% while concurrently ensuring effective latency control.
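
    A hypothetical sketch of the dual-threshold decision (the threshold values and the final timing rule are assumptions, not the paper's parameters): PEI is transmitted once either the cache capacity threshold or the average delay threshold is about to be violated, or the predicted arrival of the next paging message is imminent.

```python
# Dual-threshold PEI transmission decision driven by the predicted arrival.
def send_pei(predicted_arrival, now, cached_msgs,
             avg_delay, delay_thresh=0.5, cache_thresh=8):
    if cached_msgs >= cache_thresh:        # cache nearly full: flush via PEI
        return True
    if avg_delay >= delay_thresh:          # latency budget exhausted
        return True
    return predicted_arrival - now < 0.05  # otherwise wait for the burst

print(send_pei(10.02, now=10.0, cached_msgs=3, avg_delay=0.2))  # True: imminent
```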

  • Xintong Zhou , Kun Xiao , Feng Ke

    In wireless Energy Harvesting (EH) cooperative networks, we investigate the problem of secure energy-saving resource allocation for downlink physical layer security transmission. Initially, we establish a model of a multi-relay cooperative network incorporating wireless energy harvesting, spectrum sharing, and system power constraints, focusing on physical layer security transmission in the presence of eavesdropping nodes. In this model, the source node transmits signals while injecting Artificial Noise (AN) to mitigate eavesdropping risks, and an idle relay can act as a jamming node to assist in this process. Based on this model, we formulate an optimization problem for maximizing the system's secure harvesting energy efficiency; this problem integrates constraints on total power, bandwidth, and AN allocation. We then conduct a mathematical analysis of the optimization problem, deriving optimal solutions for secure energy-saving resource allocation; these include strategies for power allocation at the source and relay nodes, bandwidth allocation among relays, and power splitting at the energy harvesting node. On this basis, we propose a secure resource allocation algorithm designed to maximize secure harvesting energy efficiency. Finally, we validate the correctness of the theoretical derivation through Monte Carlo simulations, discussing the impact of parameters such as legitimate channel gain, the power splitting factor, and the number of relays on the system's secure harvesting energy efficiency. The simulation results show that the proposed secure energy-saving resource allocation algorithm effectively enhances the security performance of the system.

  • Mochan Fan , Zhipeng Zhang , Zonghang Li , Gang Sun , Hongfang Yu , Jiawen Kang , Mohsen Guizani

    Vertical Federated Learning (VFL), which draws attention because of its ability to evaluate individuals based on features spread across multiple institutions, encounters numerous privacy and security threats. Existing solutions often suffer from centralized architectures and exorbitant costs. To mitigate these issues, in this paper we propose SecureVFL, a decentralized multi-party VFL scheme designed to enhance efficiency and trustworthiness while guaranteeing privacy. SecureVFL uses a permissioned blockchain and introduces a novel consensus algorithm, Proof of Feature Sharing (PoFS), to facilitate decentralized, trustworthy, and high-throughput federated training. SecureVFL introduces a verifiable and lightweight three-party Replicated Secret Sharing (RSS) protocol for feature intersection summation among overlapping users. Furthermore, we propose a ??-sharing protocol to achieve federated training in a four-party VFL setting. This protocol involves only addition operations and exhibits robustness. SecureVFL not only enables anonymous interactions among participants but also safeguards their real identities, providing mechanisms to unmask these identities when malicious activities occur. We illustrate the proposed mechanism through a case study of VFL across four banks. Finally, our theoretical analysis proves the security of SecureVFL. Experiments demonstrate that SecureVFL outperforms existing multi-party VFL privacy-preserving schemes, such as MP-FedXGB, in terms of both overhead and model performance.
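
    A toy sketch of three-party Replicated Secret Sharing (a tiny setup; no verification tags or blockchain interaction): each party holds two of three additive shares, any two parties can reconstruct, and addition is performed locally on shares, which is what the feature intersection summation relies on.

```python
# Three-party replicated secret sharing: share, add locally, reconstruct.
import random

Q = 2**61 - 1   # assumption: arithmetic over a prime field

def share(secret):
    s1, s2 = random.randrange(Q), random.randrange(Q)
    s3 = (secret - s1 - s2) % Q
    # party i holds the pair of shares that excludes s_i
    return [(s2, s3), (s3, s1), (s1, s2)]

def add_shares(a, b):          # addition is purely local per party
    return [((x1 + y1) % Q, (x2 + y2) % Q) for (x1, x2), (y1, y2) in zip(a, b)]

def reconstruct(party0, party1):
    s2, s3 = party0            # party 0 misses s1; party 1 supplies it
    _, s1 = party1
    return (s1 + s2 + s3) % Q

a, b = share(123), share(456)
c = add_shares(a, b)
print(reconstruct(c[0], c[1]))  # 579
```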

  • Marzieh Sheikhi , Vesal Hakami

    The evolution of enabling technologies in wireless communications has paved the way for supporting novel applications with more demanding QoS requirements, but at the cost of increasing the complexity of optimizing the digital communication chain. In particular, Millimeter Wave (mmWave) communications provide an abundance of bandwidth, and energy harvesting supplies the network with a continual source of energy to facilitate self-sustainability; however, harnessing these technologies is challenging due to the stochastic dynamics of the mmWave channel as well as the random sporadic nature of the harvested energy. In this paper, we aim at the dynamic optimization of update transmissions in mmWave energy harvesting systems in terms of Age of Information (AoI). AoI has recently been introduced to quantify information freshness and is a more stringent QoS metric compared to conventional delay and throughput. However, most prior art has only addressed average-based AoI metrics, which can be insufficient to capture the occurrence of rare but high-impact freshness violation events in time-critical scenarios. We formulate a control problem that aims to minimize the long-term entropic risk measure of AoI samples by configuring the “sense & transmit” of updates. Due to the high complexity of the exponential cost function, we reformulate the problem with an approximated mean-variance risk measure as the new objective. Under unknown system statistics, we propose a two-timescale model-free risk-sensitive reinforcement learning algorithm to compute a control policy that adapts to the trio of channel, energy, and AoI states. We evaluate the efficiency of the proposed scheme through extensive simulations.
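
    Numerically, the reformulation step can be illustrated as follows: the entropic risk (1/θ)·log E[exp(θ·A)] of the AoI samples A is approximated by the mean-variance measure E[A] + (θ/2)·Var[A] via the standard small-θ expansion (the synthetic Gamma-distributed samples are an assumption):

```python
# Entropic risk of AoI samples vs. its mean-variance approximation.
import numpy as np

rng = np.random.default_rng(7)
aoi = rng.gamma(shape=2.0, scale=1.5, size=100000)   # synthetic AoI samples
theta = 0.1                                          # risk-sensitivity level

entropic = np.log(np.mean(np.exp(theta * aoi))) / theta
mean_var = aoi.mean() + 0.5 * theta * aoi.var()
print(f"entropic risk {entropic:.3f} ~ mean-variance {mean_var:.3f}")
```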

  • Hao Zhao , Miaowen Wen , Fei Ji , Yaokun Liang , Hua Yu , Cui Yang

    The Underwater Acoustic (UWA) channel is bandwidth-constrained and experiences doubly selective fading. It is challenging to acquire perfect channel knowledge for Orthogonal Frequency Division Multiplexing (OFDM) communications using a finite number of pilots. On the other hand, Deep Learning (DL) approaches have been very successful in wireless OFDM communications; however, whether they will work underwater remains an open question. For the first time, this paper compares two categories of DL-based UWA OFDM receivers: the Data-Driven (DD) method, which performs as an end-to-end black box, and the Model-Driven (MD) method, also known as the model-based data-driven method, which combines DL with expert OFDM receiver knowledge. The encoder-decoder framework and a Convolutional Neural Network (CNN) structure are employed to establish the DD receiver, while an unfolding-based Minimum Mean Square Error (MMSE) structure is adopted for the MD receiver. We analyze the characteristics of the different receivers through Monte Carlo simulations under diverse communication conditions and propose a strategy for selecting a proper receiver for different communication scenarios. Field trials in a pool and at sea are also conducted to verify the feasibility and advantages of the DL receivers. It is observed that DL receivers outperform conventional receivers in terms of bit error rate.

  • Chunlong He , Guanhai Lin , Chiya Zhang , Xingquan Li

    The performance of a traditional regular Intelligent Reflecting Surface (IRS) improves as the number of IRS elements increases, but more reflecting elements lead to higher IRS power consumption and greater channel estimation overhead. The Irregular Intelligent Reflecting Surface (IIRS) can enhance the performance of the IRS and boost system performance when the number of reflecting elements is limited. However, due to the lack of radio frequency chains in the IRS, it is challenging for the Base Station (BS) to gather perfect Channel State Information (CSI), especially in the presence of Eavesdroppers (Eves). Therefore, in this paper we investigate the transmit power minimization problem of an IIRS-aided Simultaneous Wireless Information and Power Transfer (SWIPT) secure communication system with imperfect CSI of the BS-IIRS-Eves links, subject to the rate outage probability constraints of the Eves, the minimum rate constraints of the Information Receivers (IRs), the energy harvesting constraints of the Energy Receivers (ERs), and the topology matrix constraints. The formulated non-convex problem is then efficiently tackled by employing a joint optimization algorithm combined with a successive refinement method and an adaptive topology design method. Simulation results demonstrate the effectiveness of the proposed scheme and the superiority of the IIRS.

  • Sa Xiao , Xiaoge Huang , Xuesong Deng , Bin Cao , Qianbin Chen

    To protect user privacy and data security, the integration of Federated Learning (FL) and blockchain has become an emerging research hotspot. However, the limited throughput and high communication complexity of traditional blockchains limit their application to large-scale FL tasks, and traditional synchronous FL also reduces training efficiency. To address these issues, in this paper we propose a Directed Acyclic Graph (DAG) blockchain-enabled generalized Federated Dropout (FD) learning strategy, which improves the efficiency of FL while ensuring model generalization. First, the DAG maintained by multiple edge servers guarantees the security and traceability of the data, and a Reputation-based Tips Selection Algorithm (RTSA) is proposed to reduce the blockchain consensus delay. Second, semi-asynchronous training among Intelligent Devices (IDs) is adopted to improve training efficiency, and a reputation-based FD technique is proposed to prevent overfitting of the model. In addition, a Hybrid Optimal Resource Allocation (HORA) algorithm is introduced to minimize the network delay. Finally, simulation results demonstrate the effectiveness and superiority of the proposed algorithms.

  • Lincong Zhang , Mingyang Zhang , Xiangyu Liu , Lei Guo

    The 6G smart Fog Radio Access Network (F-RAN) is an integration of 6G network intelligence technologies with the F-RAN architecture. Its aim is to provide low-latency and high-performance services for massive numbers of access devices. However, the performance of current 6G network intelligence technologies and their level of integration with the architecture, along with system-level requirements on the number of access devices and limitations on energy consumption, have impeded further improvements in the 6G smart F-RAN. To better analyze the root causes of these network problems and promote the practical development of the network, this study uses structured methods such as segmentation to review the topic. The results reveal that many problems remain in the current 6G smart F-RAN. Future research directions and difficulties are also discussed.

  • Zhuo Chen , Jiahuan Yi , Yang Zhou , Wei Luo

    Blockchain technology, based on decentralized data storage and distributed consensus design, has become a promising solution for addressing data security risks and providing privacy protection in the Internet of Things (IoT) due to its tamper-proof and non-repudiation features. Although blockchain typically does not require the endorsement of third-party trust organizations, it usually needs to perform substantial mathematical calculations to prevent malicious attacks, which imposes strict computation resource requirements on participating devices. By offloading the computation tasks required to support blockchain consensus to edge service nodes or the cloud, while providing data privacy protection for IoT applications, the limitations of computation and energy resources in IoT devices can be effectively addressed. However, how to make reasonable offloading decisions for IoT devices remains an open issue. Leveraging the excellent self-learning ability of Reinforcement Learning (RL), this paper proposes an RL-enabled Swarm Intelligence Optimization Algorithm (RLSIOA) that aims to improve the quality of initial solutions and achieve efficient optimization of computation task offloading decisions. The algorithm considers various factors that may affect the revenue obtained by IoT devices executing consensus algorithms (e.g., Proof-of-Work); it optimizes the proportion of sub-tasks to be offloaded and the scale of computing resources to be rented from the edge and cloud so as to maximize the revenue of devices. Experimental results show that RLSIOA obtains higher-quality offloading decision schemes at lower latency costs than representative benchmark algorithms.

  • Xin Fan , Guangkai Li , Jianqiang Li , Yue Wang , Chuanwen Luo , Yi Hong , Ting Chen , Yan Huo

    Recognized as a pivotal facet of Beyond Fifth-Generation (B5G) and the upcoming Sixth-Generation (6G) wireless networks, Unmanned Aerial Vehicle (UAV) communications pose challenges due to UAVs' limited capabilities when serving as mobile base stations, leading to suboptimal service for edge users. To address this, the collaborative formation of Coordinated Multi-Point (CoMP) networks proves instrumental in alleviating the poor Quality of Service (QoS) experienced by edge users at the network periphery. This paper introduces a novel solution, the Hybrid Uplink-Downlink Non-Orthogonal Multiple Access (HUD-NOMA) scheme for UAV-aided CoMP networks. Leveraging network coding and NOMA technology, the proposed HUD-NOMA effectively enhances transmission rates for edge users, notwithstanding a minor reduction in signal reception reliability for strong signals; importantly, the system's overall sum rate is elevated. The proposed HUD-NOMA demonstrates resilience against eavesdroppers by effectively managing intended interference without the need for additional artificial noise injection. The study employs a stochastic geometry approach to derive the Secrecy Outage Probability (SOP) for transmissions in the CoMP network, and numerical verification reveals superior transmission rates and lower SOP compared to existing methods. Furthermore, guided by the theoretical SOP derivation, this paper proposes a power allocation strategy to further reduce the system's SOP.