2024-12-01, Volume 10, Issue 6

  • research-article
    JungSook Bae, Waqas Khalid, Anseok Lee, Heesoo Lee, Song Noh, Heejung Yu

    As the 6th-Generation (6G) wireless communication networks evolve, privacy concerns are expected due to the transmission of vast amounts of security-sensitive private information. In this context, a Reconfigurable Intelligent Surface (RIS) emerges as a promising technology capable of enhancing transmission efficiency and strengthening information security. This study demonstrates how RISs can play a crucial role in making 6G networks more secure against eavesdropping attacks. We discuss the fundamentals and standardization aspects of RISs, along with an in-depth analysis of Physical-Layer Security (PLS). Our discussion centers on PLS design using RIS, highlighting aspects including beamforming, resource allocation, artificial noise, and cooperative communications. We also identify the research issues, propose potential solutions, and explore future perspectives. Finally, numerical results are provided to support our discussions and demonstrate the enhanced security enabled by RIS.
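
    For context, the standard physical-layer secrecy metric underlying such designs is the secrecy rate, the positive gap between the legitimate and eavesdropping link rates. A one-line numerical sketch follows; the SNR values are hypothetical, not from the paper.

    ```python
    import math

    def secrecy_rate(snr_legit, snr_eve):
        """Secrecy rate [log2(1+SNR_B) - log2(1+SNR_E)]^+ in bit/s/Hz."""
        return max(0.0, math.log2(1.0 + snr_legit) - math.log2(1.0 + snr_eve))

    # Hypothetical SNRs: the RIS boosts the legitimate link to 20 dB while the
    # eavesdropper only sees 5 dB.
    print(secrecy_rate(10**2.0, 10**0.5))
    ```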

  • research-article
    Bin Li, Wenshuai Liu, Wancheng Xie

    In this paper, we investigate a Reconfigurable Intelligent Surface (RIS)-assisted secure Symbiotic Radio (SR) network to address the information leakage from the primary transmitter (PTx) to potential eavesdroppers. Specifically, the RIS serves as a secondary transmitter in the SR network to secure the communication between the PTx and the Primary Receiver (PRx), while simultaneously transmitting its own information to the PTx by configuring the phase shifts. Considering the presence of multiple eavesdroppers and uncertain channels in practical scenarios, we jointly optimize the active beamforming of the PTx and the phase shifts of the RIS to maximize the secrecy energy efficiency of RIS-supported SR networks while satisfying the quality-of-service requirement and the secure communication rate. To solve this complicated non-convex stochastic optimization problem, we propose a secure beamforming method based on Proximal Policy Optimization (PPO), an efficient deep reinforcement learning algorithm, to find the optimal beamforming strategy against eavesdroppers. Simulation results show that the proposed PPO-based method achieves fast convergence and a secrecy energy efficiency gain of up to 22% compared with the considered benchmarks.
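
    The abstract names PPO but gives no implementation detail. As a minimal, hypothetical sketch, the clipped surrogate objective at the core of any PPO agent (including one whose actions encode beamformers and phase shifts) can be written as follows; the tensor inputs are illustrative assumptions, not the authors' code.

    ```python
    import torch

    def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
        """Clipped surrogate objective of PPO (Schulman et al., 2017)."""
        # Probability ratio r = pi_new(a|s) / pi_old(a|s), computed in log space.
        ratio = torch.exp(new_logp - old_logp)
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        # PPO maximizes the clipped surrogate; its negation serves as the loss.
        return -torch.min(unclipped, clipped).mean()
    ```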

  • research-article
    Min Wei, Qianying Zhao, Bo Lei, Yizhuo Cai, Yushun Zhang, Xing Zhang, Wenbo Wang

    Federated Learning (FL) is a novel distributed machine learning methodology that addresses large-scale parallel computing challenges while safeguarding data security. However, the traditional FL model in communication scenarios, whether for uplink or downlink communications, may give rise to several network problems, such as bandwidth occupation, additional network latency, and bandwidth fragmentation. In this paper, we propose an adaptive chained training approach (FedACT) for FL in computing power networks. First, a Computation-driven Clustering Strategy (CCS) is designed, in which the server clusters clients by task processing delay to minimize waiting delays at the central server. Second, we propose a Genetic-Algorithm-based Sorting (GAS) method to optimize the order in which clients participate in training. Finally, based on the table lookup and forwarding rules of the Segment Routing over IPv6 (SRv6) protocol, the sorting results of GAS are written into the SRv6 packet header to control the order in which clients participate in model training. We conduct extensive experiments on two datasets, CIFAR-10 and MNIST, and the results demonstrate that the proposed algorithm offers improved accuracy, diminished communication costs, and reduced network delays.
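
    The internals of GAS are not specified in the abstract; a generic genetic algorithm over client permutations, sketched below with assumed fitness and delay values, conveys the idea.

    ```python
    import random

    def ga_sort(num_clients, fitness, pop_size=30, generations=100, mut_rate=0.2):
        """Evolve a client ordering that maximizes a user-supplied fitness."""
        pop = [random.sample(range(num_clients), num_clients) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                p1, p2 = random.sample(survivors, 2)
                cut = random.randrange(1, num_clients)
                # Order crossover: keep a prefix of p1, fill the rest in p2's order.
                child = p1[:cut] + [c for c in p2 if c not in p1[:cut]]
                if random.random() < mut_rate:  # swap mutation
                    i, j = random.sample(range(num_clients), 2)
                    child[i], child[j] = child[j], child[i]
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    # Hypothetical fitness: schedule slow clients early to shrink waiting delays.
    delays = [3.0, 1.0, 4.0, 2.0]
    print(ga_sort(4, lambda order: -sum(i * delays[c] for i, c in enumerate(order))))
    ```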

  • research-article
    Yan Gu, Feng Cheng, Lijie Yang, Junhui Xu, Xiaomin Chen, Long Cheng

    Cloud workloads are highly dynamic and complex, making task scheduling in cloud computing a challenging problem. While several scheduling algorithms have been proposed in recent years, they are mainly designed to handle batch tasks and are not well-suited for real-time workloads. To address this issue, researchers have started exploring the use of Deep Reinforcement Learning (DRL). However, the existing models are limited to handling independent tasks and cannot process workflows, which are prevalent in cloud computing and consist of related subtasks. In this paper, we propose SA-DQN, a scheduling approach specifically designed for real-time cloud workflows. Our approach seamlessly integrates the Simulated Annealing (SA) algorithm and the Deep Q-Network (DQN) algorithm. The SA algorithm is employed to determine an optimal execution order of subtasks in a cloud server, which serves as a crucial feature of the task for the neural network to learn. We provide a detailed design of our approach and show, through experimental results, that SA-DQN outperforms existing algorithms in handling real-time cloud workflows.
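
    As a rough, self-contained illustration of the SA component (the cost model below is hypothetical; the paper anneals the execution order of workflow subtasks in a cloud server):

    ```python
    import math
    import random

    def simulated_annealing(order, cost, t0=1.0, t_min=1e-3, alpha=0.95, iters=100):
        """Anneal a subtask execution order to minimize a user-supplied cost."""
        cur, best = order[:], order[:]
        t = t0
        while t > t_min:
            for _ in range(iters):
                cand = cur[:]
                i, j = random.sample(range(len(cand)), 2)
                cand[i], cand[j] = cand[j], cand[i]  # swap two subtasks
                delta = cost(cand) - cost(cur)
                # Accept improvements always, worse moves with Boltzmann probability.
                if delta < 0 or random.random() < math.exp(-delta / t):
                    cur = cand
                    if cost(cur) < cost(best):
                        best = cur[:]
            t *= alpha  # geometric cooling schedule
        return best

    # Hypothetical cost: weighted completion time for subtasks of given durations.
    durations = [4.0, 1.0, 3.0, 2.0]
    cost = lambda order: sum((i + 1) * durations[s] for i, s in enumerate(order))
    print(simulated_annealing([0, 1, 2, 3], cost))
    ```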

  • research-article
    Ziyi Lu, Tianxiong Wu, Jinshan Su, Yunting Xu, Bo Qian, Tianqi Zhang, Haibo Zhou

    With the support of Vehicle-to-Everything (V2X) technology and computing power networks, the existing intersection traffic order is expected to benefit from efficiency improvements and energy savings by new schemes such as de-signalization. How to effectively manage autonomous vehicles for traffic control with high throughput at unsignalized intersections while ensuring safety has been a research hotspot. This paper proposes a collision-free autonomous vehicle scheduling framework based on edge-cloud computing power networks for unsignalized intersections where the lanes entering the intersections are undirectional, and designs an efficient communication system and protocol. First, by analyzing the collision point occupation time, this paper formulates an absolute value programming problem. Second, this problem is solved with low complexity by the Edge Intelligence Optimal Entry Time (EI-OET) algorithm based on edge-cloud computing power support. Then, the communication system and protocol are designed for the proposed scheduling scheme to realize efficient and low-latency vehicular communications. Finally, simulation experiments compare the proposed scheduling framework with directional and traditional traffic light scheduling mechanisms, and the experimental results demonstrate its high efficiency, low latency, and low complexity.

  • research-article
    Xiaoge Huang, Hongbo Yin, Qianbin Chen, Yu Zeng, Jianfeng Yao

    To provide diversified services in intelligent transportation systems, smart vehicles will generate unprecedented amounts of data every day. Due to data security and user privacy issues, Federated Learning (FL) is considered a potential solution for privacy preservation in data sharing. However, there are still many challenges to applying traditional synchronous FL directly in the Internet of Vehicles (IoV), such as unreliable communications and malicious attacks. In this paper, we propose a Directed Acyclic Graph (DAG) based Swarm Learning (DSL) framework, which integrates edge computing, FL, and blockchain technologies to provide secure data sharing and model training in IoVs. To deal with the high mobility of vehicles, a dynamic vehicle association algorithm is introduced, which optimizes the connections between vehicles and roadside units to improve training efficiency. Moreover, to enhance the anti-attack property of the DSL algorithm, a malicious attack detection method is adopted, which recognizes malicious vehicles by their site confirmation rate. Furthermore, an accuracy-based reward mechanism is developed to encourage vehicles to participate in the model training with honest behavior. Finally, simulation results demonstrate that the proposed DSL algorithm achieves better performance in terms of model accuracy, convergence rate, and security compared with existing algorithms.

  • research-article
    Daoqi Han, Yang Liu, Fangwei Zhang, Yueming Lu

    Considering the privacy challenges of secure storage and controlled flow, there is an urgent need to realize a decentralized ecosystem of private blockchains for cyberspace. A collaboration dilemma arises when the participants are self-interested and lack feedback of complete information. Traditional blockchains suffer related faults, such as trustlessness, single-factor consensus, and a heavyweight distributed ledger, preventing them from adapting to the heterogeneous and resource-constrained Internet of Things. In this paper, we develop a game-theoretic design of a two-sided rating with complete information feedback to stimulate collaboration in private blockchains. The design consists of an evolution strategy for the decision-making network and a computing power network for continuously verifiable proofs. We formulate the optimum rating and resource scheduling problems as two-stage iterative games between participants and leaders. We theoretically prove that the Stackelberg equilibrium exists and that the group evolution is stable. Then, we propose a multi-stage evolution consensus with feedback on the block-accounting workload for metadata survival. To continuously validate a block, the metadata of the optimum rating, privacy, and proofs are extracted and stored on a lightweight blockchain. Moreover, to increase resource utilization, surplus computing power is scheduled flexibly to enhance security by degrees. Finally, the evaluation results show the validity and efficiency of our model, thereby solving the collaboration dilemma in the private blockchain.

  • research-article
    Chaoqiong Fan, Xinyu Wu, Bin Li, Chenglin Zhao

    Capable of flexibly supporting diverse applications and providing computation services, the Mobile Edge Computing (MEC)-assisted Unmanned Aerial Vehicle (UAV) network is emerging as an innovative paradigm. In this paradigm, the heterogeneous resources of the network, including computing and communication resources, should be allocated properly to reduce computation and communication latency as well as energy consumption. However, most existing works focus solely on optimization with global information, which is generally difficult to obtain in real-world scenarios. In this paper, fully considering the incomplete information resulting from diverse types of tasks, we study the joint task offloading and spectrum allocation problem in a UAV network, where free UAV nodes serve as helpers for cooperative computation. The objective is to jointly optimize the offloading mode, collaboration pairing, and channel allocation to minimize the weighted network cost. To achieve this with only partial observations, an extensive-form game is introduced to reformulate the problem, and a regret learning-based scheme is proposed to reach the equilibrium solution. With its retrospective improvement property and the concept of information sets, the designed algorithm is capable of combating incomplete information and obtaining more precise allocation patterns for diverse tasks. Numerical results show that our proposed algorithm outperforms the benchmarks across various settings.
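
    The paper's regret learning runs over an extensive-form game with information sets; as a simplified, self-contained illustration, the classic regret-matching update for a single decision point (with a made-up noisy payoff over three offloading modes) looks like this:

    ```python
    import random

    def regret_matching(num_actions, payoff, rounds=2000):
        """Play proportionally to positive cumulative regret (Hart & Mas-Colell)."""
        regret = [0.0] * num_actions
        probs = [1.0 / num_actions] * num_actions
        for _ in range(rounds):
            pos = [max(r, 0.0) for r in regret]
            total = sum(pos)
            probs = [p / total for p in pos] if total > 0 else [1.0 / num_actions] * num_actions
            action = random.choices(range(num_actions), weights=probs)[0]
            realized = payoff(action)
            for a in range(num_actions):
                # Retrospective improvement: how much better action a would have been.
                regret[a] += payoff(a) - realized
        return probs

    utils = [1.0, 1.5, 0.8]  # hypothetical mean utilities of three offloading modes
    print(regret_matching(3, lambda a: utils[a] + random.gauss(0, 0.1)))
    ```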

  • research-article
    Lujie Guo, Fengxian Guo, Mugen Peng

    Driven by diverse intelligent applications, computing capability is moving from the central cloud to the edge of the network in the form of small cloud nodes, forming a distributed computing power network. Tasked with both packet transmission and data processing, such a network requires joint optimization of communications and computing. Considering the diverse requirements of applications, we develop a dynamic routing control policy that determines both the paths and the computing nodes in a distributed computing power network. Different from traditional routing protocols, the proposed policy takes additional computing-related metrics into consideration. Based on multi-attribute decision theory and fuzzy logic theory, we propose two routing selection algorithms, the Fuzzy Logic-Based Routing (FLBR) algorithm and the low-complexity Pairwise Multi-Attribute Decision-Making (lPMADM) algorithm. Simulation results show that the proposed policy achieves better performance in average processing delay, user satisfaction, and load balancing compared with existing works.
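
    FLBR and lPMADM are specified in the paper; as a generic illustration of computing-aware multi-attribute route ranking (the attribute names, values, and weights below are hypothetical):

    ```python
    def rank_routes(candidates, weights, benefit=("compute",)):
        """Rank routes by a weighted sum of min-max normalized attributes.
        Cost attributes (delay, load) are flipped so higher scores are better."""
        attrs = list(weights)
        lo = {a: min(c[a] for c in candidates.values()) for a in attrs}
        hi = {a: max(c[a] for c in candidates.values()) for a in attrs}

        def score(c):
            s = 0.0
            for a in attrs:
                span = (hi[a] - lo[a]) or 1.0
                norm = (c[a] - lo[a]) / span
                s += weights[a] * (norm if a in benefit else 1.0 - norm)
            return s

        return sorted(candidates, key=lambda r: score(candidates[r]), reverse=True)

    routes = {
        "path-A": {"delay": 12.0, "load": 0.7, "compute": 16.0},
        "path-B": {"delay": 20.0, "load": 0.4, "compute": 32.0},
    }
    print(rank_routes(routes, {"delay": 0.5, "load": 0.2, "compute": 0.3}))
    ```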

  • research-article
    Feifei Shi, Huansheng Ning, Xiaohong Zhang, Rongyang Li, Qiaohui Tian, Shiming Zhang, Yuanyuan Zheng, Yudong Guo, Mahmoud Daneshmand

    The Metaverse depicts a parallel digitalized world where virtuality and reality are fused. It has economic and social systems like those in the real world and provides intelligent services and applications. In this paper, we introduce the Metaverse from a new technology perspective, including its essence, corresponding technical framework, and potential technical challenges. Specifically, we analyze the essence of the Metaverse from its etymology and, drawing on Maslow's Hierarchy of Needs, point out breakthroughs likely to be made in the time, space, and contents of the Metaverse. Subsequently, we identify four pillars of the Metaverse, namely ubiquitous connections, space convergence, virtuality and reality interaction, and human-centered communication, and establish a corresponding technical framework. Additionally, we envision open issues and challenges of the Metaverse in the technical aspect. This work proposes a new technology perspective on the Metaverse and provides guidance for its future technological development.

  • research-article
    Hao Du, Supeng Leng, Jianhua He, Kai Xiong, Longyu Zhou

    Road obstacles that appear unexpectedly due to vehicle breakdowns and accidents are major causes of fatal road accidents. Connected Autonomous Vehicles (CAVs) can be used to avoid collisions and ensure road safety through cooperative sensing and driving. However, the collision avoidance performance of CAVs facing unexpected obstacles has not been studied in existing works. In this paper, we first design a platoon-based collision avoidance framework for CAVs. In this framework, we deploy a Digital Twin (DT) system at the head vehicle of a platoon to reduce communication overhead and decision-making delay, based on a proposed trajectory planning scheme. In addition, a DT-assistant system is deployed on an assistant vehicle to monitor vehicles outside the sensing range of the head vehicle and maintain the DT system. In this way, the transmission frequency of the kinetic states of platoon members can be reduced to ensure low-overhead communication. Moreover, we design a variable resource reservation interval that ensures highly reliable synchronization between the DT and the DT-assistant system. To further improve road safety, an urgency level-based trajectory planning algorithm is proposed to avoid unexpected obstacles under different levels of emergency risk. Simulation results show that our DT-based scheme can achieve significant performance gains in unexpected obstacle avoidance. Compared with existing schemes, it reduces collisions by 95% and passes unexpected obstacles about 10% faster.

  • research-article
    Mihaela I. Chidean, Luis Ignacio Jiménez Gil, Javier Carmona-Murillo, David Cortés-Polo

    The exponential growth of the number of network devices in recent years not only entails the need for automation of management tasks, but also increases the available network data and metadata. 5G and beyond standards already cover those requirements and also call for machine learning techniques to take advantage of the acquired data, especially geolocated Call Detail Record (CDR) data sets. However, this scenario requires novel cellular network analysis methodologies to exploit all these available data, especially regarding network usage patterns, in order to ease management tasks. In this work, a novel method based on information theory metrics, such as the Kullback-Leibler divergence, and data classification algorithms is proposed to identify representative urban areas in terms of network usage patterns. The methodology is validated via computational analysis on the Open Big Data CDR data set for the Milan area under different scenarios. The obtained results validate the proposed methodology and also reveal its adaptability to the characteristics of specific scenarios. Network usage patterns are calculated for each representative area, paving the way for several future research lines in network management, such as network usage prediction based on this methodology and its behavior time series.
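
    One ingredient the abstract names, the Kullback-Leibler divergence between network usage distributions, is easy to sketch; the hourly histograms below are made-up stand-ins for per-cell CDR activity.

    ```python
    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """D_KL(P || Q) between two discrete network-usage distributions."""
        p = np.asarray(p, dtype=float) + eps  # smoothing avoids log(0) and /0
        q = np.asarray(q, dtype=float) + eps
        p /= p.sum()
        q /= q.sum()
        return float(np.sum(p * np.log(p / q)))

    # Hypothetical hourly CDR activity histograms for two urban grid cells.
    cell_a = [5, 3, 2, 8, 30, 55, 40, 22]
    cell_b = [4, 4, 3, 9, 28, 50, 45, 20]
    print(kl_divergence(cell_a, cell_b))
    ```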

  • research-article
    Yuan Ai, Chenxi Liu, Mugen Peng

    Integrating Non-Orthogonal Multiple Access (NOMA) into Fog Radio Access Networks (F-RANs) has been shown to be effective in boosting spectral efficiency, energy efficiency, and connectivity, and in reducing latency, thus attracting significant research attention. However, the performance improvement of NOMA-enabled F-RANs comes at the cost of computational overheads, which are commonly neglected in their design and deployment. To address this issue, in this paper, we propose a hybrid dynamic downlink framework for NOMA-enabled F-RANs. In this framework, we first develop a novel network utility function, which takes both the network throughput and the computational overheads into consideration, enabling us to comprehensively evaluate the performance of different access schemes for F-RANs. Based on the developed network utility function, we further formulate a network utility maximization problem, subject to practical constraints on the decoding order, power allocation, and quality of service. To solve this NP-hard problem, we decompose it into two subproblems, namely, a user equipment association and subchannel assignment subproblem and a power allocation subproblem. Three-dimensional matching and sequential convex programming-based algorithms are designed to solve these two subproblems, respectively. Through numerical results, we show that our proposed algorithms can achieve a good balance between network throughput and computational overheads by judiciously adjusting the maximum transmit power of fog access points. We also show that the proposed NOMA-enabled F-RAN framework can increase the network utility by up to 89% compared with Orthogonal Multiple Access (OMA)-based F-RANs.

  • research-article
    Haoye Chai, Supeng Leng, Jianhua He, Ke Zhang

    The Internet of Vehicles (IoV) has great potential for Intelligent Transportation Systems (ITS), enabling interactive vehicle applications such as advanced driving and infotainment. It is crucial to ensure reliability during the vehicle-to-vehicle interaction process. Although the emerging blockchain has advantages in handling security-related issues, existing blockchain-based schemes show weaknesses in highly dynamic IoV. Both the transaction broadcast and the consensus process require multiple rounds of communication throughout the whole network, while the high relative speeds between vehicles and the dynamic topology result in intermittent connections that degrade the efficiency of the blockchain. In this paper, we propose a Digital Twin (DT)-enabled blockchain framework for dynamic IoV, which aims to reduce both the communication cost and the operational latency of the blockchain. To address the dynamic context, we propose a DT construction strategy that jointly considers DT migration and blockchain computing consumption. Moreover, a communication-efficient Local Perceptual Multi-Agent Deep Deterministic Policy Gradient (LPMA-DDPG) algorithm is designed to execute the DT construction strategy among edge servers in a decentralized manner. Simulation results show that the proposed framework can greatly reduce the communication cost while achieving good security performance. The dynamic DT construction strategy shows superiority in operational latency compared with benchmark strategies, and the decentralized LPMA-DDPG algorithm is helpful for implementing the optimal DT construction strategy in practical ITS.

  • research-article
    Tong Tang, Yi Yang, Dapeng Wu, Ruyan Wang, Zhidu Li

    The Joint Video Experts Team (JVET) has announced the latest generation of the Versatile Video Coding (VVC, H.266) standard. The in-loop filter in VVC inherits the De-Blocking Filter (DBF) and Sample Adaptive Offset (SAO) of High Efficiency Video Coding (HEVC, H.265), and adds the Adaptive Loop Filter (ALF) to minimize the error between the original and decoded samples. However, for chaotic moving video encoded at low bitrates, serious blocking artifacts still remain after in-loop filtering due to the severe quantization distortion of texture details. To tackle this problem, this paper proposes a Convolutional Neural Network (CNN) based VVC in-loop filter for chaotic moving video encoding at low bitrates. First, a blur-aware attention network is designed to perceive the blurring effect and restore texture details. Then, a deep in-loop filtering method based on the blur-aware network is proposed to replace the VVC in-loop filter. Finally, experimental results show that the proposed method saves 8.3% of bit consumption on average at similar subjective quality. Meanwhile, at similar bitrate consumption, the proposed method reconstructs more texture information, thereby significantly reducing blocking artifacts and improving visual quality.

  • research-article
    Neena Susan Shaji, Raja Muthalagu

    Software-Defined Networking (SDN) improves network management by separating its control logic from the underlying hardware and integrating it into a logically centralized control unit, termed the SDN controller. SDN adaptation is essential for wireless networks because it enables enhanced and data-intensive services. The initial intent of the SDN design was to have a physically centralized controller. However, network experts have suggested logically centralized but physically distributed designs for SDN controllers, owing to issues such as a single point of failure and scalability. This study addresses the security, scalability, reliability, and consistency issues associated with the design of distributed SDN controllers. Moreover, the enterprise security issues related to multiple physically distributed controllers in a Software-Defined Wireless Local Area Network (SD-WLAN) are emphasized, and optimal solutions are suggested.

  • research-article
    Zhigang Du, Sunxuan Zhang, Zijia Yao, Zhenyu Zhou, Muhammad Tariq

    Power Line Communications-Artificial Intelligence of Things (PLC-AIoT) combines the low cost and high coverage of PLC with the learning ability of Artificial Intelligence (AI) to provide data collection and transmission capabilities for PLC-AIoT devices in smart parks. With the development of smart parks, their emerging services require secure and accurate time synchronization of PLC-AIoT devices. However, the impact of attackers on the accuracy of time synchronization cannot be ignored. To solve these problems, we propose a tampering attack-aware Deep Q-Network (DQN)-based time synchronization algorithm. First, we construct an abnormal clock source detection model. Then, the abnormal clock source is detected and excluded by comparing the time synchronization information between the device and the gateway. Finally, the proposed algorithm jointly guarantees high accuracy and low delay for PLC-AIoT in smart parks by intelligently selecting the multi-clock-source cooperation strategy and timing weights. Simulation results show that the proposed algorithm achieves better time synchronization delay and accuracy.

  • research-article
    Xiao Lin, Ruolin Wu, Haibo Mei, Kun Yang

    The Computing Power Network (CPN) is emerging as an important research interest in beyond-5G (B5G) and 6G networks. This paper constructs a CPN based on Federated Learning (FL), where all Multi-access Edge Computing (MEC) servers are linked to a computing power center via wireless links. Through this FL procedure, each MEC server in the CPN can independently train learning models using localized data, thus preserving data privacy. However, it is challenging to motivate MEC servers to participate in the FL process efficiently and to ensure their energy efficiency. To address these issues, we first introduce an incentive mechanism based on the Stackelberg game framework to motivate MEC servers. Afterwards, we formulate a comprehensive algorithm that jointly optimizes the communication resource (wireless bandwidth and transmission power) allocation and the computation resource (computation capacity of MEC servers) allocation while ensuring the local training accuracy of each MEC server. Numerical results validate that the proposed incentive mechanism and joint optimization algorithm improve the energy efficiency and performance of the considered CPN.
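
    The paper works the incentive game out in full; a toy one-leader, multi-follower Stackelberg model with quadratic follower costs (all constants hypothetical) illustrates the structure: the center announces a per-unit reward, each server best-responds with its training effort, and the center searches for the reward that maximizes its own utility.

    ```python
    import numpy as np

    costs = np.array([0.8, 1.0, 1.5])  # hypothetical unit energy costs of servers

    def follower_effort(r):
        # Each server maximizes r*x - c*x**2, giving best response x* = r / (2c).
        return r / (2.0 * costs)

    def leader_utility(r, value_per_unit=2.0):
        x = follower_effort(r)
        return value_per_unit * x.sum() - r * x.sum()  # training value minus payout

    rewards = np.linspace(0.01, 2.0, 200)
    r_star = rewards[np.argmax([leader_utility(r) for r in rewards])]
    print(r_star, follower_effort(r_star))  # Stackelberg reward and efforts
    ```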

  • research-article
    Qiuping Zhang, Sheng Sun, Junjie Luo, Min Liu, Zhongcheng Li, Huan Yang, Yuwei Wang

    Various intelligent applications based on non-chain DNN models are widely used in Internet of Things (IoT) scenarios. However, resource-constrained IoT devices usually cannot afford the heavy computation burden of non-chain DNN models and cannot guarantee their strict inference latency requirements. Multi-device collaboration has become a promising paradigm for inference acceleration. However, existing works neglect the possibility of inter-layer parallel execution, which fails to exploit the parallelism of collaborating devices and inevitably prolongs the overall completion latency. Thus, there is an urgent need to address non-chain DNN inference acceleration with multi-device collaboration based on inter-layer parallelism. Three major challenges in this problem are exponential computational complexity, complicated layer dependencies, and intractable execution location selection. To this end, we propose a Topological Sorting Based Bidirectional Search (TSBS) algorithm that can adaptively partition non-chain DNN models and select suitable execution locations at layer granularity. More specifically, the TSBS algorithm consists of a topological sorting subalgorithm that realizes parallel execution with low computational complexity under complicated layer-parallel constraints, and a bidirectional search subalgorithm that quickly finds suitable execution locations for non-parallel layers. Extensive experiments show that the TSBS algorithm significantly outperforms state-of-the-art methods in the completion latency of non-chain DNN inference, with a reduction of up to 22.69%.
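
    The full TSBS algorithm is more involved; the sketch below shows only the topological-sorting idea it builds on, grouping the layers of a non-chain DNN into stages whose members have no mutual dependencies and can therefore run on different devices in parallel (the example graph is hypothetical).

    ```python
    def parallel_stages(layers, deps):
        """Group DAG layers into stages of mutually independent layers."""
        indeg = {v: 0 for v in layers}
        succ = {v: [] for v in layers}
        for v, parents in deps.items():
            for p in parents:
                indeg[v] += 1
                succ[p].append(v)
        ready = [v for v in layers if indeg[v] == 0]
        stages = []
        while ready:
            stages.append(ready)
            nxt = []
            for v in ready:
                for w in succ[v]:
                    indeg[w] -= 1
                    if indeg[w] == 0:
                        nxt.append(w)
            ready = nxt
        return stages

    # Hypothetical non-chain model: two branches that fork and merge.
    layers = ["conv1", "branch_a", "branch_b", "concat", "fc"]
    deps = {"branch_a": ["conv1"], "branch_b": ["conv1"],
            "concat": ["branch_a", "branch_b"], "fc": ["concat"]}
    print(parallel_stages(layers, deps))
    # [['conv1'], ['branch_a', 'branch_b'], ['concat'], ['fc']]
    ```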

  • research-article
    Yuanzhi He, Fan Yang, Guodong Han, Yuanyuan Li

    With the rapid development of satellite communications, satellite antennas are attracting growing interest, especially high-throughput SatCom-on-the-move antennas that provide high-speed connectivity in mobile environments. While conventional antennas, such as parabolic dishes and planar waveguide arrays, enjoy a growing market, new kinds of antennas keep emerging to meet diversified requirements in various satellite communication scenarios. This paper first introduces the design requirements, categories, and evolution of SatCom-on-the-move antennas, and then discusses representative designs of mechanical and electronic scanning antennas, including their structures, principles, characteristics, and limitations in practical applications. Given the new requirements of satellite communications, this paper also highlights some of the latest progress in this field, including the Monolithic Microwave Integrated Circuit (MMIC)-based phased array antenna, the metasurface-based phased array antenna, and their hybrid versions. Finally, some critical challenges facing future antennas are discussed. Concerted efforts from the antenna, microwave, and material communities, among others, will be needed to advance SatCom-on-the-move antennas for the upcoming era of satellite communication.

  • research-article
    Xiaoyi Zhou, Liang Huang, Tong Ye, Weiqiang Sun

    This paper investigates the multi-Unmanned Aerial Vehicle (UAV)-assisted wireless-powered Mobile Edge Computing (MEC) system, where UAVs provide computation and powering services to mobile terminals. We aim to maximize the number of completed computation tasks by jointly optimizing the offloading decisions of all terminals and the trajectory planning of all UAVs. The action space of the system is extremely large and grows exponentially with the number of UAVs. In this case, single-agent learning would require an overly large neural network, resulting in insufficient exploration. However, the offloading decisions and the trajectory planning are two subproblems handled by different executants, which opens an opportunity for decomposition. We thus adopt the idea of decomposition and propose a 2-Tiered Multi-agent Soft Actor-Critic (2T-MSAC) algorithm, decomposing a single neural network into multiple small-scale networks. In the first tier, a single agent is used for offloading decisions, and an online pretrained model based on imitation learning is specially designed to accelerate the training process of this agent. In the second tier, UAVs utilize multiple agents to plan their trajectories. Each agent influences the parameter updates of other agents through its actions and rewards, thereby achieving joint optimization. Simulation results demonstrate that the proposed algorithm can be applied to scenarios with various location distributions of terminals, outperforming existing benchmarks that perform well only in specific scenarios. In particular, 2T-MSAC increases the number of completed tasks by 45.5% in the scenario with uneven terminal distributions. Moreover, the pretrained model based on imitation learning reduces the convergence time of 2T-MSAC by 58.2%.

  • research-article
    Weijian Tao, Zufan Zhang, Xi Liu, Maobin Yang

    In breast cancer grading, the subtle differences between HE-stained pathological images and the insufficient number of data samples lead to grading inefficiency. With its rapid development, deep learning technology has been widely used for automatic breast cancer grading based on pathological images. In this paper, we propose an integrated breast cancer grading framework based on a fused deep learning model, which uses three different convolutional neural networks as submodels to extract feature information at different levels from pathological images. The output features of each submodel are then learned by a stacking-based fusion network to generate the final decision. To validate the effectiveness and reliability of the proposed model, we perform binary and multi-class classification experiments on the Invasive Ductal Carcinoma (IDC) pathological image dataset and a generated dataset, and compare its performance with that of state-of-the-art models. The classification accuracy of the proposed fusion network is 93.8%, the recall is 93.5%, and the F1 score is 93.8%, outperforming the state-of-the-art methods.
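
    As a compact illustration of stacking-based fusion, the scikit-learn sketch below uses classical classifiers standing in for the paper's three CNN submodels and synthetic data in place of the IDC images.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=30, random_state=0)

    # Three heterogeneous submodels stand in for the three CNN branches; a
    # logistic-regression meta-learner fuses their predictions (stacking).
    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("svm", SVC(probability=True, random_state=0)),
                    ("knn", KNeighborsClassifier())],
        final_estimator=LogisticRegression(max_iter=1000),
    )
    stack.fit(X[:300], y[:300])
    print(stack.score(X[300:], y[300:]))
    ```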

  • research-article
    Zhixiong Chen, Jiawei Yang, Zhenyu Zhou

    In response to the requirements for large-scale device access and ultra-reliable and low-latency communication in the power Internet of Things, unmanned aerial vehicle-assisted multi-access edge computing can be used to realize flexible access to power services and update large amounts of information in a timely manner. By considering factors such as machine communication traffic, MAC contention access, and information freshness, this paper develops a cross-layer computing framework in which the peak Age of Information (AoI) provides a statistical delay bound in the finite blocklength regime. We also propose a deep machine learning-based multi-access edge computing offloading algorithm. First, a traffic arrival model is established in which the time interval follows the Beta distribution, and a business service model is then proposed based on the carrier sense multiple access with collision avoidance algorithm. The peak AoI bound of multiple access is evaluated according to stochastic network calculus theory. Finally, an unmanned aerial vehicle-assisted multi-level offloading model with caching is designed, in which the peak AoI violation probability and energy consumption serve as the optimization goals. The optimal offloading strategy is obtained using deep reinforcement learning. Compared with baseline schemes based on non-cooperative game theory with stochastic learning automata and random edge offloading, the proposed algorithm improves the overall performance by approximately 3.52% and 20.73%, respectively, and provides superior deterministic offloading performance by using the peak AoI bound.

  • research-article
    Xingquan Li, Hongxia Zheng, Chunlong He, Xiaowen Tian, Xin Lin

    This paper investigates Energy Harvesting Efficiency (EHE) maximization problems for Reconfigurable Intelligent Surface (RIS) aided Simultaneous Wireless Information and Power Transfer (SWIPT). We consider imperfect RIS-related channels and explore a robust beamforming design that maximizes the EHE of all energy receivers, under both linear and non-linear EH models, while respecting the maximum transmit power of the Access Point (AP) and the RIS phase shift constraints and maintaining a minimum signal-to-interference-plus-noise ratio for all information receivers. To solve these non-convex problems, the infinite constraints related to channel uncertainty are approximated using the S-procedure. With the introduction of slack variables, the transformed subproblems can be solved iteratively using an alternating algorithm. Simulation results demonstrate that the RIS is able to increase the system EHE.

  • research-article
    Guojun Chen, Kaixuan Xie, Wenqiang Luo, Yinfei Xu, Lun Xin, Tiecheng Song, Jing Hu

    Federated Learning (FL) is an emerging machine learning framework designed to preserve privacy. However, the continuous updating of model parameters over uplink channels with limited throughput leads to a huge communication overhead, which is a major challenge for FL. To address this issue, we propose an adaptive gradient quantization approach that enhances communication efficiency. Aiming to minimize the total communication cost, we consider both the correlation of gradients between local clients and the correlation of gradients between communication rounds, namely, the space and time dimensions. The compression strategy is based on rate-distortion theory, which allows us to find an optimal quantization strategy for the gradients. To further reduce the computational complexity, we introduce the Kalman filter into the proposed approach. Finally, numerical results demonstrate the effectiveness and robustness of the proposed rate-distortion-optimized adaptive gradient quantization approach in significantly reducing communication costs compared with other quantization methods.
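
    The rate-distortion and Kalman-filter machinery is beyond a snippet, but the uniform gradient quantizer that any such scheme builds on can be sketched as follows; here the bit-width is fixed, whereas the paper adapts it across clients and rounds.

    ```python
    import numpy as np

    def quantize(grad, bits):
        """Uniform b-bit quantization; returns codes plus (offset, scale) to decode."""
        levels = 2 ** bits - 1
        lo, hi = grad.min(), grad.max()
        scale = (hi - lo) / levels if hi > lo else 1.0
        codes = np.round((grad - lo) / scale).astype(np.int32)
        return codes, lo, scale

    def dequantize(codes, lo, scale):
        return codes * scale + lo

    rng = np.random.default_rng(0)
    g = rng.normal(size=1000)          # a stand-in local gradient vector
    codes, lo, scale = quantize(g, bits=4)
    err = np.abs(dequantize(codes, lo, scale) - g).mean()
    print(f"mean abs error at 4 bits: {err:.4f}")
    ```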

  • research-article
    Yuhong Xie, Yuan Zhang, Tao Lin, Zipeng Pan, Si-Ze Qian, Bo Jiang, Jinyao Yan

    Short video applications like TikTok have seen significant growth in recent years. One common behavior of users on these platforms is watching and swiping through videos, which can lead to a significant waste of bandwidth. As such, an important challenge in short video streaming is to design a preloading algorithm that can effectively decide which videos to download, at what bitrate, and when to pause the download, in order to reduce bandwidth waste while improving the Quality of Experience (QoE). However, designing such an algorithm is non-trivial, especially given the conflicting objectives of minimizing bandwidth waste and maximizing QoE. In this paper, we propose an end-to-end Deep reinforcement learning framework with Action Masking, called DAM, that leverages domain knowledge to learn an optimal policy for short video preloading. To achieve this, we introduce a reward shaping technique to minimize bandwidth waste, and use action masking to make actions more reasonable, reduce playback rebuffering, and accelerate the training process. We have conducted extensive experiments using real-world video datasets and network traces, including 4G, WiFi, and 5G. Our results show that DAM improves the QoE score by 3.73%-11.28% compared with state-of-the-art algorithms, and achieves an average bandwidth waste of only 10.27%-12.07%, outperforming all baseline methods.
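
    Action masking itself is a small mechanism: invalid actions get their logits pushed to minus infinity so the policy can never sample them. A minimal NumPy sketch follows; the mask semantics are an illustrative assumption, not the paper's exact design.

    ```python
    import numpy as np

    def masked_policy(logits, mask):
        """Softmax over logits with invalid actions masked out."""
        masked = np.where(mask, logits, -np.inf)
        z = masked - masked.max()      # numerical stability
        e = np.exp(z)                  # exp(-inf) = 0 removes masked actions
        return e / e.sum()

    logits = np.array([1.2, 0.3, -0.5, 2.0])
    # E.g., forbid preloading a chunk of a video the user has already swiped away.
    mask = np.array([True, True, False, True])
    print(masked_policy(logits, mask))  # zero probability on the masked action
    ```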

  • research-article
    Uchechukwu Awada, Jiankang Zhang, Sheng Chen, Shuangzhi Li, Shouyi Yang

    Recently, several edge deployment types, such as on-premise edge clusters, Unmanned Aerial Vehicle (UAV)-attached edge devices, and telecommunication base stations installed with edge clusters, are being deployed to enable faster response times for latency-sensitive tasks. One fundamental problem is where and how to offload and schedule multi-dependent tasks so as to minimize their collective execution time and achieve high resource utilization. Existing approaches naively dispatch tasks to available edge nodes at random, without considering the resource demands of tasks, inter-task dependencies, and edge resource availability. These approaches can result in longer waiting times for tasks due to insufficient resource availability or dependency support, as well as provider lock-in. Therefore, we present EdgeColla, which is based on the integration of edge resources running across multi-edge deployments. EdgeColla leverages learning techniques to intelligently dispatch multi-dependent tasks, and a variant bin-packing optimization method to co-locate these tasks firmly on available nodes and utilize them optimally. Extensive experiments on real-world task-dependency datasets from Alibaba show that our approach achieves better performance than the baseline schemes.
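
    The abstract does not detail EdgeColla's bin-packing variant; the classic first-fit-decreasing heuristic it presumably adapts looks like this (task demands and node capacity are hypothetical):

    ```python
    def first_fit_decreasing(tasks, capacity):
        """Pack task resource demands onto as few nodes as the heuristic allows."""
        nodes = []  # each node tracks remaining capacity and its co-located tasks
        for name, demand in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
            for node in nodes:
                if node["free"] >= demand:      # first node that still fits
                    node["free"] -= demand
                    node["tasks"].append(name)
                    break
            else:                               # no node fits: open a new one
                nodes.append({"free": capacity - demand, "tasks": [name]})
        return nodes

    # Hypothetical CPU demands (cores) and a per-node capacity of 4 cores.
    tasks = {"t1": 3.0, "t2": 1.5, "t3": 2.0, "t4": 0.5, "t5": 2.5}
    for i, n in enumerate(first_fit_decreasing(tasks, capacity=4.0)):
        print(f"node {i}: {n['tasks']} (free {n['free']})")
    ```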

  • research-article
    Qianwen Li, Xiang Wang, Qingqi Pei, Xiaohua Chen, Kwok-Yan Lam

    Database watermarking technologies provide an effective solution to data security problems by embedding a watermark in the database to prove copyright or trace the source of data leakage. However, when a watermarked database is used to build a data mining model, such as a decision tree, the distortion introduced by watermark embedding may cause a mining result different from that of the original database. Traditional watermarking algorithms mainly consider the statistical distortion of the data, such as the mean square error, and very few consider the effect of the watermark on database mining. Therefore, in this paper, a consistency-preserving database watermarking algorithm is proposed for decision trees. First, label classification statistics and label state transfer methods are proposed to adjust the watermarked data so that the model structure of the watermarked decision tree is the same as that of the original decision tree. Then, the splitting values of the decision tree are adjusted according to the defined constraint equations. Finally, the adjusted database yields a decision tree consistent with the original one. The experimental results demonstrate that the proposed algorithm does not corrupt the watermarks and keeps the watermarked decision tree consistent with the original decision tree at a small distortion cost.

  • research-article
    Miao Wu, Qinghua Zhang, Chengying Wu, Guoyin Wang

    Causality extraction has become a crucial task in natural language processing and knowledge graph construction. However, most existing methods divide causality extraction into two subtasks: extraction of candidate causal pairs and classification of causality. These methods suffer from cascading errors and the loss of associated contextual information. Therefore, in this study, based on graph theory, an End-to-end Multi-Granulation Causality Extraction model (EMGCE) is proposed to extract explicit causality and directly mine implicit causality. First, sentences are represented on different granulation layers, comprising character, word, and contextual string layers. The word layer is further fine-grained into three layers: word-index, word-embedding, and word-position-embedding layers. Then, a granular causality tree of the dataset is built based on the word-index layer. Next, an improved tagREtriplet algorithm is designed to obtain the labeled causality based on the granular causality tree, transforming the task into a sequence labeling task. Subsequently, the multi-granulation semantic representation is fed into a neural network model to extract causality. Finally, experimental results on the extended public SemEval 2010 Task 8 dataset demonstrate that EMGCE is effective.

  • research-article
    Dawei Wang, Xuanrui Li, Menghan Wu, Yixin He, Yi Lou, Yu Pang, Yi Lu

    In this paper, we study an Intelligent Reflecting Surface (IRS) assisted Mobile Edge Computing (MEC) system under eavesdropping threats, where the IRS is used to enhance energy signal transmission and the offloading performance between Wireless Devices (WDs) and the Access Point (AP). Specifically, in the proposed scheme, the AP first powers all WDs via wireless power transfer through both the direct and IRS-assisted links. Then, powered by the harvested energy, all WDs securely offload their computation tasks through the two links in time division multiple access mode. To determine the local and offloaded computational bits, we formulate an optimization problem that jointly designs the IRS's phase shifts and allocates the time slots under security and energy constraints. To cope with this non-convex optimization problem, we adopt semidefinite relaxation, singular value decomposition techniques, and the Lagrange dual method. Moreover, we propose a dichotomy particle swarm algorithm based on the bisection method to handle the overall optimization problem and improve the convergence speed. Numerical results illustrate that the proposed scheme can boost the MEC performance and secure computation rates compared with other IRS-assisted MEC benchmark schemes.
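
    The "dichotomy" in the proposed algorithm refers to bisection; as a self-contained reminder of that primitive (the target function below is a placeholder, not the paper's objective):

    ```python
    def bisection(f, lo, hi, tol=1e-8):
        """Find a root of f on [lo, hi], assuming f(lo) and f(hi) differ in sign."""
        assert f(lo) * f(hi) <= 0, "root must be bracketed"
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if f(lo) * f(mid) <= 0:   # root lies in the left half
                hi = mid
            else:                     # root lies in the right half
                lo = mid
        return (lo + hi) / 2.0

    print(bisection(lambda x: x**3 - 2.0, 1.0, 2.0))  # cube root of 2
    ```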

  • research-article
    Xiaoyan Hu, Meiqun Gui, Guang Cheng, Ruidong Li, Hua Wu

    Due to its anonymity and decentralization, Bitcoin has long been a haven for various illegal activities. Cybercriminals generally launder illicit funds through Bitcoin mixing services. Therefore, it is critical to investigate mixing services in cryptocurrency anti-money laundering. Existing studies treat different mixing services as a single class of suspicious Bitcoin entities, and they are limited by relying on expert experience or needing to process large-scale networks. So far, multi-class mixing service identification has not been explored. It is challenging because mixing services share a similar procedure and thus present no sharp distinctions. However, mixing service identification facilitates the healthy development of Bitcoin, supports financial forensics for cryptocurrency regulation and legislation, and provides technical means for fine-grained blockchain supervision. This paper aims to achieve multi-class Bitcoin Mixing Service Identification with a Graph Classification (BMSI-GC) model. First, BMSI-GC constructs 2-hop ego networks (2-egonets) of mixing services based on their historical transactions. Second, it applies graph2vec, a graph embedding model mainly used to calculate the similarity between graphs, to automatically extract address features from the constructed 2-egonets. Finally, it trains a multilayer perceptron classifier to perform classification based on the extracted features. BMSI-GC is flexible, requiring neither the full-size network nor handcrafted address features. Moreover, the differences in the transaction patterns of mixing services reflected in the 2-egonets provide adequate information for identification. Our experimental study demonstrates that BMSI-GC performs excellently in multi-class Bitcoin mixing service identification, achieving an average identification F1-score of 95.08%.

  • research-article
    Rongrong Zhang, Mengyu Li, Xinglong Li, Yong Guan, Cheng'an Zhao, Huan Qi, Changquan Qiu

    UAV-aided energy-harvesting Wireless Body Area Networks (WBANs), in which sensor nodes can harvest RF energy transmitted by a UAV, have attracted considerable attention. However, maximizing the sum-throughput of the sensor nodes under cross-tier interference from satellite and cellular systems has not been addressed. To bridge this gap, this paper develops two schemes with a fixed or variable duration of the energy-harvesting phase. Specifically, we formulate two non-convex optimization problems corresponding to the two schemes, transform them into tractable subproblems, and accordingly propose two algorithms, named HA-PC-TA and UAVPC-TA. In particular, we derive the closed-form optimum transmit power and time allocation for the sensor nodes and obtain the optimum UAV hovering altitude in HA-PC-TA. We reveal the relationship between the optimum parameters and the allowed maximum transmit power of the UAV in UAVPC-TA. Finally, simulations are conducted to validate the effectiveness of our proposed solutions.

  • research-article
    Jiangtao Liu, Kai Wu, Tao Su, J. Andrew Zhang

    Joint Radar and Communications (JRC) can implement two Radio Frequency (RF) functions using a single set of resources, providing significant hardware, power, and spectrum savings for wireless systems requiring both functions. Frequency-Hopping (FH) MIMO radar is a popular candidate for JRC because the achievable communication symbol rate can greatly exceed the radar pulse repetition frequency. However, practical transceiver imperfections can cause many existing theoretical designs to fail. In this work, we reveal for the first time the non-trivial impact of hardware imperfections on FH-MIMO JRC and model the impact analytically. We also design new waveforms and develop a corresponding low-complexity algorithm to jointly estimate the hardware imperfections of an unsynchronized receiver. In addition, using low-cost software-defined radios and Commercial Off-The-Shelf (COTS) products, we build the first FH-MIMO JRC experimental platform with simultaneous over-the-air radar and communication validation. Confirmed by both simulation and experimental results, the proposed designs achieve high performance for both radar and communications.