May 2024, Volume 25 Issue 5

  • Editorial
    Xiaoyun WANG, Tao SUN, Yong CUI, Rajkumar BUYYA, Deke GUO, Qun HUANG, Hassnaa MOUSTAFA, Chen TIAN, Shangguang WANG
  • Perspective
    Xiaoyun WANG, Xiaodong DUAN, Kehan YAO, Tao SUN, Peng LIU, Hongwei YANG, Zhiqiang LI
  • Review
    Kang YAN, Nina SHU, Tao WU, Chunsheng LIU, Panlong YANG

    With the booming development of fifth-generation network technology and the Internet of Things, the number of end-user devices (EDs) and diverse applications is surging, resulting in massive data generated at the edge of networks. To process these data efficiently, the mobile edge computing (MEC) framework has emerged to guarantee low latency and enable efficient computing close to the sources of user traffic. Recently, federated learning (FL) has demonstrated empirical success in edge computing due to its privacy-preserving advantages, making it a promising solution for analyzing and processing distributed data on EDs in various machine learning tasks, which are the major workloads in MEC. Unfortunately, EDs are typically powered by batteries with limited capacity, which poses challenges for performing energy-intensive FL tasks. To address these challenges, many strategies have been proposed to save energy in FL. Given the absence of a survey that thoroughly summarizes and classifies these strategies, in this paper we provide a comprehensive survey of recent advances in energy-efficient strategies for FL in MEC. Specifically, we first introduce the system model and the energy consumption models in FL, in terms of computation and communication. Then we analyze the challenges of improving energy efficiency and summarize the energy-efficient strategies from three perspectives: learning-based, resource allocation, and client selection. We conduct a detailed analysis of these strategies, comparing their advantages and disadvantages. Additionally, we visually illustrate the impact of these strategies on FL performance by showcasing experimental results. Finally, several potential future research directions for energy-efficient FL are discussed.
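    The computation and communication energy models this survey refers to are commonly written as a CMOS dynamic-power term for local training plus a Shannon-rate term for model upload. A minimal sketch, with purely illustrative constants (the capacitance coefficient, cycle counts, and link parameters below are assumptions, not values from the paper):

    ```python
    import math

    def computation_energy(kappa, cycles_per_sample, num_samples, cpu_freq_hz):
        """Local-training energy for one round: E = kappa * C * D * f^2,
        the common CMOS effective-capacitance model."""
        return kappa * cycles_per_sample * num_samples * cpu_freq_hz ** 2

    def communication_energy(tx_power_w, model_bits, bandwidth_hz, snr_linear):
        """Upload energy: transmit power times the Shannon-rate transfer time."""
        rate_bps = bandwidth_hz * math.log2(1.0 + snr_linear)
        return tx_power_w * (model_bits / rate_bps)

    # Illustrative numbers for one FL round on one battery-powered device
    e_cmp = computation_energy(kappa=1e-28, cycles_per_sample=2e4,
                               num_samples=500, cpu_freq_hz=1e9)
    e_com = communication_energy(tx_power_w=0.2, model_bits=1e6 * 32,
                                 bandwidth_hz=1e6, snr_linear=100.0)
    print(e_cmp, e_com, e_cmp + e_com)
    ```

    With these numbers the upload dominates the round's energy budget, which is why the surveyed strategies target both CPU frequency scaling (the quadratic term) and communication reduction.
    
    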

  • Xiaojun BAI, Yang ZHANG, Haixing WU, Yuting WANG, Shunfu JIN

    How to collaboratively offload tasks among user devices, edge networks (ENs), and cloud data centers is an interesting and challenging research topic. In this paper, we investigate the offloading decision, analytical modeling, and system parameter optimization problem in a collaborative cloud–edge–device environment, aiming to trade off different performance measures. According to their differentiated delay requirements, we classify tasks into delay-sensitive and delay-tolerant tasks. To meet the delay requirements of delay-sensitive tasks while processing as many delay-tolerant tasks as possible, we propose a cloud–edge–device collaborative task offloading scheme, in which delay-sensitive and delay-tolerant tasks follow an access threshold policy and a loss policy, respectively. We establish a four-dimensional continuous-time Markov chain as the system model and derive its stationary probability distribution using the Gauss–Seidel method. Accordingly, we present the blocking rate of delay-sensitive tasks and the average delay of both types of tasks. Numerical experiments are conducted to analyze the system performance, and simulations are presented to validate the effectiveness of the proposed task offloading scheme. Finally, we optimize the access threshold in the EN buffer to obtain the minimum system cost under different proportions of delay-sensitive tasks.
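    The Gauss–Seidel step in this abstract solves the global balance equations πQ = 0 with Σπ = 1 by sweeping over states in place. A minimal sketch on a toy three-state birth–death chain (an M/M/1/2 queue, standing in for the paper's four-dimensional chain):

    ```python
    import numpy as np

    def ctmc_stationary_gauss_seidel(Q, tol=1e-12, max_iter=10_000):
        """Solve pi @ Q = 0, sum(pi) = 1 by Gauss-Seidel sweeps.

        Each sweep updates pi[i] from the balance equation of column i
        (flow into state i equals flow out), then renormalizes.
        """
        n = Q.shape[0]
        pi = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            old = pi.copy()
            for i in range(n):
                # Balance for state i: sum_j pi[j] * Q[j, i] = 0
                flow_in = sum(pi[j] * Q[j, i] for j in range(n) if j != i)
                pi[i] = -flow_in / Q[i, i]
            pi /= pi.sum()
            if np.max(np.abs(pi - old)) < tol:
                break
        return pi

    # Birth-death generator: arrival rate 1, service rate 2, buffer of size 2
    Q = np.array([[-1.0,  1.0,  0.0],
                  [ 2.0, -3.0,  1.0],
                  [ 0.0,  2.0, -2.0]])
    pi = ctmc_stationary_gauss_seidel(Q)
    print(pi)  # analytic answer for this chain: (4/7, 2/7, 1/7)
    ```

    The same sweep applies unchanged to larger state spaces; Gauss–Seidel is preferred over direct solvers here because the four-dimensional chain's generator is large and sparse.
    
    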

  • Yuexia FU, Jing WANG, Lu LU, Qinqin TANG, Sheng ZHANG

    Under the development of computing and network convergence, considering the computing and network resources of multiple providers as a whole in a computing force network (CFN) has gradually become a new trend. However, since each computing and network resource provider (CNRP) considers only its own interest and competes with other CNRPs, introducing multiple CNRPs will result in a lack of trust and difficulty in unified scheduling. In addition, concurrent users have different requirements, so there is an urgent need to study how to optimally match users and CNRPs on a many-to-many basis, to improve user satisfaction and ensure the utilization of limited resources. In this paper, we adopt a reputation model based on the beta distribution function to measure the credibility of CNRPs and propose a performance-based reputation update model. Then, we formalize the problem as a constrained multi-objective optimization problem and find feasible solutions using a modified fast and elitist non-dominated sorting genetic algorithm (NSGA-II). We conduct extensive simulations to evaluate the proposed algorithm. Simulation results demonstrate that the proposed model and problem formulation are valid, and that NSGA-II is effective and can find the Pareto set of the CFN, which increases user satisfaction and resource utilization. Moreover, the solutions in the Pareto set give us more choices for the many-to-many matching of users and CNRPs according to the actual situation.
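    The beta-distribution reputation idea the abstract mentions is conventionally built on the posterior mean E[Beta(a, b)] = a / (a + b) over observed successes and failures. A generic sketch of such an update (the forgetting factor and the class interface are illustrative assumptions, not the paper's exact performance-based rule):

    ```python
    class BetaReputation:
        """Beta-reputation score for one provider: E[Beta(a, b)] = a / (a + b).

        Starts from the uninformative prior Beta(1, 1) (score 0.5) and decays
        old evidence with a forgetting factor so recent behaviour dominates.
        """
        def __init__(self, forgetting=0.95):
            self.successes = 0.0
            self.failures = 0.0
            self.forgetting = forgetting

        def update(self, success: bool):
            # Discount history, then count the new interaction outcome
            self.successes *= self.forgetting
            self.failures *= self.forgetting
            if success:
                self.successes += 1.0
            else:
                self.failures += 1.0

        @property
        def score(self) -> float:
            # Posterior mean with the Beta(1, 1) prior folded in
            return (self.successes + 1.0) / (self.successes + self.failures + 2.0)

    rep = BetaReputation()
    for outcome in [True, True, True, False, True]:
        rep.update(outcome)
    print(round(rep.score, 3))
    ```

    A score like this can then enter the matching problem as one objective or constraint alongside price and latency, which is what makes the overall formulation multi-objective.
    
    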

  • Xueying HAN, Mingxi XIE, Ke YU, Xiaohong HUANG, Zongpeng DU, Huijuan YAO

    Fueled by the explosive growth of ultra-low-latency and real-time applications with specific computing and network performance requirements, the computing force network (CFN) has become a hot research subject. The primary CFN challenge is to jointly leverage network and computing resources. Although recent advances in deep reinforcement learning (DRL) have brought significant improvement in network optimization, these methods still suffer under topology changes and fail to generalize to topologies not seen in training. This paper proposes a graph neural network (GNN) based DRL framework to accommodate network traffic and computing resources jointly and efficiently. By taking advantage of the generalization capability of GNNs, the proposed method can operate over variable topologies and obtain higher performance than other DRL methods.
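    The topology generalization the abstract relies on comes from the fact that a message-passing layer shares its weights across all nodes, so the same trained layer runs on any graph. A minimal NumPy sketch of one mean-aggregation layer (the layer form and dimensions are illustrative assumptions, not the paper's architecture):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def message_passing_layer(H, adj, W_self, W_neigh):
        """One GNN layer: h_i' = relu(W_self h_i + W_neigh * mean of neighbor h_j).

        W_self and W_neigh are shared across nodes, so the layer applies
        unchanged to graphs of any size or shape.
        """
        deg = adj.sum(axis=1, keepdims=True).clip(min=1)
        neigh_mean = (adj @ H) / deg          # mean over each node's neighbors
        return np.maximum(0.0, H @ W_self + neigh_mean @ W_neigh)

    d = 4
    W_self = rng.normal(size=(d, d))
    W_neigh = rng.normal(size=(d, d))

    # The SAME weights run on a 3-node ring and a 5-node star topology.
    ring = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
    star = np.zeros((5, 5)); star[0, 1:] = star[1:, 0] = 1.0

    out_ring = message_passing_layer(rng.normal(size=(3, d)), ring, W_self, W_neigh)
    out_star = message_passing_layer(rng.normal(size=(5, d)), star, W_self, W_neigh)
    print(out_ring.shape, out_star.shape)
    ```

    A DRL agent whose policy head reads these per-node embeddings inherits the same property: it never hard-codes a node count, which is what lets it act on topologies unseen in training.
    
    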

  • Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG

    Federated learning effectively addresses issues such as data privacy by collaborating across participating devices to train global models. However, factors such as network topology and device computing power can affect its training or communication process in complex network environments. Computing and network convergence (CNC) of sixth-generation (6G) networks, a new network architecture and paradigm with computing-measurable, perceptible, distributable, dispatchable, and manageable capabilities, can effectively support federated learning training and improve its communication efficiency by steering the participating devices' training according to business requirements, resource load, network conditions, and the computing power of the devices. In this paper, to improve the communication efficiency of federated learning in complex networks, we study communication efficiency optimization methods for federated learning in the CNC of 6G networks, which make decisions on the training process under different network conditions and device computing power. The simulations address two architectures that exist for devices in federated learning and arrange devices to participate in training based on computing power, while optimizing communication efficiency in the process of transferring model parameters. The results show that the proposed methods can cope well with complex network situations, effectively balance the delay distribution of participating devices for local training, improve the communication efficiency during the transfer of model parameters, and improve resource utilization in the network.

  • Zhaohui WANG, Hongjiao LI, Jinguo LI, Renhao HU, Baojin WANG

    Federated learning (FL), a cutting-edge distributed machine learning training paradigm, aims to generate a global model by collaborating on the training of client models without revealing local private data. The co-occurrence of non-independent and identically distributed (non-IID) data and long-tailed distributions in FL is one challenge that substantially degrades aggregate performance. In this paper, we present a corresponding solution called federated dual-decoupling via model and logit calibration (FedDDC) for non-IID and long-tailed distributions. The method is characterized by three aspects. First, we decouple the global model into the feature extractor and the classifier to fine-tune the components affected by the joint problem. For the biased feature extractor, we propose a client confidence re-weighting scheme to assist calibration, which assigns optimal weights to each client. For the biased classifier, we apply a classifier re-balancing method for fine-tuning. Then, we calibrate and integrate the client confidence re-weighted logits with the re-balanced logits to obtain unbiased logits. Finally, we are the first to apply decoupled knowledge distillation to the joint problem, enhancing the accuracy of the global model by distilling the knowledge of the unbiased model. Numerous experiments demonstrate that our approach outperforms state-of-the-art methods on non-IID and long-tailed data in FL.
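    The classifier re-balancing step can be illustrated with post-hoc logit adjustment, a standard long-tail calibration: subtracting a multiple of the log class prior penalizes head classes that the biased classifier over-scores. This is a generic sketch of that idea, not FedDDC's exact re-balancing rule:

    ```python
    import numpy as np

    def rebalance_logits(logits, class_counts, tau=1.0):
        """Post-hoc logit adjustment for long-tailed data: logits - tau*log(prior).

        Head classes (large prior) receive a larger penalty than tail
        classes, shifting predictions toward under-represented classes.
        """
        prior = np.asarray(class_counts, dtype=float)
        prior /= prior.sum()
        return logits - tau * np.log(prior)

    # A classifier trained on 900/90/10 samples slightly favors class 0...
    counts = [900, 90, 10]
    raw = np.array([2.0, 1.8, 1.7])          # biased logits for one example
    adj = rebalance_logits(raw, counts)
    # ...after adjustment the tail class wins despite its lower raw logit.
    print(raw.argmax(), adj.argmax())
    ```

    In the paper's pipeline, logits calibrated in this spirit are then fused with the confidence re-weighted logits before the distillation step.
    
    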

  • Zhenkai ZHANG, Xiaoke SHANG, Yue XIAO

    Orthogonal time–frequency space (OTFS) is a modulation technique proposed in recent years for high-Doppler wireless scenarios. To solve the parameter estimation problem of the OTFS-integrated radar and communications system, we propose a parameter estimation method based on sparse reconstruction preprocessing to reduce the computational effort of the traditional weighted subspace fitting (WSF) algorithm. First, an OTFS-integrated echo signal model is constructed. Then, the echo signal is transformed into the time domain to separate the target angle from the range, and the range and angle of the detected target are coarsely estimated using a sparse reconstruction algorithm. Finally, the WSF algorithm is used to refine the search around the coarse estimate to obtain an accurate estimate. Simulations demonstrate the effectiveness and superiority of the proposed parameter estimation algorithm.
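    The coarse-then-refine strategy in this abstract can be sketched generically: a cheap scan over a coarse grid locates the peak, and a fine scan around it recovers the parameter precisely. Here a simple correlation scan over frequency stands in for the WSF cost function, so this is an illustration of the search structure only, not of WSF itself:

    ```python
    import numpy as np

    def coarse_to_fine_freq(signal, fs, coarse_step=1.0, fine_step=0.01, span=2.0):
        """Two-stage estimate: coarse grid scan, then a fine scan centered on
        the coarse peak, avoiding a fine scan over the whole parameter range."""
        t = np.arange(len(signal)) / fs

        def best(freqs):
            # Correlation magnitude against a complex exponential per trial freq
            scores = [np.abs(signal @ np.exp(-2j * np.pi * f * t)) for f in freqs]
            return freqs[int(np.argmax(scores))]

        coarse = best(np.arange(0.0, fs / 2, coarse_step))
        return best(np.arange(coarse - span, coarse + span, fine_step))

    # Synthetic single-tone "echo" whose frequency plays the role of the
    # unknown parameter to be estimated
    fs, f_true = 100.0, 17.25
    t = np.arange(256) / fs
    x = np.exp(2j * np.pi * f_true * t)
    print(coarse_to_fine_freq(x, fs))
    ```

    The saving is the same as in the paper: the expensive fine evaluation runs only over a small neighborhood of the sparse-reconstruction estimate instead of the full range-angle grid.
    
    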

  • Correspondence
    Yuda DONG, Zetao CHEN, Xin HE, Lijun LI, Zichao SHU, Yinong CAO, Junchi FENG, Shijie LIU, Chunlai LI, Jianyu WANG