Communication efficiency optimization of federated learning for computing and network convergence of 6G networks
Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG
Federated learning effectively addresses issues such as data privacy by collaborating across participating devices to train global models. However, in complex network environments, factors such as network topology and the computing power of devices can affect its training or communication process. Computing and network convergence (CNC) of sixth-generation (6G) networks, a new network architecture and paradigm with computing-measurable, perceptible, distributable, dispatchable, and manageable capabilities, can effectively support federated learning training and improve its communication efficiency. CNC can achieve this goal by guiding the training of participating devices in federated learning based on business requirements, resource load, network conditions, and the computing power of devices. In this paper, to improve the communication efficiency of federated learning in complex networks, we study communication efficiency optimization methods of federated learning for CNC of 6G networks, which make decisions on the training process according to the network conditions and computing power of the participating devices. The simulations address two architectures that exist for devices in federated learning and arrange devices to participate in training based on computing power, while optimizing communication efficiency in the process of transferring model parameters. The results show that the proposed methods cope well with complex network situations, effectively balance the delay distribution of participating devices for local training, improve communication efficiency during the transfer of model parameters, and improve resource utilization in the network.
Keywords: Computing and network convergence / Communication efficiency / Federated learning / Two architectures
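The abstract describes arranging devices to participate in training according to their computing power and network conditions so that per-round local-training delays stay balanced. The paper does not give an algorithm at this point, but the idea can be illustrated with a minimal sketch; the delay model (computation time plus upload time), the device attributes `flops` and `bandwidth`, and the delay budget are all assumptions for illustration, not the authors' method.

```python
def estimated_delay(device, workload, model_size):
    # Assumed round-delay model: local computation time (FLOPs needed
    # divided by device speed) plus model-parameter upload time.
    return workload / device["flops"] + model_size / device["bandwidth"]

def select_participants(devices, workload, model_size, delay_budget):
    # Keep only devices whose estimated round delay fits the budget,
    # so the selected cohort finishes local training at similar times.
    return [d for d in devices
            if estimated_delay(d, workload, model_size) <= delay_budget]

# Example: two candidate devices with different compute and link speeds.
devices = [
    {"name": "edge-A", "flops": 10.0, "bandwidth": 5.0},
    {"name": "edge-B", "flops": 4.0, "bandwidth": 5.0},
]
chosen = select_participants(devices, workload=100.0, model_size=10.0,
                             delay_budget=15.0)
print([d["name"] for d in chosen])
```

Under this toy model, edge-A needs 100/10 + 10/5 = 12 time units and is selected, while edge-B needs 27 and is deferred, illustrating how a CNC-style scheduler could keep a round's delay distribution tight.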