Communication efficiency optimization of federated learning for computing and network convergence of 6G networks

Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG

Front. Inform. Technol. Electron. Eng. ›› 2024, Vol. 25 ›› Issue (5): 713-727. DOI: 10.1631/FITEE.2300122

Abstract

Federated learning effectively addresses issues such as data privacy by collaboratively training global models across participating devices. However, factors such as network topology and the computing power of devices can affect its training or communication process in complex network environments. Computing and network convergence (CNC) of sixth-generation (6G) networks, a new network architecture and paradigm with computing-measurable, perceptible, distributable, dispatchable, and manageable capabilities, can effectively support federated learning training and improve its communication efficiency. CNC achieves this by guiding the training of participating devices in federated learning according to business requirements, resource load, network conditions, and the computing power of the devices. In this paper, to improve the communication efficiency of federated learning in complex networks, we study communication efficiency optimization methods of federated learning for CNC of 6G networks, which make decisions on the training process under different network conditions and computing power of participating devices. The simulations address the two architectures that exist for devices in federated learning, arranging devices to participate in training based on their computing power while optimizing communication efficiency during the transfer of model parameters. The results show that the proposed methods can cope well with complex network situations, effectively balance the delay distribution of participating devices for local training, improve the communication efficiency during the transfer of model parameters, and improve resource utilization in the network.
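
As a rough illustration of the device-selection idea described in the abstract, the following Python sketch estimates a per-device round latency from assumed computing power and uplink bandwidth, and picks the fastest participants for a synchronous aggregation round. The device metrics, cost model, and selection rule are illustrative assumptions for exposition only, not the paper's actual CNC-guided scheme.

```python
# Hypothetical sketch: choosing federated-learning participants by estimated
# round latency (local compute time + model upload time), under assumed
# device metrics. All names, constants, and formulas are illustrative.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    flops: float        # assumed available computing power (FLOP/s)
    bandwidth: float    # assumed uplink bandwidth to the aggregator (bit/s)
    samples: int        # number of local training samples

MODEL_SIZE_BITS = 8e6       # assumed size of the model parameters to upload
FLOPS_PER_SAMPLE = 5e7      # assumed training cost per sample per epoch

def round_latency(d: Device, local_epochs: int = 1) -> float:
    """Estimated time for one round: local training plus parameter upload."""
    compute_time = local_epochs * d.samples * FLOPS_PER_SAMPLE / d.flops
    upload_time = MODEL_SIZE_BITS / d.bandwidth
    return compute_time + upload_time

def select_participants(devices: list[Device], k: int) -> list[Device]:
    """Pick the k devices with the lowest estimated round latency, so that
    slow stragglers do not dominate the synchronous aggregation step."""
    return sorted(devices, key=round_latency)[:k]

if __name__ == "__main__":
    pool = [
        Device("edge-a", flops=2e9, bandwidth=1e7, samples=4000),
        Device("edge-b", flops=5e8, bandwidth=2e6, samples=6000),
        Device("edge-c", flops=1e9, bandwidth=5e6, samples=3000),
    ]
    for d in select_participants(pool, k=2):
        print(f"{d.name}: estimated round latency {round_latency(d):.2f} s")
```

In this toy setup, the selection rule trades off compute and communication delay in a single latency estimate; the paper's methods additionally account for business requirements, resource load, and network conditions as orchestrated by the CNC of 6G networks.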

Keywords

Computing and network convergence / Communication efficiency / Federated learning / Two architectures

Cite this article

Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG. Communication efficiency optimization of federated learning for computing and network convergence of 6G networks. Front. Inform. Technol. Electron. Eng., 2024, 25(5): 713-727. DOI: 10.1631/FITEE.2300122
