FedMcon: an adaptive aggregation method for federated learning via meta controller
Tao SHEN, Zexi LI, Ziyu ZHAO, Didi ZHU, Zheqi LV, Kun KUANG, Shengyu ZHANG, Chao WU, Fei WU
Front. Inform. Technol. Electron. Eng., 2025, Vol. 26, Issue 8: 1378-1393.
Federated learning (FL) has emerged as a machine learning setting that enables collaborative training of deep models on decentralized clients under privacy constraints. In the vanilla federated averaging algorithm (FedAvg), the global model is generated by a weighted linear combination of local models, with weights proportional to the local data sizes. This methodology, however, struggles with heterogeneous and unknown client data distributions, often producing a global model that diverges from the intended global objective. Linear combination-based aggregation fails to adapt to the varied dynamics of the diverse scenarios, settings, and data distributions inherent in FL, hindering convergence and compromising generalization. In this paper, we present a new aggregation method, FedMcon, within a meta-learning framework for FL. We introduce a learnable controller, trained on a small proxy dataset, that serves as an aggregator: it learns to adaptively combine heterogeneous local models into a better global model toward the desired objective. The experimental results indicate that the proposed method is effective on extremely non-independent and identically distributed data, reaching up to a 19× communication speedup in a single FL setting.
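To make the contrast concrete, the sketch below shows vanilla FedAvg aggregation (weights proportional to local data sizes, as described in the abstract) next to a minimal learned-weight variant. The `adaptive_aggregate` function and its softmax-over-scores form are illustrative assumptions, not the paper's actual controller architecture; models are flattened to NumPy parameter vectors for simplicity.

```python
import numpy as np

def fedavg_aggregate(local_models, data_sizes):
    """Vanilla FedAvg: linear combination with weights
    proportional to each client's local data size."""
    weights = np.array(data_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))

def adaptive_aggregate(local_models, scores):
    """Hypothetical adaptive aggregation: a controller emits one
    score per client (e.g., from model statistics or proxy-data
    loss), converted to aggregation weights via a softmax.
    This is a sketch of the idea, not FedMcon's architecture."""
    exp = np.exp(scores - np.max(scores))
    weights = exp / exp.sum()
    return sum(w * m for w, m in zip(weights, local_models))

# Two clients with flattened parameter vectors.
models = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
print(fedavg_aggregate(models, [100, 300]))        # → [2.5 3.5]
print(adaptive_aggregate(models, np.zeros(2)))     # equal scores → plain mean
```

In the full method, the controller's scores would themselves be trained (via the small proxy dataset) so that the aggregated model minimizes the desired global objective, rather than being fixed by data-size heuristics.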
Federated learning / Meta-learning / Adaptive aggregation
Zhejiang University Press