Personalized federated learning for semantic communication with collaborative fine-tuning
Maochuan Wu, Juan Li, Jing Xu, Bing Chen, Kun Zhu
2026, Vol. 12, Issue (2): 306-318.
Semantic Communication (SemCom) is a promising paradigm for future 6G networks, where communication performance hinges on the effectiveness of SemCom models, particularly the source-channel encoder and decoder. However, training these models faces significant challenges. First, the privacy-sensitive nature of communication data discourages users from uploading it to centralized servers. Second, heterogeneous local data distributions and the diverse communication counterparts of different users necessitate personalized SemCom models. Specifically, a user's encoder must align with its receivers' decoders and the transmitted data distribution, while its decoder must adapt to the user's transmitters and the received data distribution. To address these challenges, we propose FineFed, a personalized federated learning method with collaborative fine-tuning. Initially, a unified global model is trained in a distributed manner via federated learning, eliminating raw-data uploads. Subsequently, users iteratively fine-tune encoders and decoders collaboratively, achieving SemCom model personalization. For encoder fine-tuning, decoders are fixed and shared with transmitters to address the distributed loss-calculation issue; each encoder is then fine-tuned in a multi-task fashion, treating communication with each receiver as a separate task. For decoder fine-tuning, encoders are fixed and a user shares its decoder with its own transmitters, which collaboratively fine-tune it using the idea of federated multi-task learning. Experimental results demonstrate that FineFed improves the average performance of federated SemCom models by 1%-7%, bringing it closer to the performance of centrally trained models.
Semantic communication / Federated learning / Fine-tuning / Personalized federated learning
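The two-phase procedure described in the abstract can be sketched in a toy form. In this hypothetical illustration (not the paper's implementation), each user's encoder and decoder are single scalars, a "well-matched" link means the decoder inverts the encoder, and the loss on a link is (d·e − 1)². Phase 1 is a FedAvg-style average; phase 2 alternates the abstract's two steps: decoders fixed while each encoder takes one multi-task gradient step over its receivers, then encoders fixed while each decoder averages gradients from its transmitters. All names, the scalar model, the loss, and the learning rate are assumptions for illustration only.

```python
import numpy as np

def fedavg(params):
    """Phase 1 stand-in: aggregate local parameters into one global value."""
    return float(np.mean(params))

def finefed_fine_tune(encoders, decoders, links, rounds=200, lr=0.1):
    """Phase 2 sketch: alternate encoder and decoder fine-tuning.

    links: (tx, rx) user pairs; user tx's encoder must match user rx's
    decoder. Toy per-link loss: (d_rx * e_tx - 1)^2.
    """
    e, d = dict(encoders), dict(decoders)
    for _ in range(rounds):
        # Encoder step: decoders fixed and shared with transmitters;
        # one task per receiver (multi-task learning).
        for u in e:
            rxs = [rx for tx, rx in links if tx == u]
            if rxs:
                grad = np.mean([2 * d[rx] * (d[rx] * e[u] - 1) for rx in rxs])
                e[u] -= lr * grad
        # Decoder step: encoders fixed; the user's transmitters jointly
        # tune its decoder (federated multi-task learning: averaged grads).
        for u in d:
            txs = [tx for tx, rx in links if rx == u]
            if txs:
                grad = np.mean([2 * e[tx] * (d[u] * e[tx] - 1) for tx in txs])
                d[u] -= lr * grad
    return e, d

# Usage: three users start from the same FedAvg'd global value, then
# personalize to their own communication partners.
g = fedavg([0.8, 1.1, 1.3])
enc = {u: g for u in "ABC"}
dec = {u: g for u in "ABC"}
enc, dec = finefed_fine_tune(enc, dec,
                             links=[("A", "B"), ("C", "B"), ("B", "A")])
```

After fine-tuning, each linked encoder/decoder pair satisfies d·e ≈ 1 even though all users started from the identical global value, mirroring how FineFed personalizes a shared federated model to each user's actual counterparts.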