Federated learning on non-IID and long-tailed data via dual-decoupling
Zhaohui WANG, Hongjiao LI, Jinguo LI, Renhao HU, Baojin WANG
Federated learning (FL), a cutting-edge distributed machine learning training paradigm, aims to generate a global model by collaboratively training client models without revealing local private data. The co-occurrence of non-independent and identically distributed (non-IID) and long-tailed data in FL is a challenge that substantially degrades the performance of the aggregated global model. In this paper, we present federated dual-decoupling via model and logit calibration (FedDDC) to address non-IID and long-tailed distributions. Our approach is characterized by three aspects. First, we decouple the global model into the feature extractor and the classifier so that the components affected by the joint problem can be fine-tuned separately. For the biased feature extractor, we propose a client confidence re-weighting scheme that assigns an optimal weight to each client to assist calibration. For the biased classifier, we apply a classifier re-balancing method for fine-tuning. Then, we calibrate and integrate the client confidence re-weighted logits with the re-balanced logits to obtain unbiased logits. Finally, we apply decoupled knowledge distillation, for the first time in this joint problem, to improve the accuracy of the global model by distilling the knowledge of the unbiased model. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods on non-IID and long-tailed data in FL.
Federated learning / Non-IID / Long-tailed data / Decoupling learning / Knowledge distillation
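The final step of FedDDC builds on decoupled knowledge distillation (DKD), which splits the classical KD loss into a target-class term (TCKD) and a non-target-class term (NCKD) that can be weighted independently. The following is a minimal PyTorch sketch of a DKD loss, not the authors' implementation: the function name dkd_loss and the hyperparameters alpha, beta, and temperature T are illustrative, and the FedDDC-specific inputs (e.g., the client confidence re-weighted teacher logits) are assumed to be supplied by the caller.

import torch
import torch.nn.functional as F

def dkd_loss(student_logits, teacher_logits, target, alpha=1.0, beta=8.0, T=4.0):
    """Decoupled knowledge distillation: the KD loss is split into a
    target-class term (TCKD) and a non-target-class term (NCKD).
    Hyperparameters alpha, beta, and T are illustrative defaults."""
    gt_mask = F.one_hot(target, student_logits.size(-1)).bool()

    p_s = F.softmax(student_logits / T, dim=-1)
    p_t = F.softmax(teacher_logits / T, dim=-1)

    # TCKD: KL divergence between binary (target vs. non-target) distributions.
    p_s_bin = torch.stack([(p_s * gt_mask).sum(-1), (p_s * ~gt_mask).sum(-1)], dim=-1)
    p_t_bin = torch.stack([(p_t * gt_mask).sum(-1), (p_t * ~gt_mask).sum(-1)], dim=-1)
    tckd = F.kl_div(p_s_bin.log(), p_t_bin, reduction="batchmean") * T**2

    # NCKD: KL divergence over non-target classes only; the target class
    # is masked out with a large negative logit before the softmax.
    log_p_s_nt = F.log_softmax(student_logits / T - 1000.0 * gt_mask, dim=-1)
    p_t_nt = F.softmax(teacher_logits / T - 1000.0 * gt_mask, dim=-1)
    nckd = F.kl_div(log_p_s_nt, p_t_nt, reduction="batchmean") * T**2

    return alpha * tckd + beta * nckd

Weighting NCKD separately (beta) is what lets the distillation emphasize knowledge about tail classes that the plain KD loss would suppress, which is why a decoupled formulation suits the long-tailed setting described above.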