Distributed unsupervised meta-learning algorithm over multi-agent systems

Zhenzhen Wang , Bing He , Zixin Jiang , Xianyang Zhang , Haidi Dong , Di Ye

2026, Vol. 12, Issue (1): 134-142. DOI: 10.1016/j.dcan.2024.08.006
Special issue on cyber-physical systems for intelligent transportation and smart cities

Abstract

Multi-Agent Systems (MAS), which consist of multiple interacting agents, are crucial in Cyber-Physical Systems (CPS) because they improve system adaptability, efficiency, and robustness through parallel processing and collaboration. However, most existing unsupervised meta-learning methods are centralized and therefore unsuitable for multi-agent systems, where data are stored in a distributed manner and are not accessible to every agent. Meta-GMVAE, built on the Variational Autoencoder (VAE) and set-level variational inference, is a sophisticated unsupervised meta-learning model that improves generative performance by efficiently learning data representations across tasks, increasing adaptability and reducing sample requirements. Inspired by these advances, we propose a novel Distributed Unsupervised Meta-Learning (DUML) framework based on Meta-GMVAE and a fusion strategy. Furthermore, we present a DUML algorithm based on the Gaussian Mixture Model (DUMLGMM), in which the parameters of the Gaussian mixture are estimated by an Expectation-Maximization (EM) algorithm. Simulations on the Omniglot and MiniImageNet datasets show that DUMLGMM matches the performance of the corresponding centralized algorithm and outperforms the non-cooperative algorithm.
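The algorithm itself is not reproduced on this page. Purely as an illustrative sketch (not the authors' DUMLGMM, which operates on VAE latent representations), the two ingredients the abstract names can be pictured as: EM estimation of Gaussian-mixture parameters on each agent's local data, followed by a fusion (combine) step in which agents average parameters with their neighbors. The function names `em_gmm` and `diffusion_combine` and the spherical-covariance simplification are assumptions of this sketch:

```python
import numpy as np

def em_gmm(X, K, iters=50, seed=0):
    """Fit a K-component Gaussian mixture to X of shape (n, d) via EM.

    Illustrative sketch only: spherical covariances, random point
    initialization, no regularization against component collapse.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, K, replace=False)]   # component means (K, d)
    var = np.full(K, X.var())                 # spherical variances (K,)
    pi = np.full(K, 1.0 / K)                  # mixing weights (K,)
    for _ in range(iters):
        # E-step: responsibilities r[i, k] ∝ pi_k * N(x_i | mu_k, var_k I)
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (n, K)
        logp = np.log(pi) - 0.5 * d2 / var - 0.5 * d * np.log(var)
        logp -= logp.max(axis=1, keepdims=True)   # stabilize before exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibilities
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        var = (r * d2).sum(axis=0) / (d * nk)
    return pi, mu, var

def diffusion_combine(local_params, weights):
    """Fusion-step sketch: agent i replaces its parameters with a
    convex combination of its neighbors', weighted by weights[i]."""
    return [sum(w * p for w, p in zip(row, local_params)) for row in weights]
```

In a diffusion-style strategy, each agent would alternate a local adaptation step (here, EM on its own samples) with the combine step over its neighborhood, so that all agents converge toward a common model without any of them seeing the full dataset.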

Keywords

Unsupervised meta-learning / Multi-agent systems / Variational autoencoder / Gaussian mixture model

Cite this article

Zhenzhen Wang, Bing He, Zixin Jiang, Xianyang Zhang, Haidi Dong, Di Ye. Distributed unsupervised meta-learning algorithm over multi-agent systems. 2026, 12(1): 134-142. DOI: 10.1016/j.dcan.2024.08.006


CRediT authorship contribution statement

Zhenzhen Wang: Writing-review & editing, Writing-original draft, Methodology, Data curation. Bing He: Supervision, Conceptualization. Zixin Jiang: Visualization, Validation. Xianyang Zhang: Writing-original draft, Software. Haidi Dong: Resources, Formal analysis. Di Ye: Visualization, Resources.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was supported by the National Natural Science Foundation of China Youth Fund (No. 62101579).

