IoV environment exploring coordination: A federated learning approach

Tong Ding , Lei Liu , Yi Zhu , Lizhen Cui , Zhongmin Yan

2024, Vol. 10, Issue 1: 135-141. DOI: 10.1016/j.dcan.2022.07.006
Special issue on federated deep learning empowered Internet of Vehicles
Research article


Abstract

Exploring open fields with coordinated unmanned vehicles is a popular topic in both academia and industry. One of the most promising enabling paradigms is the Internet of Vehicles (IoV), which connects vehicles, road infrastructure, and communication facilities to support exploration tasks. However, coordinating the acquisition of information from multiple vehicles may put data privacy at risk. Sharing high-quality experiences instead of raw data has therefore become an urgent demand. This paper employs a Deep Reinforcement Learning (DRL) method that enables IoV agents to generate training data with prioritized experiences and states, which supports the IoV in exploring the environment more efficiently. Moreover, a Federated Learning (FL) experience-sharing model is established to guarantee the vehicles' privacy. Numerical results show that the proposed method achieves a higher successful sharing rate and more stable convergence than baseline methods. The experiments also suggest that the proposed method can support agents without full information in accomplishing their tasks.
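The two ingredients the abstract names, prioritized experience selection on the DRL side and privacy-preserving aggregation on the FL side, can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`sample_prioritized`, `fedavg`), the priority exponent, and the toy parameter vectors are all hypothetical, assuming Schaul-style proportional prioritization and FedAvg-style weighted averaging of model parameters rather than raw-data exchange.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prioritized(td_errors, k, alpha=0.6, eps=1e-3):
    """Sample k transition indices with probability proportional to
    |TD error|^alpha (proportional prioritized experience replay)."""
    p = (np.abs(td_errors) + eps) ** alpha
    p /= p.sum()
    return rng.choice(len(td_errors), size=k, replace=False, p=p)

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average client model parameters,
    weighted by each client's local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy example: three vehicles upload locally trained parameter vectors
# (not raw sensor data); the aggregator computes the global model.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
print(fedavg(weights, sizes))  # [3.5 4.5]
```

The privacy argument rests on the fact that only the parameter vectors (or experience summaries) cross the network; each vehicle's raw observations never leave the client.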

Keywords

Internet of Vehicles / Deep reinforcement learning / Federated learning / Data privacy

Cite this article

Tong Ding, Lei Liu, Yi Zhu, Lizhen Cui, Zhongmin Yan. IoV environment exploring coordination: A federated learning approach. Digital Communications and Networks, 2024, 10(1): 135-141. DOI: 10.1016/j.dcan.2022.07.006


