Multi-user reinforcement learning based task migration in mobile edge computing

Yuya CUI, Degan ZHANG, Jie ZHANG, Ting ZHANG, Lixiang CAO, Lu CHEN

Front. Comput. Sci., 2024, 18(4): 184504. DOI: 10.1007/s11704-023-1346-3
Networks and Communication
RESEARCH ARTICLE


Abstract

Mobile Edge Computing (MEC) is a promising paradigm for bringing computation close to mobile users, and dynamic service migration is a key technology for it. To maintain service continuity in a dynamic environment, mobile users need to migrate tasks among multiple edge servers in real time. Because user movement is uncertain, frequent migration increases delay and cost, while never migrating leads to service interruption, so designing an optimal migration strategy is very challenging. In this paper, we investigate the multi-user task migration problem in a dynamic environment and minimize the average service delay while satisfying the migration cost constraint. To jointly optimize service delay and migration cost, we propose an adaptive weight deep deterministic policy gradient (AWDDPG) algorithm, and adopt centralized training with distributed execution to handle the high-dimensional problem. Experiments show that the proposed algorithm greatly reduces migration cost and service delay compared with related algorithms.
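
The paper does not include source code; the sketch below is a minimal, hypothetical illustration of the idea named in the abstract: adaptively weighting the two objectives, service delay and migration cost, into a single scalar reward for a DDPG-style agent. The class name, the normalization-based weight update, and the simulated delay and cost values are assumptions made for illustration only, not the authors' AWDDPG implementation.

import numpy as np

class AdaptiveRewardWeights:
    """Combine service delay and migration cost into one scalar reward (illustrative only)."""

    def __init__(self, momentum=0.95):
        self.momentum = momentum
        self.avg_delay = None  # running average of observed service delay
        self.avg_cost = None   # running average of observed migration cost

    def _update_averages(self, delay, cost):
        # Running averages keep either objective from dominating the reward
        # as its scale changes over time (an assumed weighting scheme).
        if self.avg_delay is None:
            self.avg_delay, self.avg_cost = delay, cost
        else:
            m = self.momentum
            self.avg_delay = m * self.avg_delay + (1 - m) * delay
            self.avg_cost = m * self.avg_cost + (1 - m) * cost

    def reward(self, delay, cost):
        # Negative weighted sum: the agent is rewarded for keeping both the
        # normalized delay and the normalized migration cost low.
        self._update_averages(delay, cost)
        w_delay = 1.0 / max(self.avg_delay, 1e-6)
        w_cost = 1.0 / max(self.avg_cost, 1e-6)
        return -(w_delay * delay + w_cost * cost)

if __name__ == "__main__":
    rw = AdaptiveRewardWeights()
    rng = np.random.default_rng(seed=0)
    for step in range(5):
        delay = rng.uniform(10.0, 50.0)  # simulated service delay (ms)
        cost = rng.uniform(1.0, 5.0)     # simulated migration cost
        print(f"step {step}: reward = {rw.reward(delay, cost):.3f}")

In a centralized-training, distributed-execution setup such as the one the abstract describes, each user's local actor would consume a scalar reward of this kind, while a centralized critic is trained over the joint observations; the exact reward shaping used by AWDDPG is defined in the paper itself.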

Keywords

mobile edge computing / mobility / service migration / deep reinforcement learning / deep deterministic policy gradient

Cite this article

Yuya CUI, Degan ZHANG, Jie ZHANG, Ting ZHANG, Lixiang CAO, Lu CHEN. Multi-user reinforcement learning based task migration in mobile edge computing. Front. Comput. Sci., 2024, 18(4): 184504 https://doi.org/10.1007/s11704-023-1346-3

Yuya Cui was a Member of IEEE in 2017. He received his PhD degree from School of Computer Science and Engineering, Tianjin University of Technology, China in 2021, and is currently working at the School of Internet of Things Engineering, Jiangsu Vocational College of Information Technology, China. His research interests include WSN, mobile computing, etc

Degan Zhang received his PhD degree from Northeastern University, China. Now he is a professor of Tianjin Key Lab of Intelligent Computing and Novel Software Technology, Tianjin University of Technology, China. His research interest includes service computing, etc

Jie Zhang was a Member of IEEE in 2020. Now he is a researcher of the School of Electronic and Information Engineering, Beijing Jiaotong University, China. His research interest includes IoT, mobile computing, etc

Ting Zhang was a Member of IEEE in 2008. Now she is a professor of School of Sports Economics and Management, Tianjin University of Sport, China. Her research interest includes ITS, WSN, etc

Lixiang Cao was a Member of IEEE in 2018. Now she is a researcher of Tianjin University of Technology, China. His research interests include WSN, mobile computing, etc

Lu Chen was a Member of IEEE in 2017. Now she is a researcher in Tianjin University of Technology, China. Her research interest includes ITS, WSN, etc

Acknowledgements

This work was supported by the Basic Science (Natural Science) Research Project of Colleges and Universities in Jiangsu Province (22KJB520017).

RIGHTS & PERMISSIONS

© 2024 Higher Education Press