RS-DRL-based offloading policy and UAV trajectory design in F-MEC systems
Yang Yulu, Xu Han, Jin Zhu, Song Tiecheng, Hu Jing, Song Xiaoqin
2025, Vol. 11, Issue 2: 377-386.
For better flexibility and wider coverage, Unmanned Aerial Vehicles (UAVs) have been applied in Flying Mobile Edge Computing (F-MEC) systems to offer offloading services to User Equipment (UE). This paper considers a disaster-affected scenario in which UAVs act as MEC servers and provide computing resources for Disaster Relief Devices (DRDs). To ensure fairness among the DRDs, a max-min problem is formulated to maximize the minimum saved time by jointly designing the UAV trajectories, the offloading policy, and the serving time under the constraint of the UAVs' energy capacity. To solve this non-convex problem, we first model the service process as a Markov Decision Process (MDP) with the Reward Shaping (RS) technique, and then propose a Deep Reinforcement Learning (DRL) based algorithm to find the optimal policy for the MDP. Simulation results show that the proposed RS-DRL algorithm is effective and outperforms the baseline algorithms.
Flying mobile edge computing / Task offloading / Reward shaping / Deep reinforcement learning
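The abstract names reward shaping only at a high level, and the paper's actual reward design is not reproduced here. The following is a minimal, hypothetical Python sketch of classical potential-based reward shaping in a UAV-serves-DRDs setting: the potential function, the state layout, and the discount value are illustrative assumptions, not the paper's method.

import numpy as np

GAMMA = 0.99  # discount factor of the MDP (assumed value)

def potential(state):
    # Hypothetical potential: negative distance from the UAV to the
    # nearest DRD, so flying toward a DRD raises the potential.
    uav_pos, drd_positions = state
    return -min(np.linalg.norm(uav_pos - p) for p in drd_positions)

def shaped_reward(env_reward, state, next_state):
    # Potential-based shaping term F(s, s') = gamma * phi(s') - phi(s).
    # Adding F to the environment reward densifies the feedback signal
    # while preserving the optimal policy of the underlying MDP.
    return env_reward + GAMMA * potential(next_state) - potential(state)

# Toy usage: one UAV and two DRDs on a 2-D plane.
s0 = (np.array([0.0, 0.0]), [np.array([3.0, 4.0]), np.array([6.0, 8.0])])
s1 = (np.array([1.0, 1.0]), [np.array([3.0, 4.0]), np.array([6.0, 8.0])])
print(shaped_reward(0.0, s0, s1))  # positive: the UAV moved toward a DRD

In trajectory and offloading tasks the environment reward is typically sparse (earned only when a task completes), so a shaping term of this potential-based form can speed up DRL training without changing which policy is optimal.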