Deep Graph-based Reinforcement Learning for Joint Cruise Control and Task Offloading for Aerial Edge Internet-of-Things (EdgeIoT)
Ref: CISTER-TR-220602 Publication Date: 2022
This paper puts forth an aerial edge Internet-of-Things (EdgeIoT) system, where an unmanned aerial vehicle (UAV) is employed as a mobile edge server to process mission-critical computation tasks of ground Internet-of-Things (IoT) devices. When the UAV schedules an IoT device to offload its computation task, the tasks buffered at the other, unselected devices may become outdated and have to be cancelled. We investigate a new joint optimization of UAV cruise control and task offloading allocation, which maximizes the number of tasks offloaded to the UAV, subject to each IoT device’s computation capacity and battery budget, and the UAV’s speed limit. Since the optimization contains a large solution space while the instantaneous network states are unknown to the UAV, we propose a new deep graph-based reinforcement learning framework. An advantage actor-critic (A2C) structure is developed to train the UAV’s real-time continuous actions, namely its flight speed, heading, and the offloading schedule of the IoT devices. By exploring hidden representations arising from the correlation of network features, our framework takes advantage of graph neural networks (GNN) to supervise the training of the UAV’s actions in A2C. The proposed GNN-A2C framework is implemented with Google TensorFlow. The performance analysis shows that GNN-A2C achieves fast convergence and considerably reduces the task missing rate in aerial EdgeIoT.
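To make the A2C component concrete: the critic scores each state, and the actor is pushed toward actions whose one-step advantage (reward plus discounted next-state value, minus the current value estimate) is positive. The sketch below illustrates only this advantage computation with toy numbers; it is not the paper's implementation, and all values shown are hypothetical.

```python
import numpy as np

def a2c_advantage(rewards, values, next_value, gamma=0.99):
    """One-step advantage estimates: A_t = r_t + gamma * V(s_{t+1}) - V(s_t)."""
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)
    # V(s_{t+1}) for each step: shift the value estimates, bootstrap the last one.
    next_values = np.append(values[1:], next_value)
    return rewards + gamma * next_values - values

# Toy rollout of three UAV decisions (e.g., speed/heading/offload choices).
# Rewards and value estimates here are made up purely for illustration.
adv = a2c_advantage(rewards=[1.0, 0.0, 2.0],
                    values=[0.5, 0.4, 0.3],
                    next_value=0.2,
                    gamma=0.9)
print(adv)  # positive entries reinforce the corresponding actions
```

In the paper's framework, the gradient of the actor's policy would be weighted by such advantages, while the GNN supplies state representations that capture the correlation among network features.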
Published in IEEE Internet of Things Journal (IOTJ), IEEE.
Record Date: 8 June 2022