Task Scheduling in Edge-Cloud Computing: A Comparative Study of Reinforcement Learning and Heuristic Methods
This work evaluates a Q-learning-based task scheduling algorithm in an edge computing environment, using the Montage scientific workflow as a case study. Through comprehensive simulations, the Q-learning scheduler is compared against the traditional MinMin and MaxMin heuristics on three key performance metrics: overall workflow completion time (makespan), energy consumption, and resource utilization. The results show that Q-learning outperforms both heuristics on these metrics, particularly when the learning rate is tuned appropriately, because it adapts its scheduling decisions to the observed system state. These findings underscore the potential of reinforcement learning for efficient, adaptive task scheduling in edge computing environments and offer a promising direction for managing dynamic, distributed workflows.
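For context, the learning mechanism behind such a scheduler is the standard tabular Q-learning update, in which the learning rate \alpha (whose tuning the abstract highlights) controls how strongly each new observation revises the current value estimate. The state, action, and reward symbols below are generic placeholders for illustration, not the paper's exact formulation:

\[
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
\]

In a scheduling setting, s_t would typically encode the current resource loads and pending tasks, a_t a task-to-node assignment, and r_{t+1} a reward reflecting objectives such as completion time and energy consumption; \gamma is the discount factor weighting future rewards.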