Abstract
The rapid expansion of cloud computing has introduced complex scheduling challenges, especially in heterogeneous environments where computational resources differ widely in capacity and performance. Traditional and heuristic-based schedulers often fail to adapt to dynamic workloads and fluctuating resource availability. This paper proposes a Deep Reinforcement Learning (DRL) based scheduling framework that learns allocation policies over time to improve resource utilization, minimize latency, and enhance energy efficiency. Simulations on varied workloads show that the DRL-based scheduler outperforms conventional algorithms in scalability and responsiveness.