Hierarchical Deep Learning Frameworks Enabling Dynamic Task Allocation and Real-Time Path Optimization in Mobile Robotic Fleets
Abstract
This study proposes a hierarchical deep learning framework for dynamic task allocation and real-time path optimization in mobile robotic fleets operating in variable, resource-constrained environments. The model performs layered decision-making with neural network architectures, enabling decentralized control while permitting central policy intervention under uncertainty. We embed reinforcement learning agents and convolutional layers within the hierarchy to jointly optimize task distribution and the movement paths of heterogeneous robotic agents. Simulation experiments demonstrate significant improvements in task completion rate, response time, and energy efficiency compared to traditional swarm-based and rule-based systems.