Abstract
Lifelong learning in autonomous systems demands the ability to acquire new knowledge over time without compromising previously learned information, a challenge known as catastrophic forgetting. This paper explores dynamic knowledge distillation strategies that enable continual learning in neural models deployed in autonomous systems. By combining teacher-student architectures, selective memory replay, and adaptive regularization, the proposed framework balances knowledge retention with effective adaptation to new tasks. Comparative evaluations on benchmark datasets show marked improvements in accuracy and task retention over existing lifelong learning techniques.
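To make the teacher-student idea concrete, the sketch below shows a standard Hinton-style distillation loss of the kind such a framework typically builds on; it is an illustrative assumption, not the paper's exact formulation, and the temperature `T` and mixing weight `alpha` are hypothetical hyperparameters.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Illustrative teacher-student distillation loss (assumed, not the paper's exact method)."""
    # Soft-target term: KL divergence between temperature-scaled
    # teacher and student distributions; preserves old knowledge.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # scale by T^2 to keep gradient magnitudes comparable
    # Hard-target term: cross-entropy on the new task's ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # alpha trades off retention (soft) against adaptation to the new task (hard).
    return alpha * soft + (1.0 - alpha) * hard
```

In a continual-learning setting, `teacher_logits` would come from a frozen snapshot of the model before the new task, so the soft-target term penalizes drift away from previously learned behavior while the hard-target term drives learning on new data.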