Research Article February, 2024
IACSE - International Journal of Artificial Intelligence and Machine Learning

An Empirical Study on the Effectiveness of Batch Normalization and Dropout in Stabilizing Deep Neural Network Training

Abstract

Stabilizing the training of deep neural networks (DNNs) remains a central challenge in deep learning. This study investigates the empirical effectiveness of two widely adopted regularization and normalization techniques, Batch Normalization (BN) and Dropout, in improving training stability and generalization. Using standard deep learning benchmarks (CIFAR-10, CIFAR-100, and MNIST) across various architectures (MLP, CNN, ResNet), we evaluate performance in terms of convergence speed, training loss oscillations, and final test accuracy. Our results show that Batch Normalization significantly reduces internal covariate shift and accelerates convergence, while Dropout adds robustness by mitigating overfitting. Interestingly, we observe that the simultaneous use of BN and Dropout yields mixed results, suggesting interaction effects that depend on architecture depth and learning rate schedule. These findings offer insights into designing better training pipelines for deep networks.
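To make the two techniques concrete, the following is a minimal, illustrative sketch (not the study's actual implementation) of the per-batch computations they perform: BN normalizes activations to zero mean and unit variance using the mini-batch statistics, then applies a learnable scale `gamma` and shift `beta`; inverted Dropout randomly zeroes activations with probability `p` during training and rescales survivors so the expected activation is unchanged at test time. All function and parameter names here are our own.

```python
import math
import random

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch Normalization over a mini-batch of scalar activations:
    normalize with the batch mean and variance, then apply the
    learnable scale (gamma) and shift (beta)."""
    m = sum(batch) / len(batch)
    var = sum((x - m) ** 2 for x in batch) / len(batch)
    return [gamma * (x - m) / math.sqrt(var + eps) + beta for x in batch]

def dropout(batch, p=0.5, training=True):
    """Inverted Dropout: during training, zero each activation with
    probability p and rescale survivors by 1/(1-p) so the expected
    value is preserved; at test time, pass activations through."""
    if not training:
        return list(batch)
    return [0.0 if random.random() < p else x / (1.0 - p) for x in batch]
```

Note that BN's statistics depend on the whole mini-batch while Dropout acts elementwise; this coupling between batch statistics and randomly masked activations is one plausible source of the interaction effects reported above.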

Keywords

batch normalization, dropout, deep neural networks, training stability, generalization, regularization, convergence
Details
Volume 5
Issue 1
Pages 1-8
ISSN 1421-8930