Research Article, June 2025

HYBRID DEEP LEARNING ENSEMBLE FOR BRAIN TUMOR MRI CLASSIFICATION WITH VISUAL EXPLAINABILITY

Abstract

Accurate and early diagnosis of brain tumors is essential for effective treatment planning and improved patient outcomes. This study presents a robust and efficient deep learning ensemble framework for classifying brain tumor MRI images into four categories: glioma, meningioma, pituitary, and no tumor. The proposed system integrates three distinct convolutional neural network architectures (a custom-designed CNN, VGG16, and ResNet101) and combines their strengths through ensemble learning. Extensive data preprocessing and augmentation, including rotation, brightness adjustment, shear, and flipping, were employed to improve generalization and reduce overfitting. Each base model was trained on the augmented dataset using transfer learning and fine-tuning, and a majority voting scheme combined the predictions of the individual models. The ensemble achieved 98% accuracy on the test set, outperforming each individual model. Precision, recall, F1-score, and the confusion matrix confirmed the system's reliability across all tumor categories. To ensure interpretability, Grad-CAM visualizations were applied to highlight the salient regions of the MRI scans that influence model decisions, giving medical practitioners an added layer of trust and insight. The proposed method offers a promising solution for automated brain tumor diagnosis and can be extended to real-time clinical applications. Future work includes deploying the ensemble in a clinical decision support tool and integrating additional explainable AI (XAI) methods for deeper insight.
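
As a concrete illustration of the augmentation and transfer-learning pipeline the abstract describes, the sketch below sets up the stated augmentations (rotation, brightness adjustment, shear, flipping) and a fine-tunable VGG16 backbone in Keras. The augmentation magnitudes, the 224x224 input size, and the classifier head are assumptions for illustration, not values reported by the study.

    # Minimal sketch of the augmentation and transfer-learning setup,
    # assuming TensorFlow/Keras. Magnitudes and input size are assumptions.
    import tensorflow as tf
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    NUM_CLASSES = 4  # glioma, meningioma, pituitary, no tumor

    train_datagen = ImageDataGenerator(
        rescale=1.0 / 255,
        rotation_range=15,            # rotation
        brightness_range=(0.8, 1.2),  # brightness adjustment
        shear_range=0.2,              # shear
        horizontal_flip=True,         # flipping
    )

    # Transfer learning: ImageNet-pretrained VGG16 backbone, new classifier head.
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3))
    base.trainable = False  # freeze for initial training; unfreeze to fine-tune
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])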
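The majority voting scheme itself reduces to a few lines. Below is a minimal sketch, again assuming Keras; the weight-file names for the three trained base models are hypothetical placeholders.

    # Hard (majority) voting over the three base models' class predictions.
    # Model file names are placeholders, not paths from the paper.
    import numpy as np
    import tensorflow as tf

    models = [tf.keras.models.load_model(p)
              for p in ("custom_cnn.h5", "vgg16_ft.h5", "resnet101_ft.h5")]

    def majority_vote(batch):
        """Each model predicts a class per image; the most frequent class wins."""
        votes = np.stack([m.predict(batch, verbose=0).argmax(axis=1)
                          for m in models])              # shape (3, N)
        # Per-sample majority; ties fall to the lowest class index.
        return np.array([np.bincount(col, minlength=4).argmax()
                         for col in votes.T])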
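Finally, the Grad-CAM visualizations mentioned above can be produced with the standard gradient-based recipe sketched below. The target layer name ("block5_conv3", VGG16's last convolutional layer in Keras) is an assumption; the paper may weight or overlay the heatmaps differently.

    # Hedged Grad-CAM sketch for one base model (layer name is an assumption).
    import numpy as np
    import tensorflow as tf

    def grad_cam(model, image, layer_name="block5_conv3"):
        """Return a [0, 1] heatmap of the regions driving the predicted class."""
        grad_model = tf.keras.Model(
            model.inputs, [model.get_layer(layer_name).output, model.output])
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(image[None, ...])
            class_score = preds[:, tf.argmax(preds[0])]
        grads = tape.gradient(class_score, conv_out)       # d(score)/d(feature map)
        weights = tf.reduce_mean(grads, axis=(0, 1, 2))    # global-average-pooled grads
        cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()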

Keywords

brain tumor classification; deep learning; ensemble learning; CNN; VGG16; ResNet101; MRI; Grad-CAM; explainable AI; medical image analysis.
Details
Volume 4
Issue 1
Pages 205-226
ISSN 9339-1263