Paper Title

INTERPRETABLE ARTIFICIAL INTELLIGENCE WITH EXPLAINABILITY AND ROBUSTNESS IN MEDICAL IMAGE CLASSIFICATION USING TOPOLOGICAL AND FRACTAL FEATURES

Keywords

  • Explainable AI (XAI)
  • Topological Data Analysis (TDA)
  • Fractal Dimension
  • Deep Learning
  • Convolutional Neural Networks (CNN)
  • Medical Imaging
  • Robustness
  • Pneumonia Detection
  • Out-of-Distribution Detection

Article Type

Research Article

Issue

Volume: 4 | Issue: 1 | Page No: 43-68

Published On

April, 2025

Abstract

Deep learning models, particularly Convolutional Neural Networks (CNNs), have achieved remarkable accuracy in medical image analysis tasks such as pneumonia detection from chest X-rays. However, their "black-box" nature and the potential brittleness of common explainability methods (e.g., saliency maps) hinder clinical trust and adoption. This paper proposes and evaluates a methodology for enriching CNNs with mathematically grounded global features derived from Topological Data Analysis (TDA) and Fractal Dimension (FD) analysis, aiming to provide complementary, more robust explanations. We integrate these features, extracted from intermediate layers of a pre-trained ResNet50 fine-tuned for pneumonia detection, with the CNN's own deep features. Our results show that while a simple MLP-based fusion significantly degraded performance (accuracy ~73%), an attention-based fusion mechanism successfully integrated the features, matching the high baseline accuracy (~96%) on the original dataset. The TDA and FD features themselves exhibit statistically significant differences between normal and pneumonia classes (FD p < 5e-7), providing quantitative structural and complexity-based insights that act as CNN-derived biometric markers differentiating the classes. Furthermore, we demonstrate the system's ability to effectively detect Out-of-Distribution (OOD) inputs, distinguishing real X-rays from unrelated images. Crucially, robustness analysis reveals that the fusion model exhibits greater prediction stability under common image perturbations (noise, rotation, blur) than the baseline CNN (20.7% vs. 24.0% average flip rate). We also observe that local explanations such as Grad-CAM can be unstable under perturbation (SSIM ~0.42 for noise), suggesting that the global TDA/FD features contribute to more robust model reasoning. We conclude that integrating TDA and FD offers a promising direction for building more trustworthy and interpretable AI systems in medical imaging.
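
As an illustration of the two feature families named in the abstract, the sketch below shows (a) a box-counting estimate of fractal dimension computed on a binarized CNN feature map and (b) a small gated-attention head that fuses deep CNN features with a TDA/FD descriptor vector. This is a minimal sketch under stated assumptions: the box-counting recipe, the gating architecture, and names such as `box_counting_fd` and `AttentionFusionHead` are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only; dimensions, thresholds, and helper names are
# assumptions for exposition, not taken from the paper.
import numpy as np
import torch
import torch.nn as nn

def box_counting_fd(feature_map: np.ndarray, threshold: float = 0.5) -> float:
    """Estimate the box-counting (fractal) dimension of a binarized 2-D map."""
    binary = feature_map > threshold * feature_map.max()
    size = min(binary.shape)
    scales = [2 ** k for k in range(1, int(np.log2(size)))]
    counts = []
    for s in scales:
        # Count boxes of side s that contain at least one active pixel.
        h, w = binary.shape
        trimmed = binary[: h - h % s, : w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # Slope of log(count) vs. log(1/scale) approximates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)),
                          np.log(np.array(counts) + 1e-9), 1)
    return float(slope)

class AttentionFusionHead(nn.Module):
    """Gated fusion of deep CNN features with a TDA/FD descriptor vector."""
    def __init__(self, deep_dim: int = 2048, topo_dim: int = 16, n_classes: int = 2):
        super().__init__()
        self.topo_proj = nn.Linear(topo_dim, deep_dim)
        self.gate = nn.Sequential(nn.Linear(2 * deep_dim, deep_dim), nn.Sigmoid())
        self.classifier = nn.Linear(deep_dim, n_classes)

    def forward(self, deep_feats: torch.Tensor, topo_feats: torch.Tensor) -> torch.Tensor:
        topo = self.topo_proj(topo_feats)
        alpha = self.gate(torch.cat([deep_feats, topo], dim=1))  # attention weights
        fused = alpha * deep_feats + (1.0 - alpha) * topo        # convex combination
        return self.classifier(fused)
```

One possible reading of the abstract's result is that a convex-combination gate of this kind keeps the fused representation in the same space as the CNN features, which is one simple way an attention mechanism can incorporate auxiliary global descriptors without degrading baseline accuracy; the paper's actual fusion design may differ.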
