Abstract
The increasing reliance on machine learning models for credit risk assessment has created a critical need for transparency and interpretability in financial decision-making. Explainable Artificial Intelligence (XAI) has emerged as a key enabler, addressing concerns of accountability, trust, and regulatory compliance. This paper presents a comparative analysis of ML models used for credit risk prediction, including logistic regression, decision trees, and SHAP-enhanced ensemble methods, evaluating their predictive power, ease of interpretation, and practical applicability within financial institutions. Our findings show that while complex models often achieve higher raw predictive accuracy, simpler, interpretable models provide clearer, actionable insights and greater user trust, particularly in regulatory and consumer-facing applications. Integrating post-hoc explanation tools such as SHAP further enhances interpretability, offering a balance between performance and explainability.
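As a minimal illustration of the SHAP-enhanced ensemble approach the abstract refers to (a sketch on synthetic data, not the authors' actual pipeline; the feature names and data-generating process here are hypothetical), one can attach a tree-based SHAP explainer to a gradient-boosted credit-risk classifier and obtain per-applicant feature attributions:

```python
# Illustrative sketch only: synthetic data and hypothetical features
# (income, debt ratio, credit history length), not the paper's dataset.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # columns: income, debt_ratio, history_len
# Synthetic default label driven mainly by debt ratio and income
y = (X[:, 1] - 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-applicant attribution: how each feature shifted this applicant's
# risk score relative to the baseline expectation
print(shap_values[0])
```

Attributions of this form are what make an otherwise opaque ensemble usable in the regulatory and consumer-facing settings the paper discusses, since each prediction can be decomposed into additive per-feature contributions.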