Assessing the Role of Explainable AI in Improving Trust and Transparency in Data-Driven Decision Systems
Abstract
The increasing reliance on data-driven decision systems across critical sectors—such as healthcare, finance, and criminal justice—has elevated concerns regarding trust and transparency. While traditional AI models have demonstrated high predictive performance, they often function as "black boxes," obscuring the rationale behind their outputs. Explainable Artificial Intelligence (XAI) has emerged as a promising approach to addressing these challenges by providing human-understandable justifications for algorithmic decisions. This paper examines the role of XAI in fostering trust and enhancing transparency in AI-powered systems. Through an analysis of the literature and recent developments, the study highlights both the benefits and limitations of current XAI techniques. Diagrams and tables illustrate system-level interactions and comparative performance metrics, offering a nuanced perspective on the current state of XAI and the advances needed for future progress.