Paper Title

Explainable AI for Compliance and Regulatory Models

Authors

Indra Reddy Mallela
Punit Goel
Vishwasrao Salunkhe
Satendra Pal Singh
Ojaswin Tharan
Sneha Aravind

Keywords

  • Explainable AI
  • compliance models
  • regulatory frameworks
  • transparency
  • interpretability
  • accountability

Article Type

Research Article

Issue

Volume: 11 | Issue: 4 | Page No.: 319–339

Published On

December 2020

Abstract

The increasing complexity of compliance and regulatory frameworks across industries demands innovative solutions for managing and interpreting large volumes of data. Explainable Artificial Intelligence (XAI) offers a promising approach by providing transparent and interpretable AI models that can be applied to compliance and regulatory decision-making. Traditional AI systems, often viewed as "black boxes," have been met with scepticism due to their opacity, especially in high-stakes domains such as finance, healthcare, and the legal sector, where accountability and trust are paramount. XAI addresses these challenges by making the decision-making process transparent, enabling stakeholders to understand the logic behind AI-driven recommendations and actions. In regulatory environments, XAI can be used to explain the rationale behind risk assessments, fraud detection, or legal interpretations, thus helping ensure compliance with laws and policies. Moreover, integrating XAI into compliance models enhances auditability and traceability, providing regulators and auditors with the tools to validate and verify adherence to standards. This transparency is crucial for building trust in AI systems and fostering collaboration between human decision-makers and AI tools.
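To make the abstract's idea concrete, the sketch below shows one minimal form of an explainable compliance model: a linear risk score whose per-feature contributions double as the explanation an auditor can inspect. The feature names, weights, and threshold are illustrative assumptions for this sketch, not taken from the paper; production systems would use techniques such as feature attribution over learned models rather than hand-set weights.

```python
# Minimal sketch of a transparent fraud-risk score. Because the model is
# additive, each feature's weighted value is exactly its contribution to
# the final score, so the explanation is faithful by construction.
# All names and weights here are illustrative assumptions.

WEIGHTS = {
    "amount_over_threshold": 0.5,   # transaction exceeds a reporting limit
    "new_counterparty": 0.3,        # first interaction with this party
    "high_risk_jurisdiction": 0.2,  # counterparty in a flagged region
}

def score_with_explanation(features):
    """Return (risk_score, contributions): the total score plus each
    feature's additive share, which together form the audit trail."""
    contributions = {
        name: WEIGHTS[name] * float(features.get(name, 0))
        for name in WEIGHTS
    }
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"amount_over_threshold": 1, "new_counterparty": 1}
)
print(f"risk score: {score:.2f}")
for feature, share in sorted(why.items(), key=lambda kv: -kv[1]):
    if share:
        print(f"  {feature}: +{share:.2f}")
```

Because every score decomposes into named contributions, a regulator can trace exactly why a transaction was flagged, which is the auditability property the abstract attributes to XAI-based compliance models.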
