Paper Title

A Brief Review of Explainable Artificial Intelligence in Healthcare

Authors

Hossein Moosaei
Roohallah Alizadehsani
Bilal Alatas
Zahra Sadeghi
Mehmet Akif CIFCI
Samina Kausar
Rizwan Rehman
Priyakshi Mahanta
Ammar Almasri
Sadiq Hussain

Article Type

Review Article

Journal

arXiv.org

Pages

1-23

Published On

April 2023

Abstract

XAI refers to techniques and methods for building AI applications that help end users interpret the output and predictions of AI models. The use of black-box AI in high-stakes decision-making settings, such as the medical domain, has increased the demand for transparency and explainability, since wrong predictions may have severe consequences. Model explainability and interpretability are vital for the successful deployment of AI models in healthcare practice, and the reasoning underlying AI applications needs to be transparent to clinicians in order to gain their trust. This paper presents a systematic review of XAI aspects and challenges in the healthcare domain. Its primary goals are to review various XAI methods, their challenges, and related machine learning models in healthcare. The methods are discussed under six categories: features-oriented methods, global methods, concept models, surrogate models, local pixel-based methods, and human-centric methods. Most importantly, the paper explores the role of XAI in healthcare problems to clarify its necessity in safety-critical applications. By reviewing the related experimental results, it aims to establish a comprehensive understanding of XAI-related applications in the healthcare field, and, to facilitate future research and fill remaining gaps, it examines the importance of XAI models from different viewpoints along with their limitations.
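
The surrogate-model category mentioned in the abstract can be illustrated with a short sketch. The example below is not taken from the paper; it is a minimal illustration assuming scikit-learn is available, using its bundled breast-cancer dataset as a stand-in for clinical data. A shallow decision tree is fitted to a random forest's predictions, producing human-readable rules whose agreement with the black box ("fidelity") indicates how faithfully the explanation mimics it.

```python
# Minimal global-surrogate sketch (illustrative, not from the reviewed paper):
# fit an interpretable tree to a black-box model's predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box": an ensemble whose internal reasoning is hard to inspect directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Surrogate: a shallow tree trained on the black box's *predictions*,
# not on the ground-truth labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(X.columns)))
```

A high fidelity score suggests the printed rules are a reasonable global approximation of the black box; a low score means the surrogate's explanation should not be trusted, which is one of the known limitations of surrogate methods discussed in the XAI literature.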
