Abstract
The integration of artificial intelligence (AI) into clinical decision-making has the potential to improve diagnostic accuracy, treatment recommendations, and patient outcomes. However, the increasing reliance on machine learning (ML) in healthcare has revealed deep-rooted algorithmic biases that may exacerbate health disparities across demographic groups. This paper explores strategies to mitigate such biases through fairness-aware training objectives and post-hoc calibration techniques. We propose a framework that combines group fairness constraints during training with post-hoc recalibration to improve equity across ethnic, gender, and age groups. Through empirical analysis on two clinical datasets, we demonstrate improved fairness without a significant loss of accuracy. This work contributes to the growing field of ethical AI by highlighting scalable interventions for real-world clinical systems.
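As a rough illustration of the kind of pipeline the abstract describes, and not the paper's actual method, the sketch below trains a binary classifier with a demographic-parity penalty acting as a soft group fairness constraint, then applies per-group temperature scaling as post-hoc recalibration. The synthetic data, the penalty weight lam, and the helper fit_temperature are all invented for this example.

    # Hypothetical sketch: fairness-penalized training plus per-group recalibration.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Synthetic data: features X, binary labels y, binary sensitive attribute g.
    n, d = 2000, 10
    X = torch.randn(n, d)
    g = (torch.rand(n) < 0.5).long()
    true_w = torch.randn(d)
    y = (X @ true_w + 0.5 * g.float() + 0.3 * torch.randn(n) > 0).float()

    model = nn.Linear(d, 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    bce = nn.BCEWithLogitsLoss()
    lam = 1.0  # fairness penalty weight (assumed for illustration)

    for _ in range(300):
        opt.zero_grad()
        logits = model(X).squeeze(1)
        p = torch.sigmoid(logits)
        # Soft group-fairness constraint: penalize the demographic-parity gap,
        # i.e., the difference in mean predicted risk between the two groups.
        gap = (p[g == 0].mean() - p[g == 1].mean()).abs()
        loss = bce(logits, y) + lam * gap
        loss.backward()
        opt.step()

    # Post-hoc recalibration: fit one temperature per group by minimizing the
    # calibration loss (here, reusing the training split purely for illustration;
    # in practice a held-out calibration set would be used).
    def fit_temperature(group_logits, group_labels, steps=200):
        t = torch.ones(1, requires_grad=True)
        opt_t = torch.optim.Adam([t], lr=1e-2)
        for _ in range(steps):
            opt_t.zero_grad()
            nn.functional.binary_cross_entropy_with_logits(
                group_logits / t, group_labels).backward()
            opt_t.step()
        return t.detach()

    with torch.no_grad():
        logits = model(X).squeeze(1)
    temps = {k: fit_temperature(logits[g == k], y[g == k]) for k in (0, 1)}
    calibrated = torch.sigmoid(logits / torch.where(g == 0, temps[0], temps[1]))

In this toy setup, the penalty term trades a small amount of likelihood for a smaller between-group gap in predicted risk, and the group-wise temperatures adjust confidence separately for each group; the paper's framework should be consulted for the actual constraints and recalibration procedures used.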