AI DIAGNOSTICS ARE REINFORCING DIAGNOSTIC COLONIALISM: THE CASE OF LOW-INCOME COUNTRY DATA BIAS
Abstract
Contemporary machine learning diagnostic systems predominantly draw on data from high-income contexts, systematically marginalizing populations in low-income countries. This analysis examines how such diagnostic colonialism emerges from entrenched data biases, synthesizing literature on AI fairness, decolonial theory, and global health equity. Comparative case studies, augmented by WHO and World Bank statistics, quantify resource disparities and demonstrate context-specific diagnostic failures that correlate with colonial legacies. We propose a multidimensional policy framework emphasizing data sovereignty and algorithmic accountability. Documented failures underscore the urgency of reform: cost-benefit analysis indicates that biased systems incur an estimated $10.3B in annual losses, against $37.6B in potential savings from equitable approaches. Successful mitigations demonstrate viable pathways toward diagnostic systems that genuinely augment clinical capability across resource settings.