AI-POWERED CLINICAL DECISION SUPPORT SYSTEMS (CDSS): CREATING A NEW FORM OF DIAGNOSTIC DEPENDENCE IN PRIMARY CARE
Abstract
This paper critically examines diagnostic dependence – the tendency of clinicians to over-rely on AI-driven clinical decision support systems (CDSS) – in primary care settings worldwide. We define diagnostic dependence as a form of automation bias in which clinicians accept machine guidance as a heuristic substitute for careful judgment [1][2]. The study explores its causes (e.g. cognitive factors, workflow pressures, lack of experience), mechanisms (e.g. reduced vigilance, "cognitive offloading"), and consequences (new error types, skill erosion). A systematic review of the human factors literature reveals that CDSS can introduce automation bias (AB), which arises when users "over-rely on decision support, reducing vigilance" [1]. Key mediators include clinician trust and confidence in the AI, individual cognitive style, workload, and task complexity [3][4]. Drawing on recent global case studies and surveys, we document diagnostic-dependence concerns across diverse health systems: for example, surveys in the U.S. (where 66% of physicians reported using health AI as of 2024 [5]) show enthusiasm tempered by calls for training and oversight; focus groups in the U.K. report GP worries about accuracy and deskilling [6]; and studies in Saudi Arabia and China reveal adoption rates of roughly 30% alongside common fears that AI will undermine clinical autonomy [7][8]. We compare system-level factors – such as interface design, regulatory regimes, and education policies – that shape diagnostic dependence. Finally, we discuss ethical and policy implications: mitigating over-reliance through clinician training in AI literacy, workflow redesign, accountability frameworks, and improved AI explainability and governance. Informed by the latest research and expert guidelines (e.g. the WHO's call for careful oversight [9]), we conclude with evidence-based recommendations for balancing AI augmentation with human expertise, ensuring safe and effective care worldwide.