ENHANCING HEALTHCARE DIAGNOSTICS THROUGH EXPLAINABLE AI MODELS
Keywords:
explainable AI, healthcare diagnostics, clinician trust, model interpretability, artificial intelligence, XAI, diagnostic decision-making

Abstract
Reliable and transparent diagnostic tools are essential for progress in healthcare. Although artificial intelligence (AI) models have greatly improved diagnostic capability, they largely operate as "black boxes," and this lack of interpretability has been a barrier to clinical adoption. This challenge has driven a proliferation of Explainable AI (XAI) methods that promise greater transparency and clinician trust, yet these methods have been poorly evaluated in the field. The goal of this study was to assess the added value of explainable AI models for healthcare diagnostics over traditional non-explainable models with respect to clinician trust, interpretability, and diagnostic decision-making. A comparative study design was employed, using secondary datasets from several diagnostic domains (radiology, dermatology, cardiology) collected between 2021 and 2025. Baseline AI models were augmented with established XAI methods, including SHAP, LIME, and saliency maps. Evaluation covered accuracy, sensitivity, specificity, and clinician trust, the latter obtained via questionnaires and structured interviews. The non-explainable models slightly outperformed the explainable models in accuracy (94.3% vs. 92.7%); however, explainable models received significantly higher clinician trust scores (91.5% vs. 68.2%) and interpretability ratings. Case examples showed that XAI outputs improved diagnostic decisions, especially in ambiguous clinical situations, suggesting that the small trade-off in performance is outweighed by gains in clinical usability. Incorporating explainability into AI-based diagnostic tools improves not only ethical and legal acceptance but also the quality of clinical decisions. Future research should focus on developing models that are both high-performing and transparent, in order to advance trustworthy healthcare AI.