Privacy-First Explainable Federated Learning with Zero-Trust AI Infrastructure for Proactive Healthcare Decision Support
Abstract
Healthcare AI is increasingly deployed in decision support and patient-facing workflows, but scale-up is constrained by privacy restrictions on protected health information (PHI), expanding attack surfaces in hybrid delivery models, and the limited transparency of complex models. This paper articulates ZT-XFL, a reference architecture that integrates federated learning (FL) with differential privacy (DP) and explainable AI (XAI) under a zero trust architecture (ZTA). The control plane enforces explicit verification, least-privilege authorization, and comprehensive audit logging as defined in ZTA guidance [1], while the training plane coordinates institution-local optimization with FedAvg-style aggregation [3] and optionally applies DP-SGD with privacy budgeting to bound leakage from model updates [6]. Secure aggregation prevents the coordinator from learning individual client updates [7]. The governance plane binds post-hoc explanations (LIME/SHAP) to immutable model versions and inference events [12], [13], reflecting the view that clinical explainability requirements are context-dependent and must be assessed alongside validation and system role [16]. We formalize a threat model spanning endpoint compromise, insider misuse, gradient-based inference, supply-chain risks, and poisoning of federated updates, and we map each threat to enforceable controls across identity, device posture, workload attestation, update screening, and explanation-access policy [1], [2]. Rather than presenting institution-specific results, we provide a reproducible evaluation protocol that jointly measures utility and calibration, privacy loss (ε, δ), security control coverage, explanation stability, and robustness against adversarial updates and inference attacks [9], [10], [11], enabling benchmarking on multi-site healthcare tasks without centralizing PHI.
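To make the training-plane step concrete, the following is a minimal Python sketch of FedAvg-style aggregation with per-client clipping and Gaussian noising, in the spirit of [3] and [6]. It is illustrative only: all names (aggregate_round, clip_norm, noise_multiplier) are hypothetical and not part of a released ZT-XFL artifact, and the cumulative (ε, δ) budget would be tracked by a separate privacy accountant.

import numpy as np

def aggregate_round(client_deltas, client_weights,
                    clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # Bound each institution's influence by clipping its update to
    # clip_norm (the sensitivity bound), then take the weighted FedAvg
    # mean and add Gaussian noise calibrated to that bound.
    rng = rng or np.random.default_rng(0)
    clipped = [d * min(1.0, clip_norm / max(np.linalg.norm(d), 1e-12))
               for d in client_deltas]
    avg = np.average(clipped, axis=0, weights=client_weights)
    sigma = noise_multiplier * clip_norm / len(clipped)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Example: three clients with unequal local data sizes.
deltas = [np.array([0.2, -0.5]), np.array([3.0, 1.0]), np.array([-0.1, 0.4])]
print(aggregate_round(deltas, client_weights=[100, 50, 200]))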
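Similarly, the governance-plane binding of explanations to immutable model versions and inference events can be sketched as a hash-chained, tamper-evident audit record. The schema and field names below are assumptions for illustration, not a specification from the paper.

import hashlib, json, time

def bind_explanation(prev_hash, model_digest, inference_id, attributions):
    # Commit the explanation to the exact model version and inference
    # event it explains, chaining each record to its predecessor so
    # after-the-fact tampering is detectable.
    record = {
        "ts": time.time(),
        "model_version": model_digest,   # e.g. SHA-256 of the frozen weights
        "inference_id": inference_id,
        "attributions": attributions,    # e.g. per-feature SHAP values
        "prev": prev_hash,
    }
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return digest, record

# Example: append one audit entry to an empty chain.
h, rec = bind_explanation("0" * 64, "sha256:...", "evt-001",
                          {"age": 0.31, "hba1c": 0.52})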
References
[1] S. Rose, O. Borchert, S. Mitchell, and S. Connelly, "Zero Trust Architecture," NIST Special Publication 800-207, Aug. 2020. doi:10.6028/NIST.SP.800-207.
[2] S. Rose, "Planning for a Zero Trust Architecture: A Planning Guide for Federal Administrators," NIST CSWP 20, May 2022. doi:10.6028/NIST.CSWP.20.
[3] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. Agüera y Arcas, "Communication-Efficient Learning of Deep Networks from Decentralized Data," Proc. AISTATS, PMLR 54, pp. 1273–1282, 2017.
[4] J. Xu, B. S. Glicksberg, C. Su, P. Walker, J. Bian, and F. Wang, "Federated Learning for Healthcare Informatics," Journal of Healthcare Informatics Research, vol. 5, no. 1, pp. 1–19, 2021. doi:10.1007/s41666-020-00082-4.
[5] N. Rieke et al., "The Future of Digital Health with Federated Learning," npj Digital Medicine, vol. 3, art. 119, 2020. doi:10.1038/s41746-020-00323-1.
[6] M. Abadi et al., "Deep Learning with Differential Privacy," Proc. ACM CCS, pp. 308–318, 2016. doi:10.1145/2976749.2978318.
[7] K. Bonawitz et al., "Practical Secure Aggregation for Privacy-Preserving Machine Learning," Proc. ACM CCS, pp. 1175–1191, 2017. ePrint:2017/281.
[8] C. Dwork, F. McSherry, K. Nissim, and A. Smith, "Calibrating Noise to Sensitivity in Private Data Analysis," Theory of Cryptography Conference (TCC), LNCS 3876, pp. 265–284, 2006. doi:10.1007/11681878_14.
[9] U.S. Department of Health & Human Services, "HIPAA Guidance Materials," accessed Dec. 2025. https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/index.html.
[10] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, "Membership Inference Attacks Against Machine Learning Models," IEEE S&P, 2017. doi:10.1109/SP.2017.41.
[11] M. Fredrikson, S. Jha, and T. Ristenpart, "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures," Proc. ACM CCS, 2015. doi:10.1145/2810103.2813677.
[12] M. T. Ribeiro, S. Singh, and C. Guestrin, "'Why Should I Trust You?': Explaining the Predictions of Any Classifier," Proc. ACM SIGKDD, 2016. doi:10.1145/2939672.2939778.
[13] S. M. Lundberg and S.-I. Lee, "A Unified Approach to Interpreting Model Predictions," NeurIPS, 2017.
[14] P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer, "Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent" (Krum), NeurIPS, 2017. arXiv:1703.02757.
[15] P. Kairouz et al., "Advances and Open Problems in Federated Learning," Foundations and Trends in Machine Learning, vol. 14, no. 1–2, pp. 1–210, 2021. doi:10.1561/2200000083.
[16] J. Amann et al., "To Explain or Not to Explain?—AI Explainability in Clinical Decision Support Systems," PLOS Digital Health, 2022. doi:10.1371/journal.pdig.0000016.
[17] World Health Organization, "Ethics and Governance of Artificial Intelligence for Health: WHO Guidance," 2021. ISBN:9789240029200.
Copyright (c) 2025 Rohith Vangalla, Kalyan Chakravarthy

This work is licensed under a Creative Commons Attribution 4.0 International License.