Explainable AI in Maternal Health: Utilizing XGBoost and SHAP Values for Enhanced Risk Prediction and Interpretation
DOI:
https://doi.org/10.54938/ijemdcsai.2025.04.1.419

Keywords:
Maternal Health, XGBoost, Risk Stratification, Explainable AI, Machine Learning

Abstract
This research investigates the integration of Explainable Artificial Intelligence (XAI) into maternal health risk prediction, focusing on improving the transparency and clinical utility of predictive models. Maternal mortality remains a global challenge, disproportionately affecting developing nations where healthcare systems often rely on opaque predictive tools trained on limited datasets. To address these gaps, this study analyzes a comprehensive dataset spanning clinical, physiological, and historical health metrics, applying both traditional and advanced machine learning models. Incorporating SHapley Additive exPlanations (SHAP) value analysis improved the interpretability of risk predictions while maintaining high diagnostic accuracy. The XGBoost model achieved an accuracy of 96.36%, with body mass index and preexisting diabetes emerging as the most significant risk determinants. Clinical insights from experienced healthcare providers were sought during the study to contextualize the model's implications within real-world practice. These insights enable healthcare providers to prioritize high-impact variables when designing interventions, bridging the gap between algorithmic outputs and actionable clinical strategies.
License
Copyright (c) 2025 International Journal of Emerging Multidisciplinaries: Computer Science & Artificial Intelligence

This work is licensed under a Creative Commons Attribution 4.0 International License.