Decoding the Black Box: Making AI Heart Diagnostics Transparent
Source Publication: Computers in Biology and Medicine
Primary Authors: Manimaran, Peimankar, Puthusserypady et al.

Artificial intelligence is revolutionising how we detect heart disease from electrocardiograms (ECGs), but deep learning models often suffer from the 'black box' problem—they produce high-accuracy results without explaining how they arrived at them. To address this, researchers conducted a systematic literature review spanning from January 2018 to September 2024.
The findings uncover a significant gap in the field. Of 6,448 studies using machine learning and deep learning for heart disease classification, only 51 integrated Explainable AI (XAI) architectures. Within those, researchers rely largely on conventional tools such as SHAP and Saliency Maps to visualise model behaviour, and across them the review identified 16 different deep learning architectures and eight novel explanation techniques in use.
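To make the idea of "visualising model behaviour" concrete, the sketch below shows one of the simplest XAI techniques mentioned above: a gradient-based saliency map over an ECG signal. The model, its architecture, and the random input are hypothetical placeholders for illustration, not taken from the reviewed studies.

```python
# Minimal sketch of a gradient-based saliency map for a 1-D ECG classifier.
# Everything here (model, shapes, sampling rate) is an illustrative assumption.
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    """Hypothetical 1-D CNN classifying a single-lead ECG segment."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        self.head = nn.Linear(16 * 32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = ECGClassifier().eval()

# One synthetic 10-second ECG segment at 100 Hz: (batch, channel, time).
ecg = torch.randn(1, 1, 1000, requires_grad=True)

# Saliency: gradient of the predicted class score w.r.t. the input signal,
# giving a per-sample importance score along the time axis.
logits = model(ecg)
logits[0, logits.argmax()].backward()
saliency = ecg.grad.abs().squeeze()

print(saliency.shape)  # torch.Size([1000]), one relevance value per sample
```

In practice a clinician-facing tool would overlay these relevance values on the ECG waveform, which is exactly the kind of temporal visualisation the review flags as difficult to standardise.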
However, the path to reliable clinical adoption is obstructed by technical hurdles. The study concludes that current methods suffer from inconsistent explainability, a lack of standardised evaluation metrics, and difficulty in visualising temporal dependencies, that is, how heart signals evolve over time. For AI to become a trusted partner in cardiology, future research must prioritise creating rigorous benchmarks and resolving data standardisation issues.