Medical AI Transparency Remains Worryingly Low Despite FDA Guidelines
Source Publication: npj Digital Medicine
Primary Authors: Mehta, Komanduri, Bhadouriya et al.

Artificial intelligence is rapidly transforming healthcare, yet a new analysis suggests we often lack critical information about the algorithms diagnosing and treating patients. Researchers reviewed 1,012 FDA-approved medical devices featuring AI or machine learning, approved between 1970 and December 2024, to assess their public transparency.
Using a novel metric called the AI Characteristics Transparency Reporting (ACTR) score—which evaluates disclosure across 17 categories—the team found an average score of just 3.3. Although the FDA introduced ‘Good Machine Learning Practice’ principles in 2021 to improve standards, adherence has been sluggish. Scores improved by a modest 0.88 points following the guidelines, suggesting voluntary measures may be insufficient.
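To make the metric concrete, here is a minimal sketch of how an ACTR-style score might be tallied. This is an assumption, not the authors' published rubric: it awards one point per disclosed category out of 17, and the category names are placeholders (the paper's actual categories and any weighting may differ).

```python
def actr_score(disclosed: set[str], categories: list[str]) -> int:
    """Count disclosed categories, assuming one point each (hypothetical rubric)."""
    return sum(1 for c in categories if c in disclosed)

# Placeholder names -- the paper defines 17 specific reporting categories.
CATEGORIES = [f"category_{i}" for i in range(1, 18)]

# A hypothetical device disclosing only 3 of the 17 categories.
device_disclosures = {"category_1", "category_5", "category_9"}
print(actr_score(device_disclosures, CATEGORIES))  # prints 3
```

Under this simple one-point-per-category reading, the reported average of 3.3 would mean a typical device discloses only around a fifth of the assessed information.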
The specific data gaps are stark: nearly half of the devices reviewed did not report a clinical study, and over half failed to share any performance metrics. These findings highlight a disconnect between technological adoption and regulatory clarity. To ensure trust in these powerful tools, the authors argue, we urgently need enforceable standards rather than optional recommendations.