Computer Science & AI · 17 November 2025

New AI Model Uses 'Visual Attention' to Expose Deepfakes

Source Publication: Scientific Reports

Primary Authors: Lal, Shiwani, Gandhi

Visualisation generated via Synaptic Core

The ability to replace one person’s face with another’s using computer vision—known as deepfaking—is becoming alarmingly sophisticated. As these artificial manipulations mimic real data more closely, the line between truth and fiction blurs, creating significant societal anxiety regarding the legitimacy of digital content.

To counter this, researchers have engineered a robust deep learning model that acts as a digital detective. The process begins by extracting facial areas from video frames and running them through a pre-trained neural network called ResNeXt-50 to map out visual features. However, the real innovation lies in the addition of a 'visual attention' strategy.
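The attention step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature-map shape matches what a ResNeXt-50 backbone typically outputs, and the projection vector standing in for the learned attention weights is hypothetical.

```python
import numpy as np

# Minimal sketch of spatial ("visual") attention over a CNN feature map.
# Assumes a ResNeXt-50-style backbone has already produced the features;
# all names and shapes here are illustrative, not the paper's code.

rng = np.random.default_rng(0)

C, H, W = 2048, 7, 7                      # typical ResNeXt-50 final feature map
features = rng.standard_normal((C, H, W))  # stand-in for extracted face features
w_attn = rng.standard_normal(C)            # hypothetical learned 1x1 projection

# Score every spatial position, then softmax so the weights sum to 1.
scores = np.einsum("c,chw->hw", w_attn, features)
scores = scores - scores.max()             # subtract max for numerical stability
weights = np.exp(scores) / np.exp(scores).sum()

# Attention-weighted pooling: positions the model scores highly (e.g. ones
# containing manipulation artefacts) dominate the pooled descriptor that
# would be fed to the real/fake classifier.
pooled = np.einsum("hw,chw->c", weights, features)

print(pooled.shape)  # (2048,)
```

In a full pipeline, `w_attn` would be trained jointly with the classifier, so the softmax map learns to concentrate on the distorted regions rather than being fixed in advance.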

Much like a human observer might squint to see a flaw, this mechanism directs the model's focus toward specific artefacts—subtle digital distortions unique to deepfake modifications. When evaluated across different datasets, with FaceForensics++ used for training and Celeb-DF v2 reserved for independent testing, the attention-enriched model outperformed existing methods, offering a promising tool for confirming the authenticity of videos.
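The cross-dataset protocol mentioned above—train on one corpus, test on another—is usually scored with a threshold-free metric such as AUC. The sketch below computes AUC with the standard rank-sum identity on synthetic scores; the numbers are invented for illustration and are not results from the paper.

```python
import numpy as np

# Illustration of cross-dataset evaluation scoring: a detector trained on
# one dataset (e.g. FaceForensics++) assigns fake-probabilities to videos
# from another (e.g. Celeb-DF v2), and AUC summarises how well fakes
# outrank reals. Scores below are synthetic.

labels = np.array([0, 0, 0, 1, 1, 1])                 # 0 = real, 1 = deepfake
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9])   # model's fake probability

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity."""
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)      # 1-based ranks by score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc(labels, scores))  # 1.0 here: every fake outranks every real
```

AUC of 1.0 means perfect separation of real from fake; 0.5 is chance. Reporting it on a dataset the model never trained on is what makes the generalisation claim meaningful.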

Cite this Article (Harvard Style)

Lal, Shiwani and Gandhi (2025) 'New AI Model Uses "Visual Attention" to Expose Deepfakes', Scientific Reports. Available at: https://doi.org/10.1038/s41598-025-23920-0

Source Transparency

This intelligence brief was synthesised by The Synaptic Report's autonomous pipeline. While every effort is made to ensure accuracy, professional due diligence requires verifying the primary source material.

deepfake detection · computer vision · neural networks