Computer Science & AI · 9 January 2026

Brain-inspired AI: Teaching Computers to Think Like Spies

Source Publication: Proceedings of the National Academy of Sciences

Primary Authors: Su, Cai, Zhao et al.

[Visualisation for 'Brain-inspired AI: Teaching Computers to Think Like Spies', generated via Synaptic Core]

Imagine you are running a covert spy network in a hostile city. You have two ways to organise your agents. The first method is the 'Rookie Approach'. You give your agents a perfect, memorised map of the city. They know exactly where to go, provided nothing changes. But if a road is blocked or a safehouse is burned? They freeze. They cannot deviate from the plan because they only know the map, not the territory.

Now, consider the 'Veteran Approach'. You do not give these agents a map. Instead, you teach them the rules of survival. You teach them how to spot a tail, how to blend into crowds, and how to navigate by instinct. If a road is blocked, they simply duck into an alley. They don't need to be told what to do; the system is built to handle chaos.

Current artificial intelligence often behaves like the Rookie. It is brilliant when the conditions match its training, but fragile when things go wrong. The study in question proposes a shift towards the Veteran. By modelling the neural dynamics of the primate dorsal visual pathway—the brain's circuit for tracking motion and space—researchers created a system that prioritises adaptability over rote memorisation.

How Brain-inspired AI survives the chaos

The researchers constructed a neural model that mimics the specific firing patterns of biological neurons. In the spy analogy, this is like teaching the agents the fundamental tradecraft rather than just giving them a list of addresses. Because the model incorporates these biological rules, it can make decisions similar to a human without needing to view millions of training images first. It operates on instinct.
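The study's model is far richer than anything that fits in a few lines, but the flavour of a biologically plausible unit can be conveyed with a textbook leaky integrate-and-fire neuron. Everything below (function name, constants, the constant input drive) is illustrative, not taken from the paper:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a common stand-in for
# biologically plausible units. All names and constants are illustrative.
def simulate_lif(input_current, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, tau=10.0, dt=1.0):
    """Return the time steps at which the neuron fires."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset        # reset after firing
    return spikes

spikes = simulate_lif([20.0] * 100)  # 100 steps of constant drive
```

The point of units like this is that behaviour emerges from dynamics (leak, threshold, reset) rather than from a lookup over memorised examples, which is the 'tradecraft over addresses' idea in the analogy.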

The team tested this by subjecting the model to digital 'noise' and damage—essentially cutting the phone lines or blinding the sensors. Standard networks often collapse under this pressure. This biological model, however, kept working. It demonstrated the same resilience found in living tissue, maintaining function even when parts of the circuit were degraded.
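A degradation test of this kind can be sketched as follows: zero out a random fraction of a weight matrix and check how much of the original signal survives. This is a hypothetical stand-in for the paper's experiments, not their actual protocol:

```python
import numpy as np

# Hypothetical stress test: ablate a fraction of a network's weights
# and measure how much of the original signal survives. Illustrative only.
rng = np.random.default_rng(0)

def ablate(weights, fraction):
    """Return a copy of `weights` with roughly `fraction` entries zeroed."""
    damaged = weights.copy()
    mask = rng.random(damaged.shape) < fraction
    damaged[mask] = 0.0
    return damaged

w = rng.normal(size=(64, 64))
for frac in (0.0, 0.1, 0.3, 0.5):
    w_damaged = ablate(w, frac)
    # Crude proxy for residual function: overlap with the intact weights.
    overlap = np.sum(w_damaged * w) / np.sum(w * w)
    print(f"ablated {frac:.0%}: signal retained ~ {overlap:.2f}")
```

A brittle network's accuracy falls off a cliff under this kind of damage; the claim in the study is that the biologically constrained model degrades gracefully instead, like living tissue.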

To sharpen this system, the team used a clever tuning method. They took functional MRI (fMRI) data from humans—scans showing which parts of the brain light up during tasks—and used it to adjust the model's parameters. Think of this as updating the spy's handbook based on real field reports from successful missions. If the biological brain relies heavily on a specific connection to solve a puzzle, the computer model is adjusted to rely on that connection too.
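One common way to cast neuroimaging-guided tuning, sketched below, is as a regularised objective: alongside the task loss, penalise parameter settings whose internal activity deviates from the measured brain response. The target vector, toy task, and weighting here are all assumptions for illustration, not the paper's actual procedure:

```python
import numpy as np

# Sketch: brain-alignment as a regulariser on the tuning objective.
# 'brain_target' stands in for an fMRI-derived response profile.
rng = np.random.default_rng(1)
brain_target = rng.normal(size=8)        # hypothetical fMRI responses
stimulus = rng.normal(size=(8, 8))       # hypothetical input patterns

def objective(params, task_weight=1.0, brain_weight=0.5):
    activity = np.tanh(stimulus @ params)            # model's response
    task_loss = (activity.sum() - 1.0) ** 2          # toy task error
    brain_loss = np.mean((activity - brain_target) ** 2)
    return task_weight * task_loss + brain_weight * brain_loss

# Crude random search: the brain term rules out large regions of
# parameter space up front, mirroring the reported narrowed search.
best = min(objective(rng.normal(size=8)) for _ in range(200))
```

Because candidates that fit the task but not the brain data score poorly, the search concentrates on biologically consistent settings, which is one way a tuning procedure of this shape could cut the computational effort the article describes.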

Aligning the silicon parameters with biological reality makes the machine perform better. The results indicate that this neuroimaging-guided fine-tuning not only improved the model's accuracy but also reduced the computational effort needed to find the right settings. It narrows the search space significantly.

While this is a specific model of visual processing, the implications are broad. It suggests that the path to robust, reliable machines isn't just about more data or faster chips. It is about better architecture. By copying the homework of evolution, we may finally build systems that don't just calculate, but truly adapt.

Cite this Article (Harvard Style)

Su et al. (2026) 'Primate-informed neural network for visual decision-making', Proceedings of the National Academy of Sciences. Available at: https://doi.org/10.1073/pnas.2426883123

Source Transparency

This intelligence brief was synthesised by The Synaptic Report's autonomous pipeline. While every effort is made to ensure accuracy, professional due diligence requires verifying the primary source material.

Tags: Neural dynamics model of primate dorsal visual pathway · Benefits of biologically plausible AI over conventional networks · Resilient Systems · Biomimetics