Brain-Inspired 'Reservoir' Computers Boost Power by Self-Regulating
Source Publication: Nature Communications
Primary Authors: Srinivasan, Plenz, Girvan

Reservoir computers (RCs) offer a fascinating, lightweight alternative to the energy-hungry deep learning models that currently dominate artificial intelligence. Unlike traditional networks where every connection is painstakingly adjusted, RCs utilise a fixed, randomised internal structure—a 'reservoir'—and only train the final output connections. While this simplifies the learning process, these systems remain notoriously sensitive to their initial setup, particularly the hyperparameter settings that govern how neurons connect and activate.
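To make the setup concrete, here is a minimal reservoir computer in Python, in the common echo state network style. The layer sizes, spectral radius, leak rate, and ridge penalty are illustrative assumptions rather than values from the study; the point is simply that the random matrices W_in and W stay fixed while only the readout W_out is fitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (not taken from the paper).
n_inputs, n_reservoir = 1, 200
spectral_radius = 0.9   # scales the recurrent weights
leak_rate = 0.3         # how quickly states follow new input

# Fixed, random structure: neither W_in nor W is ever trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_inputs))
W = rng.normal(size=(n_reservoir, n_reservoir))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in u_seq:
        pre = W @ x + W_in @ np.atleast_1d(u)
        x = (1 - leak_rate) * x + leak_rate * np.tanh(pre)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the input signal one step ahead.
u = np.sin(0.1 * np.arange(1000))
X, y = run_reservoir(u[:-1]), u[1:]
X, y = X[100:], y[100:]   # discard a washout period

# Only the readout is trained, here via closed-form ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```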
A critical factor often overlooked in RCs is the precise ratio between excitatory (stimulating) and inhibitory (suppressing) signals. In biological brains, this dynamic equilibrium is fundamental to function. Typically, RCs fix this ratio in advance as a static hyperparameter. However, the new research indicates that these networks perform best when they are balanced or slightly dominated by inhibition, rather than being overwhelmed by excitation.
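One common way to encode such a ratio, sketched below, is to tag each neuron as excitatory or inhibitory and give all of its outgoing weights that sign (Dale's law). The construction and numbers here are assumptions for illustration, and the study's exact network setup may differ; the relevant knob is frac_inhibitory, where values at or above 0.5 put the reservoir in the balanced or inhibition-dominated regime described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def ei_reservoir(n, frac_inhibitory=0.5, spectral_radius=0.9):
    """Random reservoir with an explicit excitatory/inhibitory split.

    Each neuron is tagged excitatory (+1) or inhibitory (-1), and every
    outgoing weight of that neuron carries its sign, as in Dale's law.
    """
    signs = np.where(rng.random(n) < frac_inhibitory, -1.0, 1.0)
    # Column j holds neuron j's outgoing weights, so sign the columns.
    W = np.abs(rng.normal(size=(n, n))) * signs[None, :]
    # Rescale so the largest eigenvalue magnitude matches the target radius.
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W, signs

# Slightly inhibition-dominated reservoir.
W, signs = ei_reservoir(200, frac_inhibitory=0.55)
print("inhibitory fraction:", np.mean(signs < 0))
```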
Taking this bio-inspired concept further, the researchers developed a self-adapting mechanism. Instead of relying on manual tuning, the network locally adjusts its own excitatory-inhibitory balance to maintain specific activity levels, or target firing rates. This dynamic approach reduced the need for complex manual adjustments and delivered remarkable results: performance gains of up to 130% in memory capacity and time-series prediction. Furthermore, introducing heterogeneity—variety in the firing targets—made the systems even more robust, suggesting that self-regulation could be a key design principle for future neural computation.
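The sketch below shows one plausible local rule of this kind; it is not necessarily the authors' exact update. Each neuron carries its own target activity level (the heterogeneous setpoints mentioned above), and its incoming inhibitory weights are strengthened when it runs above target and weakened when it runs below, echoing inhibitory plasticity rules from computational neuroscience. The learning rate, target range, and the use of |x| as a firing-rate proxy are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Fixed random reservoir with a 50/50 excitatory/inhibitory split, built as
# in the earlier sketch: all outgoing weights of a neuron share its sign.
signs = np.where(rng.random(n) < 0.5, -1.0, 1.0)
W = np.abs(rng.normal(size=(n, n))) * signs[None, :]
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
inh = signs < 0                      # inhibitory presynaptic columns
W_in = rng.uniform(-0.5, 0.5, size=n)

# Heterogeneous setpoints: each neuron gets its own target activity level,
# drawn here from a uniform range purely for illustration.
targets = rng.uniform(0.05, 0.25, size=n)
eta = 0.01                           # homeostatic learning rate (assumed)

x = np.zeros(n)
rate = np.zeros(n)                   # running average of each neuron's activity
for t in range(5000):
    x = np.tanh(W @ x + W_in * np.sin(0.1 * t))
    rate = 0.99 * rate + 0.01 * np.abs(x)
    # Local homeostatic rule: a neuron above its setpoint strengthens its
    # incoming inhibition (weights grow more negative); one below weakens it.
    W[:, inh] *= (1.0 + eta * (rate - targets))[:, None]

print("mean |rate - target| after adaptation:", np.mean(np.abs(rate - targets)))
```

Because each update uses only quantities local to a neuron (its own running rate and setpoint), the balance emerges without any global tuning pass, which is the property the researchers exploit.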