Neuroscience
Balanced Inhibition Keeps Neural Networks Learning Flexibly
Original Authors: Cecchini, Roxin

In the brain's neural networks, individual neurons process information by integrating signals from many others. This distributed processing is fundamental to how we learn: it enables mechanisms like Hebbian plasticity to associate sensory inputs with specific internal states, as in feedforward structures such as the CA1 region of the hippocampus. When synaptic weights are modified according to a Hebbian rule, a sensory input can subsequently elicit an output that consistently reflects its corresponding internal state. As long as input and output patterns are uncorrelated, this scheme can encode a large number of distinct associations, enabling efficient memory storage.
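To make this concrete, here is a minimal sketch of such an associative store (a toy illustration of ours, not the authors' model: the layer size, number of pattern pairs, sparse 0/1 coding level, and winner-take-all readout below are all assumptions). With uncorrelated patterns, a simple Hebbian outer-product rule lets each stored input recall its associated output almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, f = 500, 20, 0.1               # neurons per layer, stored pairs, coding level (toy values)

X = (rng.random((P, N)) < f).astype(float)   # sparse, mutually uncorrelated input patterns
Y = (rng.random((P, N)) < f).astype(float)   # sparse, mutually uncorrelated output patterns

# Hebbian outer-product rule: potentiate every synapse whose pre- and
# postsynaptic neurons are co-active in some stored pattern pair.
W = Y.T @ X

def recall(W, x, k):
    """Read out the k output neurons receiving the strongest feedforward drive."""
    y = np.zeros(W.shape[0])
    y[np.argsort(W @ x)[-k:]] = 1.0
    return y

k = int(f * N)
R = np.array([recall(W, X[p], k) for p in range(P)])

# Fraction of each recalled pattern that matches its stored output pattern.
quality = np.mean([R[p] @ Y[p] / k for p in range(P)])
print(f"mean recall accuracy with {P} uncorrelated pairs: {quality:.2f}")
```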
However, our research uncovered a critical limitation. We found that when output patterns become weakly correlated with input patterns through the feedforward network's intrinsic connectivity, the Hebbian rule preferentially strengthens the synaptic weights shared across patterns. This leads to a detrimental 'freezing' of the network's structure: outputs become highly correlated over time, which reduces the network's capacity to store diverse associations and limits its flexibility in learning.
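The same toy sketch can illustrate this kind of failure under one assumed source of shared structure (nonnegative firing rates, rather than the network-generated correlations analyzed in the paper): every stored pair potentiates synapses between neurons that are active on average, so a shared weight component accumulates as more associations are stored, recall accuracy drops, and the recalled outputs become far more similar to one another than chance.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, f = 500, 500, 0.1              # same toy setup as above, but many more stored pairs

X = (rng.random((P, N)) < f).astype(float)
Y = (rng.random((P, N)) < f).astype(float)

# Plain Hebbian rule on nonnegative rates: the weight component shared across
# patterns (active-on-average inputs onto active-on-average outputs) grows with
# every stored pair and eventually dominates the matrix.
W = Y.T @ X

def recall(W, x, k):
    y = np.zeros(W.shape[0])
    y[np.argsort(W @ x)[-k:]] = 1.0
    return y

k = int(f * N)
R = np.array([recall(W, X[p], k) for p in range(P)])

quality = np.mean([R[p] @ Y[p] / k for p in range(P)])
overlaps = (R @ R.T) / k                         # pairwise overlaps between recalled outputs
cross = overlaps[~np.eye(P, dtype=bool)].mean()

print(f"mean recall accuracy:                  {quality:.2f}")
print(f"mean overlap between distinct recalls: {cross:.2f}  (chance ~ {f:.2f})")
```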
To overcome this challenge, we investigated the role of locally balanced inhibition, a mechanism that has been shown to be a key feature of cortical circuits in vivo. As lead author Cecchini notes in the paper, "By dynamically regulating inhibitory input, locally balanced inhibition prevents the over-strengthening of shared weights, restoring the network's ability to maintain robust and flexible learning." This intervention counteracts the undesired correlations and thereby thwarts the 'freezing' effect. The discovery not only offers a solution to a fundamental limitation of associative learning models but also underscores the importance of inhibitory mechanisms in the brain's efficient and adaptive information processing, providing insight into how biological networks maintain their remarkable capacity for associative learning.
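To make the quoted mechanism concrete, here is one way to abstract balanced inhibition within the same toy sketch (an assumption for illustration, not the authors' implementation): let inhibition subtract the mean activity from the pre- and postsynaptic rates that drive plasticity, so that only deviations from the average are imprinted. At the same heavy loading that defeated the plain rule above, selective recall is largely restored and recalls for different inputs are no more similar than chance.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, f = 500, 500, 0.1              # same heavy loading that defeated the plain Hebbian rule

X = (rng.random((P, N)) < f).astype(float)
Y = (rng.random((P, N)) < f).astype(float)

# Inhibition-balanced plasticity (one possible abstraction, not the paper's model):
# subtract the mean activity f from pre- and postsynaptic rates, so synapses
# shared across many patterns are no longer systematically potentiated.
W_bal = (Y - f).T @ (X - f)

def recall(W, x, k):
    y = np.zeros(W.shape[0])
    y[np.argsort(W @ x)[-k:]] = 1.0
    return y

k = int(f * N)
R = np.array([recall(W_bal, X[p], k) for p in range(P)])

quality = np.mean([R[p] @ Y[p] / k for p in range(P)])
overlaps = (R @ R.T) / k
cross = overlaps[~np.eye(P, dtype=bool)].mean()

print(f"mean recall accuracy with balanced rule: {quality:.2f}")
print(f"mean overlap between distinct recalls:   {cross:.2f}  (chance ~ {f:.2f})")
```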