Research News
Less is more: Why sparse brain connections make learning more efficient
In the 2014 film “Lucy”, the protagonist gains superhuman intelligence by unlocking 100% of her brain’s potential. While this makes for thrilling cinema, it misrepresents how our brains actually work. The human brain has about 86 billion neurons, yet they are not densely connected – in fact, less than 1% of potential connections actually form. This “sparse connectivity” may seem inefficient, but a new study by Fruengel and Oberlaender, published in Frontiers in Neural Circuits, suggests that this apparent inefficiency might actually be a feature rather than a flaw.
We often imagine the brain as a mass of tightly interconnected neurons. However, research has shown that neurons in cortical networks are very sparsely connected; even neurons whose axons and dendrites overlap are highly unlikely to form a synaptic connection (see Udvary et al.). But what is the functional relevance of such sparse connectivity? Scientists at our In Silico Brain Sciences Lab, led by Marcel Oberlaender, challenge the preconception that more connections lead to better learning.
In conventional artificial neural networks (ANNs), sparse connectivity has been shown to impair information processing. But does this imply that the same is true in biological networks? Is the brain really working at a disadvantage due to its sparsity? Although ANNs were originally inspired by the brain, conventional ANNs differ significantly in their structural architecture from cortical networks. Using ANNs inspired by real cortical structures, the researchers found that sparse connectivity – rather than slowing down processing – actually enhances efficiency. Compared to densely connected networks, large, sparse and recurrent networks – akin to those in the brain – can learn faster, require less data, and adapt better when neurons misfire. Digging deeper into this surprising result, they discovered that dense ANNs distribute information across only a very small fraction of nodes, whereas sparse ANNs spread information more broadly, increasing robustness and adaptability. This effect is particularly pronounced in networks that mirror the neuronal cell types found in the cortex.
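To make the idea of sparse recurrent connectivity concrete, here is a minimal, purely illustrative sketch (not the authors' actual model): a rate-based recurrent layer whose weight matrix is masked so that only about 1% of potential connections exist, loosely mirroring the cortical sparsity described above. All names and parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 200
sparsity = 0.01  # assume ~1% of potential connections form, as in cortex

# Binary mask: each potential connection exists with probability `sparsity`,
# so the vast majority of entries in the weight matrix are exactly zero.
mask = rng.random((n_neurons, n_neurons)) < sparsity
weights = rng.standard_normal((n_neurons, n_neurons)) * mask

def recurrent_step(state, inputs):
    """One update of a simple rate-based recurrent network."""
    return np.tanh(weights @ state + inputs)

# Drive the network with random input for one step.
state = np.zeros(n_neurons)
inputs = rng.standard_normal(n_neurons)
state = recurrent_step(state, inputs)

print(f"Fraction of connections present: {mask.mean():.3f}")
```

In a dense network the mask would be all ones; the study's comparison amounts to asking how such networks behave during learning as this mask becomes sparser while the network grows larger.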
This insight reveals that, in fact, connectivity in the brain may be perfectly suited to efficient learning, and that using ANNs is a promising approach for investigating the relevance of even more complex features of brain connectivity. It also challenges the standard practice in artificial intelligence: building dense connections, under the assumption that this leads to better performance. Instead, the findings demonstrate that dense connections can actually slow down learning processes. This could have implications for the future design of recurrent artificial neural networks, making them more biologically inspired and presumably more computationally efficient. As first author Rieke Fruengel summarizes: “The brain’s sparse connectivity is not a limitation – it’s an optimization!”
The study was published in Frontiers in Neural Circuits on 13 March 2025. Read the publication.