Random fluctuations in neural connections should erase our memories. Why don't they?

About fifteen years ago, I started reading reports about neural connections (spines and synapses) showing spontaneous large-scale random fluctuations. "This can never be," I thought, "because that's how we erase memories in neural network models to simulate amnesia! It will surely not be replicated."

"But they did replicate it!" It is now routinely found that even on a timescale of minutes, synapses in a living animal can be observed to change size considerably and without apparent cause. Roughly 20% of the neural connections in the brain are very small and may appear and disappear within a day. But also the larger synapses are changing randomly, albeit at a slower pace. If this is the mechanism used to simulate amnesia in our neural network models, why do we still remember anything at all?

To save our memory from constant erasure, one need only assume that the random neural fluctuations also give rise to a certain proportion of large spines (and synapses) that remain stable for a long time (Figure 1.a). These large spines change far less than the small spines, and it is these that are assumed to encode our long-term memories and knowledge-of-the-world.

To analyze whether this mechanism could in fact work and give plausible predictions, I implemented it in a mathematical model. While doing so, to my surprise, it became clear that the massive random fluctuations in neural connections may in fact implement a type of long-term consolidation mechanism. Let me explain: a newly learned neural memory is a bit like a very young forest. There are many weak and small seedling trees (cf. very small spines, Figure 1.b). Over a long time period, most of the seedlings will wither, while a few will grow out to eventually become large trees (cf. large 'mushroom' spines, Figure 1.c) that may be around for a long time.

Figure 1

The details of the model are described in the Methods section of the paper (with even more details online), but it is perfectly possible to understand the mechanism and its implications intuitively, without understanding any of the math.
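
For readers who would nonetheless like a taste of the dynamics, here is a toy Python sketch. To be clear, this is a much-simplified caricature I wrote for this post, not the model from the paper: spine sizes take a small random step each day, a spine that shrinks to zero has withered, and spines above an arbitrary size threshold are assumed to fluctuate far less. All numbers are invented for illustration.

```python
import random

# A toy sketch, not the paper's actual model: each spine's size takes a small
# random step per day; shrinking to zero means the spine has withered, and
# spines above an arbitrary size threshold are assumed to be far more stable.
THRESHOLD = 5.0  # size above which a spine counts as "large" (assumption)

def simulate_cohort(n_spines=1000, days=365, seed=0):
    rng = random.Random(seed)
    spines = [1.0] * n_spines  # a cohort of small "seedling" spines
    for _ in range(days):
        for i, size in enumerate(spines):
            if size <= 0.0:
                continue  # withered: gone for good
            sigma = 0.3 if size < THRESHOLD else 0.05  # large = more stable
            spines[i] = max(0.0, size + rng.gauss(0.0, sigma))
    alive = [s for s in spines if s > 0.0]
    large = [s for s in alive if s >= THRESHOLD]
    return len(alive), len(large)

alive, large = simulate_cohort()
print(f"after one year: {alive}/1000 spines survive, {large} of them large")
```

Over a simulated year, most of the cohort withers while a minority reaches the large, near-frozen regime; that minority is the forest of big trees.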

What I like about the model is that it helps us to see what at first sight seems to be a great weakness (i.e., the constant random 'noise' in the brain's connections) as a possibly powerful mechanism. Long-term memories and our knowledge-of-the-world can be encoded by relatively few but strong and stable neural connections, leaving many neural resources free to encode future memories. As I show in the paper, it also gives a plausible account of the shape of learning and forgetting curves, such as Hermann Ebbinghaus' famous forgetting curve from 1885. And it explains why, with loss of existing memories (retrograde amnesia), mainly the most recent memories are lost while older memories are preserved. This was first reported by Théodule Ribot in 1881, and it is, for example, commonly seen with progressive Alzheimer's dementia. These patients no longer know what happened in the past few years, but may still vividly remember events from before that time. The model explains this because recent memories are still encoded largely by smaller connections, and these are most susceptible to the damaging effects of the dementia. In the paper, I simulate how the resulting patterns of memory loss may span many years.
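
The Ribot gradient can be caricatured with the same toy dynamics. In the sketch below (again a much-simplified illustration with invented parameters, not the simulations from the paper), memories of different ages are each encoded by a cohort of spines, and 'damage' then spares 90% of the large spines but only 10% of the small ones.

```python
import random

# A toy illustration of the Ribot gradient (invented parameters, not the
# paper's simulations): older traces have had more time to consolidate into
# large spines, which the damage below is assumed to mostly spare.
THRESHOLD = 5.0
rng = random.Random(1)

def trace(age_days, n_spines=500):
    """The surviving spines of one memory encoded `age_days` ago."""
    spines = [1.0] * n_spines
    for _ in range(age_days):
        for i, size in enumerate(spines):
            if size <= 0.0:
                continue
            sigma = 0.3 if size < THRESHOLD else 0.05
            spines[i] = max(0.0, size + rng.gauss(0.0, sigma))
    return [s for s in spines if s > 0.0]

for age in (30, 365, 3650):  # one month, one year, ten years old
    before = trace(age)
    # dementia-like damage: spare 90% of large spines, only 10% of small ones
    after = [s for s in before
             if rng.random() < (0.9 if s >= THRESHOLD else 0.1)]
    print(f"memory aged {age:5d} days: {len(before):3d} spines "
          f"-> {len(after):3d} after damage")
```

The youngest trace is the largest before damage, since ordinary forgetting has not yet thinned it, yet it loses almost everything; the decade-old trace, carried almost entirely by large spines, is barely touched.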

The model also gives a natural explanation of why you quickly forget the things learned in a whole day of cramming for an exam: if you cram, you are planting many seedlings at once, and you may eventually start running out of space, which will be reflected in less effective learning. Also, the new seedlings are quite weak, and while they may last long enough to get you a B+ on the exam the next day, many will wither quickly. But if you space your learning, you can keep planting new seedlings, each of which will have a chance to reach a strong, stable state. The resulting memory will therefore last longer. In psychology, this is known as the spacing advantage or as Jost's Law, dating from 1897.
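
The cramming-versus-spacing contrast can be caricatured the same way, with one extra assumption standing in for 'running out of space': each additional session on the same day plants only half as many new seedlings, while sessions on different days each plant a full cohort. The schedules, cohort sizes, and halving rule are all invented for illustration.

```python
import random

# A crude cramming-vs-spacing sketch, with one added assumption standing in
# for limited space: each extra session on the SAME day plants only half as
# many new seedlings, while sessions on different days plant a full cohort.
THRESHOLD = 5.0
rng = random.Random(2)

def grow(spines, days):
    """Advance all spines `days` steps of the toy fluctuation dynamics."""
    for _ in range(days):
        for i, size in enumerate(spines):
            if size <= 0.0:
                continue
            sigma = 0.3 if size < THRESHOLD else 0.05
            spines[i] = max(0.0, size + rng.gauss(0.0, sigma))
    return spines

def retention(session_days, per_session=100, test_day=365):
    """Count consolidated (large) spines at `test_day` for a study schedule."""
    spines, last_day, repeats = [], None, 0
    for day in session_days:
        if last_day is not None:
            spines = grow(spines, day - last_day)
            repeats = repeats + 1 if day == last_day else 0
        last_day = day
        spines += [1.0] * (per_session // 2 ** repeats)  # crowding penalty
    spines = grow(spines, test_day - last_day)
    return sum(1 for s in spines if s >= THRESHOLD)

print("crammed (3 sessions on day 0):      ", retention([0, 0, 0]))
print("spaced (sessions on days 0, 30, 60):", retention([0, 30, 60]))
```

With these made-up numbers, the spaced schedule ends the year with roughly twice as many consolidated spines, even though the crammed schedule briefly has more live seedlings on exam day.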

So, the massive random fluctuations observed in the brain's connections do not erase our memories (though they are a source of forgetting) but contribute to an efficient use of neural resources. The paper shows that they may also provide novel explanations for several important classic findings in memory psychology that date back to the nineteenth century but for which a neurobiological explanation has thus far been lacking, including the spacing advantage, the shape of learning and forgetting curves, and the pattern of memory loss in retrograde amnesia.
