Why is it so hard to remember just a few objects for 1 second?

Visual working memory capacity is severely limited and related to important cognitive factors, like fluid intelligence. But what makes it so limited? We find that limits are caused not by an upper bound on how many items can be remembered, but by stochastic, noisy memories for all items.

We often feel as if we have a complete representation of the visual scene in front of us, such that if we closed our eyes, we could remember most of what we were looking at. But this intuition is wrong: if you tried to hold in mind just a few simple objects — say, 8 differently colored circles — for just 1 second, you would perform very poorly when asked to reproduce the color of one of them, even though 8 colored circles is far simpler than a real scene. These severe visual working memory capacity limits are interesting not only because of how different they seem from our everyday experience of the world, but also because this capacity is strongly related to educational attainment, fluid intelligence, and other important cognitive skills.

Why do people perform so poorly when asked to remember just a few simple objects? The dominant theories argue that this is because working memory is fundamentally limited to representing, at most, 3 to 4 objects. On these accounts, when we try to remember 8 objects, we simply lose access to 4 or 5 of them entirely, as though they had never been seen or processed by our visual system at all.

While models like this, in which items completely disappear and leave no trace, are popular in visual working memory research, they are nearly unheard of in closely related domains like perception and long-term memory. Instead, most perception and memory researchers are guided by signal detection theory, which holds that our memories, and our decisions about our memories, are inherently stochastic and noisy. Sometimes our memory is strong and we are highly confident; sometimes it is weak and we feel as though we have no information at all. The crucial difference is that memory strength is continuous: we retain at least some information about every item we have seen and never completely lose all information about any of them.
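
To make the contrast concrete, here is a minimal sketch of the standard signal detection picture, in which memory strength is a continuous, noisy quantity rather than all-or-none. This is not the paper's code; the d' value and decision criterion below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Minimal signal detection sketch (illustrative values, not the paper's code):
# every studied item has some continuous, noisy memory strength, so even weakly
# remembered items carry information rather than being lost entirely.

d_prime = 1.5            # assumed mean strength boost for studied vs. unstudied items
criterion = d_prime / 2  # unbiased "call it old" threshold

# Probability a studied item exceeds the criterion (hit) vs. an unstudied one (false alarm)
hit_rate = norm.sf(criterion, loc=d_prime, scale=1.0)
false_alarm_rate = norm.sf(criterion, loc=0.0, scale=1.0)
print(f"hit rate: {hit_rate:.2f}, false-alarm rate: {false_alarm_rate:.2f}")

# Even studied items that fall below the criterion (and so are called "new")
# still have, on average, more strength than unstudied items (whose mean is 0):
# the information is degraded by noise, not deleted.
rng = np.random.default_rng(0)
studied_strengths = d_prime + rng.standard_normal(100_000)
missed = studied_strengths[studied_strengths < criterion]
print(f"mean strength of missed studied items: {missed.mean():.2f}")
```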

Memory for a colored object, like the color of a couch, may best be thought of like this colorful abacus: an entire pattern of activity over many different color 'channels'.

Our paper expands this signal detection approach to visual working memory. Using a computational model and 13 different experiments, we show how a remarkably simple signal detection approach can account for existing visual working memory data without any need to appeal to people 'guessing' or representing only 3 to 4 items. We also show that this new model makes novel predictions that are inconsistent with the theories that have dominated working memory research over the past decade (Schurgin et al., 2020). We suggest that rather than thinking of memory as all-or-none, with some items lost completely, we should think of memory for color, for example, as a noisy pattern of neural activity across a large number of color 'channels'. This perspective provides a natural connection to neural models based on population coding, in addition to connecting straightforwardly to signal detection models of perception and long-term memory. An intriguing implication of our model is that perception, working memory, and long-term memory are more closely related than previously theorized. The model is explained in more detail with an interactive tutorial here.
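
To give a feel for this 'pattern of activity' view, here is a hedged sketch (not the authors' implementation) of a remembered color encoded as noisy activity across broadly tuned color channels, in the spirit of population coding. The channel count, tuning width, and noise level are assumptions chosen only for illustration.

```python
import numpy as np

# A remembered color as a noisy pattern of activity over many broadly tuned
# color channels (illustrative sketch; parameters are assumed values).

rng = np.random.default_rng(1)

n_channels = 60
preferred_hue = np.linspace(0, 360, n_channels, endpoint=False)  # each channel's preferred hue (deg)
tuning_width = 30.0                                              # assumed tuning width (deg)

def channel_responses(hue):
    """Mean response of every channel to a given hue, with circular tuning."""
    diff = np.abs((preferred_hue - hue + 180) % 360 - 180)       # circular hue distance
    return np.exp(-0.5 * (diff / tuning_width) ** 2)

def decode(activity):
    """Simple readout: report the preferred hue of the most active channel."""
    return preferred_hue[activity.argmax()]

studied_hue = 120.0
noisy_memory = channel_responses(studied_hue) + 0.3 * rng.standard_normal(n_channels)
print(f"studied: {studied_hue:.0f} deg, decoded from noisy channels: {decode(noisy_memory):.0f} deg")
```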

Interestingly, we derived this model in, effectively, one afternoon — a rare example of a 'light bulb' moment in science. Once we realized that continuous reproduction could be conceived of as a 360-alternative forced-choice task, and thought about standard signal detection models of such tasks, we quickly noticed that in a continuous color space it would not be only the studied color that felt more familiar, but similar colors as well — and an hour later we made the connection to the exponential-like fall-off in similarity that forms the core of the model. This work thus represents a rare 'perfect collaboration': each of the three of us had unique information that, when put together, resulted in a completely new way of thinking about memory.
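
A rough simulation of that idea, as described above, might look like the following: continuous report is treated as a 360-alternative forced choice in which every color option generates a noisy familiarity signal whose mean falls off exponentially with distance from the studied color, and the reported color is simply the one with the strongest signal. The fall-off constant and the d' values here are illustrative assumptions, not fitted parameters from the paper.

```python
import numpy as np

# Continuous color report as a 360-AFC signal detection task (illustrative sketch).

rng = np.random.default_rng(0)

n_colors = 360
# Circular distance (deg) of each color option from the studied color at 0 deg
distance = np.minimum(np.arange(n_colors), n_colors - np.arange(n_colors))

# Exponential-like fall-off in similarity with color distance
# (the 20-deg fall-off constant is an assumed value for illustration).
similarity = np.exp(-distance / 20.0)

def simulate_reports(d_prime, n_trials=10_000):
    """Each color gets a noisy familiarity signal with mean d_prime * similarity;
    the report is the color with the strongest signal (max rule)."""
    familiarity = d_prime * similarity + rng.standard_normal((n_trials, n_colors))
    reported = familiarity.argmax(axis=1)
    return (reported + 180) % 360 - 180   # signed error in degrees

errors_weak = simulate_reports(d_prime=1.0)    # e.g., many items / weak memory
errors_strong = simulate_reports(d_prime=3.0)  # e.g., few items / strong memory
print("mean |error|, weak memory:", np.abs(errors_weak).mean())
print("mean |error|, strong memory:", np.abs(errors_strong).mean())
```

With only d' varying, a simulation along these lines tends to produce the kind of long-tailed error distributions that are often attributed to 'guessing', without any item ever being dropped from memory.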

Paper: 

Schurgin, M. W., Wixted, J. T., & Brady, T. F. (2020). Psychophysical scaling reveals a unified theory of visual memory strength. Nature Human Behaviour. https://doi.org/10.1038/s41562-020-00938-0
