Like drinking from a firehose, our brains must sip carefully and selectively from the deluge of information that characterizes our ongoing experiences. In our paper, published in the current issue of Nature Human Behaviour, we propose a mathematical framework for characterizing how our brains transform and distill our complex ongoing experiences into usable memories. Our framework defines a way of representing complex experiences and memories as geometric shapes that reflect how they unfold over time, and how different parts of the experience (or memory) relate. We use this framework to characterize how people’s brains distort ongoing experiences in the here and now, in a way that reflects how they will remember “now” later.
Our fundamental approach is inspired by a rapidly growing sub-field of computer science called natural language processing. The goal of this research area is to develop tools for mining text and other linguistic data for potentially useful or informative patterns. One set of tools, called “text embedding models,” may be used to represent the conceptual content of text (individual words, sentences, entire documents, etc.) as “feature vectors”. Each feature vector is a long sequence of numbers that corresponds to a single point in a high-dimensional mathematical space. The idea is to assign conceptually related texts to nearby coordinates (or similar feature vectors) in the space.
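To make the idea of "nearby coordinates for related texts" concrete, here is a minimal toy sketch, assuming a tiny hand-picked vocabulary and raw word counts as the feature vector. Real text embedding models learn thousands of dimensions from enormous corpora, so this is an illustration of the geometry, not the paper's actual model:

```python
from collections import Counter
import math

def embed(text, vocab):
    """Toy text embedding: one word count per vocabulary entry."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity: near 1.0 for similar vectors, 0.0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# A hypothetical five-word vocabulary, chosen for illustration only.
vocab = ["detective", "crime", "scene", "tea", "kitchen"]
a = embed("the detective examined the crime scene", vocab)
b = embed("a detective at a crime scene", vocab)
c = embed("tea brewed in the kitchen", vocab)

sim_ab = cosine(a, b)  # conceptually related -> nearby points in the space
sim_ac = cosine(a, c)  # conceptually unrelated -> distant points
```

Even with this crude embedding, the two crime-scene sentences land at nearly identical coordinates, while the kitchen sentence lands far away, which is the property the real models deliver at scale.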
In our paper, we computed text embeddings for annotations describing a thousand brief (2–4 second) segments in a roughly 50-minute excerpt from the pilot episode of the BBC television show Sherlock. The episode’s “shape” emerged when we treated the coordinates of each segment like a “connect-the-dots” puzzle by drawing lines between each successive segment’s annotations. We examined data collected in a previous study that had 17 participants watch the episode and then verbally recount what had happened, all while their brains were being imaged. When we computed text embeddings for each small bit of the transcript of each person’s recounting of the episode, and then used our connect-the-dots trick on the result, we were left with the “shapes” of how people remembered the episode they had watched. We could then compare the shape of the original episode with the shapes of how different people recounted the episode.
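The "connect-the-dots" step above can be sketched in a few lines. This uses made-up 2-D points in place of the real high-dimensional embeddings, so it illustrates the construction rather than reproducing the paper's analysis:

```python
import math

def step_lengths(points):
    """'Connect the dots': distances between successive segments' points,
    i.e., how far the content moves from one moment to the next."""
    return [math.dist(p, q) for p, q in zip(points, points[1:])]

# Hypothetical 2-D coordinates for four successive annotated segments;
# the real embeddings live in a much higher-dimensional space.
segments = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (3.0, 1.0)]
steps = step_lengths(segments)
```

The polyline through these points is the "shape": small steps correspond to moments where the content changes little, and large steps to conceptual jumps between segments.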
Several intriguing and potentially far-reaching findings emerged. First, we noticed that the basic features of everyone’s “memory shapes” looked grossly similar, and also matched the basic features of the episode’s shape. That told us that all 17 participants accurately recounted the major plot points of the episode. However, when we zoomed in to examine people’s memory shapes in more detail, we also noticed that each person appeared to distort the shape of the episode in a unique way. That told us that each participant was remembering, omitting, and/or distorting a unique set of low-level details about the episode. Our findings suggest that people’s memory systems place different importance on different types of information. Whereas everyone appeared to prioritize accurately remembering the high-level conceptual content of the episode, people differed substantially in how (or whether) they remembered low-level details.
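One simple way to compare an episode's shape with a memory's shape is to correlate their pairwise-distance structures, so that two trajectories count as similar when their points relate to one another in the same way. This is a hedged sketch of that general idea with hypothetical 2-D points, not the paper's exact analysis:

```python
import math
from itertools import combinations

def shape_signature(points):
    """All pairwise distances among a trajectory's points: a summary of its
    shape that ignores where the trajectory sits in the space."""
    return [math.dist(p, q) for p, q in combinations(points, 2)]

def pearson(x, y):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical trajectories: the episode, a faithful recounting, and a
# recounting that distorts the episode's structure.
episode   = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
faithful  = [(0.0, 0.0), (1.1, 0.1), (1.0, 1.1), (2.1, 0.9)]
distorted = [(0.0, 0.0), (0.0, 1.0), (2.0, 0.0), (1.0, 2.0)]

sim_faithful = pearson(shape_signature(episode), shape_signature(faithful))
sim_distorted = pearson(shape_signature(episode), shape_signature(distorted))
```

In this toy example the faithful recounting's signature correlates strongly with the episode's, while the distorted one does not, mirroring how individual memory shapes can be scored against the original experience.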
We also looked at people’s brain responses as they were watching the episode. A network of brain regions called the anterior temporal system responded in a way that tracked with the episode’s shape, that is, with how the conceptual content of the episode unfolded over time. A second network of brain regions, called the posterior medial system, tracked with the idiosyncratic ways that each participant would recount the episode later. In other words, this network seems to distort each person’s ongoing experiences. In our paper, we suggest that these two networks might work in coordination with each other to make sense of our ongoing experiences, distort them in a way that links what we are experiencing now with what we have experienced in the past, and encode these distorted versions of “now” into the memories that we rely on later.
One important advance that our study brings to the field of learning and memory is a fundamentally new approach to studying memory for “naturalistic” stimuli or experiences. Our text embedding framework can be used to study how our memory systems learn and remember from videos (as in the current study), as well as from other complex episodes like stories or real-life experiences. This is a marked departure from more traditional approaches, used widely over the past century, that test people’s memory for highly simplified (and potentially less true-to-life) stimuli such as sequences of words or images. By defining a general framework for building explicit models of complex naturalistic experiences and memories, our work provides researchers with a new suite of tools for studying “real-world” memory.
In addition to providing some new insights into how we process and remember complex experiences, we view our work as a stepping stone to a broader set of questions about how we communicate with other people, and how our brains acquire new information and knowledge. For example, we hypothesize that two people who preserve or distort some aspects of a shared experience in similar ways might be able to communicate more effectively about those events. We also hope to understand how the shapes of our memories change over time, preserving or distorting particular high- or low-level details. Our study suggests that high-level conceptual information is less prone to distortions across individuals (compared with low-level details). We hypothesize that the high-level conceptual properties of our experiences might also be more robust to forgetting or memory errors that increase with the time elapsed since the original experience.
Our memories shape how we think of ourselves and others, how we choose to act or interact, and how we grow our minds. Improving our understanding of how our ongoing experiences are processed, encoded into memories, and retrieved when needed is an essential part of understanding what makes us us.