Cognitive Neuroscience: Memorable first impressions
Look out the window and see what stands out. Perhaps you notice some red and pink azaleas in full bloom. Now close your eyes and picture that scene in your mind. Initially, the colors and silhouettes linger vividly, but the details wither rapidly, leaving only a faded version of the image. As time passes, the accuracy with which an image can be recalled drops abruptly.
Memory is a critical, wonderful, multifaceted mental capacity that relies on many structures and mechanisms throughout the brain (Baddeley, 2003; Squire and Wixted, 2011; Schacter et al., 2012). This is not surprising, given the diversity of timescales and data types – such as images, words, facts and motor skills – that we can remember. Studies have shown that our visual memories are strongest immediately after an image disappears, remaining reliable for about half a second. This has traditionally been attributed to ‘iconic’ memory, which is thought to rely on a direct readout of stimulus-driven activity in visual circuits in the brain. In this case, the memory remains vivid because, after the stimulus (i.e., the image) has been removed, the visual activity takes some time to decay (Sperling, 1960; Pratte, 2018; Teeuwen et al., 2021).
In contrast, recalling an image a second or so after it has disappeared engages a different type of memory – visual working memory – that relies on information stored in different circuits in the frontal lobe (Pasternak and Greenlee, 2005; D’Esposito and Postle, 2015). Although not as vivid, the stored image remains stable for much longer. This robustness comes at a cost, however: the storage capacity of visual working memory is more limited, so fewer items and less detail can be recalled from a remembered image. Together, these findings led to the idea that there are two distinct short-term memory mechanisms. Now, in eLife, Ivan Tomić and Paul Bays report strong evidence indicating that iconic memory and visual working memory are part of the same recall mechanism (Tomić and Bays, 2023).
Tomić and Bays – who are based at the University of Zagreb and the University of Cambridge – first constructed a detailed computational model to describe how sensory information is passed to a visual working memory circuit for storage and later recall (Figure 1). In this model, visual neurons respond to the presentation of an image containing a few items. This stimulus causes sensory activity to rise smoothly while the input lasts, and to decay once the stimulus ceases, consistent with previous experiments (Teeuwen et al., 2021). This sensory response then drives a population of visual working memory neurons that can sustain their activity in the absence of a stimulus, although this activity will eventually be corrupted due to noise (Wimmer et al., 2014; DePasquale et al., 2018). An important feature of the model is that each remembered item is allocated an equal fraction of the maximum possible working memory activity.
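The cascade described above can be sketched as a pair of leaky integrators: a sensory trace that rises while the stimulus is on and decays after offset, driving a working memory trace that persists but accumulates noise, with each item capped at an equal share of the total resource. This is a toy illustration with invented parameter values, not the authors' published model:

```python
import numpy as np

def simulate(stim_duration=0.5, total_time=2.0, dt=0.001,
             tau_sens=0.05, tau_wm=0.3, wm_max=1.0, n_items=4,
             noise_sd=0.02, seed=0):
    """Toy leaky-integrator sketch (all constants are arbitrary).
    Sensory activity rises while the stimulus is on and decays after
    offset; it drives a working memory (WM) trace that is sustained
    without input but slowly corrupted by noise. Each of n_items is
    allocated an equal fraction of the maximum WM resource."""
    rng = np.random.default_rng(seed)
    steps = int(total_time / dt)
    sens = np.zeros(steps)
    wm = np.zeros(steps)
    share = wm_max / n_items  # equal allocation per remembered item
    for t in range(1, steps):
        drive = 1.0 if t * dt < stim_duration else 0.0
        # sensory trace: leaky integration of the stimulus drive
        sens[t] = sens[t-1] + dt / tau_sens * (drive - sens[t-1])
        # WM trace: charges toward its per-item share while the
        # sensory signal lasts, then holds, drifting with noise
        target = min(sens[t], share)
        wm[t] = wm[t-1] + dt / tau_wm * max(target - wm[t-1], 0.0)
        wm[t] += noise_sd * np.sqrt(dt) * rng.standard_normal()
    return sens, wm
```

With these constants the sensory trace peaks near stimulus offset and vanishes within a few hundred milliseconds, while the WM trace plateaus near `wm_max / n_items` and merely drifts, mirroring the vivid-but-brief versus stable-but-coarse distinction drawn above.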
The model constructed by Tomić and Bays makes specific, testable predictions. For example, it predicts that if an item is cued for later recall while the sensory signal is still present, the working memory activity associated with the non-targets will decay rapidly, freeing up resources and thus increasing the working memory activity associated with the cued item. This leads to more accurate recall of the item. In contrast, if an item is cued for later recall once the sensory signal has approached zero, this ‘boost’ does not happen, and the item is not recalled as accurately. In addition, the working memory activity should increase with longer exposure to the stimulus and should decrease as the number of remembered items increases.
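One way to see the cue-timing prediction is as a back-of-the-envelope resource calculation: the resource freed by dropping non-targets can only be re-encoded into the cued item's trace to the extent that a sensory signal is still available at cue time. Again, this is a hypothetical sketch with invented numbers, not the authors' formulation:

```python
import math

def recalled_precision(n_items=4, wm_max=1.0, cue_time=0.2,
                       stim_duration=0.5, tau_sens=0.05):
    """Toy resource calculation for the cue-timing prediction.
    Before the cue, each item holds wm_max / n_items of the resource.
    At the cue, non-targets are dropped; the cued item absorbs the
    freed resource only in proportion to the sensory signal remaining
    at cue time. The return value stands in for recall precision,
    assumed proportional to the cued item's resource share."""
    base_share = wm_max / n_items
    # residual sensory drive at the moment of the cue: 1 during the
    # stimulus, exponential decay with constant tau_sens afterwards
    if cue_time <= stim_duration:
        sensory = 1.0
    else:
        sensory = math.exp(-(cue_time - stim_duration) / tau_sens)
    freed = wm_max - base_share   # resource released by non-targets
    boost = freed * sensory       # re-encodable only while signal lasts
    return base_share + boost
```

Under these assumptions, a cue arriving during the stimulus hands the cued item essentially the full resource, whereas a cue arriving long after stimulus offset leaves it with little more than its baseline share, qualitatively matching the boost described above.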
These predictions were confirmed through experiments with humans. Participants were shown visual stimuli while several factors were varied, including the number of items to be remembered, the duration of the stimulus, the time at which the item to be recalled was identified, and the time of the actual recall. The results of these experiments are consistent with the notion that, during recall, visual information is always read out from the same population of neurons.
The findings of Tomić and Bays are satisfying for their simplicity; what seemed to require two separate mechanisms is explained by a single framework aligned with many prior studies. However, models always require simplifications and shortcuts. For instance, much evidence indicates that both frontal lobe circuits and sensory areas contribute to the self-sustained maintenance of activity that underlies the short-term memory of sensory events (Pasternak and Greenlee, 2005). Therefore, visual working memory is likely the result of continuous recurrent dynamics across areas (DePasquale et al., 2018; Stroud et al., 2024). Furthermore, there is still debate about the degree to which visual working memory implies equal sharing of resources, as opposed to some items receiving larger or smaller shares (Ma et al., 2014; Pratte, 2018). Nevertheless, the proposed model is certainly an important advance that future studies can build upon.
References
- Baddeley (2003) Working memory: looking back and looking forward. Nature Reviews Neuroscience 4:829–839. https://doi.org/10.1038/nrn1201
- D’Esposito and Postle (2015) The cognitive neuroscience of working memory. Annual Review of Psychology 66:115–142. https://doi.org/10.1146/annurev-psych-010814-015031
- Pasternak and Greenlee (2005) Working memory in primate sensory systems. Nature Reviews Neuroscience 6:97–107. https://doi.org/10.1038/nrn1603
- Pratte (2018) Iconic memories die a sudden death. Psychological Science 29:877–887. https://doi.org/10.1177/0956797617747118
- Sperling (1960) The information available in brief visual presentations. Psychological Monographs 74:1–29. https://doi.org/10.1037/h0093759
- Squire and Wixted (2011) The cognitive neuroscience of human memory since H.M. Annual Review of Neuroscience 34:259–288. https://doi.org/10.1146/annurev-neuro-061010-113720
- Stroud et al. (2024) The computational foundations of dynamic coding in working memory. Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2024.02.011
- Teeuwen et al. (2021) A neuronal basis of iconic memory in macaque primary visual cortex. Current Biology 31:5401–5414. https://doi.org/10.1016/j.cub.2021.09.052
Copyright
© 2024, Salinas and Sheikh
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.