Cognitive Neuroscience: Memorable first impressions

Our ability to recall details from a remembered image depends on a single mechanism that is engaged from the very moment the image disappears from view.
  1. Emilio Salinas (corresponding author)
  2. Bashirul I Sheikh
  1. Department of Translational Neuroscience, Wake Forest University School of Medicine, United States
  2. Neuroscience Graduate Program, Wake Forest University School of Medicine, United States

Look out the window and see what stands out. Perhaps you notice some red and pink azaleas in full bloom. Now close your eyes and picture that scene in your mind. Initially, the colors and silhouettes linger vividly, but the details wither rapidly, leaving only a faded version of the image. As time passes, the accuracy with which an image can be recalled drops abruptly.

Memory is a critical, wonderful, multifaceted mental capacity that relies on many structures and mechanisms throughout the brain (Baddeley, 2003; Squire and Wixted, 2011; Schacter et al., 2012). This is not surprising, given the diversity of timescales and data types – such as images, words, facts and motor skills – that we can remember. Studies have shown that our visual memories are strongest immediately after an image disappears, remaining reliable for about half a second. This has traditionally been attributed to ‘iconic’ memory, which is thought to rely on a direct readout of stimulus-driven activity in visual circuits in the brain. In this case, the memory remains vivid because, after the stimulus (i.e., the image) has been removed, the visual activity takes some time to decay (Sperling, 1960; Pratte, 2018; Teeuwen et al., 2021).

In contrast, recalling an image a second or so after it has disappeared engages a different type of memory – visual working memory – that relies on information stored in different circuits in the frontal lobe (Pasternak and Greenlee, 2005; D’Esposito and Postle, 2015). Although not as vivid, the stored image remains stable for much longer. This robustness comes at a cost, however: the storage capacity of visual working memory is more limited, so fewer items and less detail can be recalled from a remembered image. Together, these findings led to the idea that there are two distinct short-term memory mechanisms. Now, in eLife, Ivan Tomić and Paul Bays report strong evidence indicating that iconic memory and visual working memory are part of the same recall mechanism (Tomić and Bays, 2023).

Tomić and Bays – who are based at the University of Zagreb and the University of Cambridge – first constructed a detailed computational model to describe how sensory information is passed to a visual working memory circuit for storage and later recall (Figure 1). In this model, visual neurons respond to the presentation of an image containing a few items. This stimulus causes sensory activity to rise smoothly while the input lasts, and to decay once the stimulus ceases, consistent with previous experiments (Teeuwen et al., 2021). This sensory response then drives a population of visual working memory neurons that can sustain their activity in the absence of a stimulus, although this activity will eventually be corrupted due to noise (Wimmer et al., 2014; DePasquale et al., 2018). An important feature of the model is that each remembered item is allocated an equal fraction of the maximum possible working memory activity.
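The encoding dynamics described above can be illustrated with a minimal sketch. This is not the authors' code: it assumes simple first-order kinetics (sensory activity relaxing toward a maximum while the stimulus is on and decaying afterwards, with each item's working memory activity driven by the sensory trace toward a shared saturation limit of V_MAX / N), and all constants and function names are illustrative. Noise is omitted for clarity.

```python
# Illustrative sketch of the model's encoding phase (assumed dynamics,
# not the published implementation).

DT = 0.001          # integration time step (s)
TAU = 0.05          # sensory rise/decay time constant (s)
V_MAX = 1.0         # total working-memory resource available
K = 20.0            # coupling from sensory drive to working memory

def simulate(n_items, stim_dur, total_dur):
    """Return (sensory, per-item VWM) traces sampled every DT."""
    limit = V_MAX / n_items        # equal share per remembered item
    s, v = 0.0, 0.0
    s_trace, v_trace = [], []
    t = 0.0
    while t < total_dur:
        drive = 1.0 if t < stim_dur else 0.0
        s += DT * (drive - s) / TAU      # rise while stimulus on, decay after
        v += DT * K * s * (limit - v)    # saturate at the per-item share
        s_trace.append(s)
        v_trace.append(v)
        t += DT
    return s_trace, v_trace

# Two items, 200 ms stimulus: VWM activity per item approaches V_MAX / 2.
s_trace, v_trace = simulate(n_items=2, stim_dur=0.2, total_dur=1.0)
```

Running this shows the qualitative behavior in Figure 1: the sensory trace decays back to baseline after the stimulus, while each item's working memory activity settles near its equal share of the resource.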

Timeline of events during stimulus presentation and storage.

A visual stimulus (grey box containing pattern) with N items is presented for a period of time (pale blue region). Sensory activity increases to a maximum value during this period, and then decays when the stimulus disappears. For each item, VWM activity also increases towards an effective saturation limit, which is the maximum possible value divided by the number of items presented: here N=2, so the effective saturation limit is half the maximum possible value. When the target item is cued (black arrow; top) at a later time (yellow region), the non-target item(s) are removed from memory (grey trace), and activity associated with the target item (green trace) increases towards the maximum possible value. The level of activity (and hence the accuracy of memory recall) will vary more and more over time due to noise. VWM: visual working memory.

Image credit: Adapted from Figure 2 in the paper by Tomić and Bays, 2023.

The model constructed by Tomić and Bays can make specific testable predictions. For example, it predicts that if an item is cued for later recall while the sensory signal is still present, the working memory activity associated with the non-targets will decay rapidly, freeing up resources and thus increasing the working memory activity associated with the cued item. This leads to more accurate recall of the item. In contrast, if an item is cued for later recall once the sensory signal has approached zero, this ‘boost’ does not happen, and the item is not recalled as accurately. In addition, the working memory activity should increase with longer exposure to the stimulus and should decrease as the number of remembered items increases.
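The cue-timing prediction can be sketched in the same assumed framework (again with illustrative constants, not the authors' implementation): cueing the target while the sensory trace is still strong frees the non-targets' share and lets the target's activity climb toward the full resource limit, whereas a late cue arrives after the sensory drive has vanished, so no boost occurs.

```python
# Hedged sketch of the cue-timing prediction under the same assumed
# first-order dynamics; constants are illustrative.

DT, TAU, V_MAX, K = 0.001, 0.05, 1.0, 20.0

def recall_activity(n_items, stim_dur, cue_time, total_dur):
    """Final VWM activity of the cued (target) item."""
    s, v = 0.0, 0.0
    t = 0.0
    while t < total_dur:
        drive = 1.0 if t < stim_dur else 0.0
        s += DT * (drive - s) / TAU
        # Before the cue the target holds an equal share; afterwards the
        # resource freed by dropping non-targets raises its limit to V_MAX.
        limit = V_MAX / n_items if t < cue_time else V_MAX
        v += DT * K * s * (limit - v)
        t += DT
    return v

# Cue during the stimulus vs. long after the sensory trace has decayed.
early = recall_activity(n_items=2, stim_dur=0.2, cue_time=0.1, total_dur=1.5)
late = recall_activity(n_items=2, stim_dur=0.2, cue_time=1.0, total_dur=1.5)
```

An early cue leaves the target activity well above its equal share, while a late cue leaves it pinned near V_MAX / N: the boost requires residual sensory drive, exactly the asymmetry the experiments test.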

These predictions were confirmed through experiments with humans. Participants were shown visual stimuli while several factors were varied, including the number of items to be remembered, the duration of the stimulus, the time at which the item to be recalled was identified, and the time of the actual recall. The results of these experiments are consistent with the notion that, during recall, visual information is always read out from the same population of neurons.

The findings of Tomić and Bays are satisfying for their simplicity; what seemed to require two separate mechanisms is explained by a single framework aligned with many prior studies. However, models always require simplifications and shortcuts. For instance, much evidence indicates that both frontal lobe circuits and sensory areas contribute to the self-sustained maintenance of activity that underlies the short-term memory of sensory events (Pasternak and Greenlee, 2005). Therefore, visual working memory is likely the result of continuous recurrent dynamics across areas (DePasquale et al., 2018; Stroud et al., 2024). Furthermore, there is still debate about the degree to which visual working memory implies equal sharing of resources, as opposed to some items receiving larger or smaller shares (Ma et al., 2014; Pratte, 2018). Nevertheless, the proposed model is certainly an important advance that future studies can build upon.

Article and author information

Author details

  1. Emilio Salinas

    Emilio Salinas is in the Department of Translational Neuroscience, Wake Forest University School of Medicine, Winston-Salem, United States

    For correspondence
    esalinas@wakehealth.edu
    Competing interests
    No competing interests declared
    ORCID: 0000-0001-7411-5693
  2. Bashirul I Sheikh

    Bashirul I Sheikh is in the Neuroscience Graduate Program, Wake Forest University School of Medicine, Winston-Salem, United States

    Competing interests
    No competing interests declared
    ORCID: 0000-0002-3987-3891


Copyright

© 2024, Salinas and Sheikh

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

Emilio Salinas, Bashirul I Sheikh (2024) Cognitive Neuroscience: Memorable first impressions. eLife 13:e98274. https://doi.org/10.7554/eLife.98274
