Cognitive Neuroscience: Memorable first impressions

Our ability to recall details from a remembered image depends on a single mechanism that is engaged from the very moment the image disappears from view.
  1. Emilio Salinas (corresponding author), Department of Translational Neuroscience, Wake Forest University School of Medicine, United States
  2. Bashirul I Sheikh, Neuroscience Graduate Program, Wake Forest University School of Medicine, United States

Look out the window and see what stands out. Perhaps you notice some red and pink azaleas in full bloom. Now close your eyes and picture that scene in your mind. Initially, the colors and silhouettes linger vividly, but the details wither rapidly, leaving only a faded version of the image. As time passes, the accuracy with which an image can be recalled drops abruptly.

Memory is a critical, wonderful, multifaceted mental capacity that relies on many structures and mechanisms throughout the brain (Baddeley, 2003; Squire and Wixted, 2011; Schacter et al., 2012). This is not surprising, given the diversity of timescales and data types – such as images, words, facts and motor skills – that we can remember. Studies have shown that our visual memories are strongest immediately after an image disappears, remaining reliable for about half a second. This has traditionally been attributed to ‘iconic’ memory, which is thought to rely on a direct readout of stimulus-driven activity in visual circuits in the brain. In this case, the memory remains vivid because, after the stimulus (i.e., the image) has been removed, the visual activity takes some time to decay (Sperling, 1960; Pratte, 2018; Teeuwen et al., 2021).

In contrast, recalling an image a second or so after it has disappeared engages a different type of memory – visual working memory – which relies on information stored in different circuits in the frontal lobe (Pasternak and Greenlee, 2005; D’Esposito and Postle, 2015). Although not as vivid, the stored image remains stable for much longer. This robustness comes at a cost: the storage capacity of visual working memory is more limited, so fewer items and less detail can be recalled from a remembered image. Together, these findings led to the idea that there are two distinct short-term memory mechanisms. Now, in eLife, Ivan Tomić and Paul Bays report strong evidence indicating that iconic memory and visual working memory are part of the same recall mechanism (Tomić and Bays, 2023).

Tomić and Bays – who are based at the University of Zagreb and the University of Cambridge – first constructed a detailed computational model to describe how sensory information is passed to a visual working memory circuit for storage and later recall (Figure 1). In this model, visual neurons respond to the presentation of an image containing a few items. This stimulus causes sensory activity to rise smoothly while the input lasts, and to decay once the stimulus ceases, consistent with previous experiments (Teeuwen et al., 2021). This sensory response then drives a population of visual working memory neurons that can sustain their activity in the absence of a stimulus, although this activity will eventually be corrupted due to noise (Wimmer et al., 2014; DePasquale et al., 2018). An important feature of the model is that each remembered item is allocated an equal fraction of the maximum possible working memory activity.

Timeline of events during stimulus presentation and storage.

A visual stimulus (grey box containing pattern) with N items is presented for a period of time (pale blue region). Sensory activity increases to a maximum value during this period, and then decays when the stimulus disappears. For each item, VWM activity also increases towards an effective saturation limit, which is the maximum possible value divided by the number of items presented: here N=2, so the effective saturation limit is half the maximum possible value. When the target item is cued (black arrow; top) at a later time (yellow region), the non-target item(s) are removed from memory (grey trace), and activity associated with the target item (green trace) increases towards the maximum possible value. The level of activity (and hence the accuracy of memory recall) will vary more and more over time due to noise. VWM: visual working memory.

Image credit: Adapted from Figure 2 in the paper by Tomić and Bays, 2023.
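As a rough illustration, the cascade in Figure 1 can be sketched as a pair of leaky integrators: a sensory trace that rises while the stimulus is on and decays afterwards, and a working-memory trace that the sensory signal charges toward a per-item saturation limit. All parameter values here (time constants, resource ceiling) are illustrative assumptions, not the actual model fitted by Tomić and Bays:

```python
import numpy as np

def simulate_trial(n_items, stim_dur, total_dur, dt=0.001,
                   tau_sense=0.15, tau_vwm=0.3, max_vwm=1.0):
    """Toy leaky-integrator sketch of the sensory-to-VWM cascade.

    Time constants and the resource ceiling are illustrative guesses,
    not values from Tomic and Bays (2023).
    """
    steps = int(round(total_dur / dt))
    sense = np.zeros(steps)        # population sensory activity
    vwm = np.zeros(steps)          # VWM activity for one remembered item
    limit = max_vwm / n_items      # equal share of the total VWM resource
    for t in range(1, steps):
        stim_on = (t * dt) < stim_dur
        # Sensory activity rises smoothly while the stimulus is on
        # and decays once it disappears.
        sense[t] = sense[t-1] + dt / tau_sense * (
            (1.0 if stim_on else 0.0) - sense[t-1])
        # The sensory signal charges the VWM trace toward its per-item
        # saturation limit; with no input the trace self-sustains.
        vwm[t] = vwm[t-1] + dt / tau_vwm * sense[t-1] * (limit - vwm[t-1])
    return sense, vwm
```

With these toy dynamics, the per-item trace grows with longer stimulus exposure and shrinks as more items split the resource, reproducing the qualitative behavior shown in the figure.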

The model constructed by Tomić and Bays makes specific, testable predictions. For example, it predicts that if an item is cued for later recall while the sensory signal is still present, the working memory activity associated with the non-targets will decay rapidly, freeing up resources and thus increasing the working memory activity associated with the cued item. This leads to more accurate recall of the item. In contrast, if an item is cued for later recall once the sensory signal has approached zero, this ‘boost’ does not happen, and the item is not recalled as accurately. In addition, the working memory activity should increase with longer exposure to the stimulus and should decrease as the number of remembered items increases.
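The cue-timing prediction can be illustrated with the same kind of toy dynamics: cueing the target raises its saturation limit from a shared fraction to the full resource, but that higher ceiling only helps while sensory drive remains to push the trace toward it. Again, every parameter value below is a hypothetical choice made to expose the effect, not a fitted quantity from the paper:

```python
def recall_activity(cue_time, n_items=2, stim_dur=0.2, recall_time=1.0,
                    dt=0.001, tau_sense=0.15, tau_vwm=0.3, max_vwm=1.0):
    """Target-item VWM activity at recall, given a cue at cue_time.

    Hypothetical parameter values, chosen only to illustrate the effect.
    """
    sense, vwm = 0.0, 0.0
    for t in range(1, int(round(recall_time / dt))):
        now = t * dt
        # Sensory trace: rises during the stimulus, decays after offset.
        sense += dt / tau_sense * ((1.0 if now < stim_dur else 0.0) - sense)
        # Before the cue the target shares the resource with the other
        # items; after the cue the non-targets are dropped and the full
        # resource ceiling becomes available to the target.
        limit = max_vwm if now >= cue_time else max_vwm / n_items
        vwm += dt / tau_vwm * sense * (limit - vwm)
    return vwm

early = recall_activity(cue_time=0.1)  # cue while sensory trace persists
late = recall_activity(cue_time=0.8)   # cue after the trace has decayed
```

In this sketch `early` comes out well above `late`: the raised ceiling only matters while the sensory signal can still drive the trace toward it, which is the ‘boost’ described above.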

These predictions were confirmed through experiments with humans. Participants were shown visual stimuli while several factors were varied, including the number of items to be remembered, the duration of the stimulus, the time at which the item to be recalled was identified, and the time of the actual recall. The results of these experiments are consistent with the notion that, during recall, visual information is always read out from the same population of neurons.

The findings of Tomić and Bays are satisfying for their simplicity; what seemed to require two separate mechanisms is explained by a single framework aligned with many prior studies. However, models always require simplifications and shortcuts. For instance, much evidence indicates that both frontal lobe circuits and sensory areas contribute to the self-sustained maintenance of activity that underlies the short-term memory of sensory events (Pasternak and Greenlee, 2005). Therefore, visual working memory is likely the result of continuous recurrent dynamics across areas (DePasquale et al., 2018; Stroud et al., 2024). Furthermore, there is still debate about the degree to which visual working memory implies equal sharing of resources, as opposed to some items receiving larger or smaller shares (Ma et al., 2014; Pratte, 2018). Nevertheless, the proposed model is certainly an important advance that future studies can build upon.

Article and author information

Author details

  1. Emilio Salinas

    Emilio Salinas is in the Department of Translational Neuroscience, Wake Forest University School of Medicine, Winston-Salem, United States

    For correspondence
    esalinas@wakehealth.edu
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-7411-5693
  2. Bashirul I Sheikh

    Bashirul I Sheikh is in the Neuroscience Graduate Program, Wake Forest University School of Medicine, Winston-Salem, United States

    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-3987-3891

Copyright

© 2024, Salinas and Sheikh

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

  1. Emilio Salinas
  2. Bashirul I Sheikh
(2024)
Cognitive Neuroscience: Memorable first impressions
eLife 13:e98274.
https://doi.org/10.7554/eLife.98274

Further reading

    1. Neuroscience
    Raven Star Wallace, Bronte Mckeown ... Jonathan Smallwood
    Research Article

    Movie-watching is a central aspect of our lives and an important paradigm for understanding the brain mechanisms behind cognition as it occurs in daily life. Contemporary views of ongoing thought argue that the ability to make sense of events in the ‘here and now’ depends on the neural processing of incoming sensory information by auditory and visual cortex, which are kept in check by systems in association cortex. However, we currently lack an understanding of how patterns of ongoing thoughts map onto the different brain systems when we watch a film, partly because methods of sampling experience disrupt the dynamics of brain activity and the experience of movie-watching. Our study established a novel method for mapping thought patterns onto the brain activity that occurs at different moments of a film, which does not disrupt the time course of brain activity or the movie-watching experience. We found that at moments when experience sampling highlighted engagement with multi-sensory features of the film, or highlighted thoughts with episodic features, regions of sensory cortex were more active and subsequent memory for events in the movie was better; periods of intrusive distraction, on the other hand, emerged when activity in regions of association cortex within the frontoparietal system was reduced. These results highlight the critical role sensory systems play in the multi-modal experience of movie-watching and provide evidence for the role of association cortex in reducing distraction when we watch films.

    1. Neuroscience
    Gyeong Hee Pyeon, Hyewon Cho ... Yong Sang Jo
    Research Article

    Recent studies suggest that calcitonin gene-related peptide (CGRP) neurons in the parabrachial nucleus (PBN) represent aversive information and signal a general alarm to the forebrain. If CGRP neurons serve as a true general alarm, their activation would modulate both passive nad active defensive behaviors depending on the magnitude and context of the threat. However, most prior research has focused on the role of CGRP neurons in passive freezing responses, with limited exploration of their involvement in active defensive behaviors. To address this, we examined the role of CGRP neurons in active defensive behavior using a predator-like robot programmed to chase mice. Our electrophysiological results revealed that CGRP neurons encode the intensity of aversive stimuli through variations in firing durations and amplitudes. Optogenetic activation of CGRP neuron during robot chasing elevated flight responses in both conditioning and retention tests, presumably by amyplifying the perception of the threat as more imminent and dangerous. In contrast, animals with inactivated CGRP neurons exhibited reduced flight responses, even when the robot was programmed to appear highly threatening during conditioning. These findings expand the understanding of CGRP neurons in the PBN as a critical alarm system, capable of dynamically regulating active defensive behaviors by amplifying threat perception, ensuring adaptive responses to varying levels of danger.