Eye movements reveal spatiotemporal dynamics of visually-informed planning in navigation

  1. Seren Zhu (corresponding author)
  2. Kaushik Janakiraman Lakshminarasimhan
  3. Nastaran Arfaei
  4. Dora E Angelaki
  1. New York University, United States
  2. Columbia University, United States

Abstract

Goal-oriented navigation is widely understood to depend upon internal maps. Although this may be the case in many settings, humans tend to rely on vision in complex, unfamiliar environments. To study the nature of gaze during visually-guided navigation, we tasked humans with navigating to transiently visible goals in virtual mazes of varying difficulty, observing that they took near-optimal trajectories in all arenas. By analyzing participants’ eye movements, we gained insights into how they performed visually-informed planning. The spatial distribution of gaze revealed that environmental complexity mediated a striking tradeoff in the extent to which attention was directed towards two complementary aspects of the world model: the reward location and task-relevant transitions. The temporal evolution of gaze revealed rapid, sequential prospection of the future path, evocative of neural replay. These findings suggest that the spatiotemporal characteristics of gaze during navigation are significantly shaped by the unique cognitive computations underlying real-world, sequential decision making.

Data availability

Links to data and code are included in the manuscript.

Article and author information

Author details

  1. Seren Zhu

    Center for Neural Science, New York University, New York, United States
    For correspondence: lt1686@nyu.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-0555-9690
  2. Kaushik Janakiraman Lakshminarasimhan

    Center for Theoretical Neuroscience, Columbia University, New York, United States
    Competing interests
    The authors declare that no competing interests exist.
  3. Nastaran Arfaei

    Department of Psychology, New York University, New York, United States
    Competing interests
    The authors declare that no competing interests exist.
  4. Dora E Angelaki

    Center for Neural Science, New York University, New York, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-9650-8962

Funding

National Institutes of Health (U19-NS118246)

  • Seren Zhu
  • Nastaran Arfaei
  • Dora E Angelaki

National Institutes of Health (R01-EY022538)

  • Seren Zhu
  • Nastaran Arfaei
  • Dora E Angelaki

National Science Foundation (DBI-1707398)

  • Kaushik Janakiraman Lakshminarasimhan

Gatsby Charitable Foundation

  • Kaushik Janakiraman Lakshminarasimhan

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: All experimental procedures were approved by the Institutional Review Board at New York University and all participants signed an informed consent form (IRB-FY2019-2599).

Copyright

© 2022, Zhu et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 3,245 views
  • 571 downloads
  • 20 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

Seren Zhu, Kaushik Janakiraman Lakshminarasimhan, Nastaran Arfaei, Dora E Angelaki (2022) Eye movements reveal spatiotemporal dynamics of visually-informed planning in navigation. eLife 11:e73097. https://doi.org/10.7554/eLife.73097

Further reading

    1. Neuroscience
    Gergely F Turi, Sasa Teng ... Yueqing Peng
    Research Article

    Synchronous neuronal activity is organized into neuronal oscillations with various frequency and time domains across different brain areas and brain states. For example, hippocampal theta, gamma, and sharp wave oscillations are critical for memory formation and communication between hippocampal subareas and the cortex. In this study, we investigated the neuronal activity of the dentate gyrus (DG) with optical imaging tools during sleep-wake cycles in mice. We found that the activity of major glutamatergic cell populations in the DG is organized into infraslow oscillations (0.01–0.03 Hz) during NREM sleep. Although the DG is considered a sparsely active network during wakefulness, we found that 50% of granule cells and about 25% of mossy cells exhibit increased activity during NREM sleep, compared to that during wakefulness. Further experiments revealed that the infraslow oscillation in the DG was correlated with rhythmic serotonin release during sleep, which oscillates at the same frequency but in an opposite phase. Genetic manipulation of 5-HT receptors revealed that this neuromodulatory regulation is mediated by Htr1a receptors and the knockdown of these receptors leads to memory impairment. Together, our results provide novel mechanistic insights into how the 5-HT system can influence hippocampal activity patterns during sleep.

    2. Neuroscience
    Sven Ohl, Martin Rolfs
    Research Article

    Detecting causal relations structures our perception of events in the world. Here, we determined for visual interactions whether generalized (i.e. feature-invariant) or specialized (i.e. feature-selective) visual routines underlie the perception of causality. To this end, we applied a visual adaptation protocol to assess the adaptability of specific features in classical launching events of simple geometric shapes. We asked observers to report whether they observed a launch or a pass in ambiguous test events (i.e. the overlap between two discs varied from trial to trial). After prolonged exposure to causal launch events (the adaptor) defined by a particular set of features (i.e. a particular motion direction, motion speed, or feature conjunction), observers were less likely to see causal launches in subsequent ambiguous test events than before adaptation. Crucially, adaptation was contingent on the causal impression in launches as demonstrated by a lack of adaptation in non-causal control events. We assessed whether this negative aftereffect transfers to test events with a new set of feature values that were not presented during adaptation. Processing in specialized (as opposed to generalized) visual routines predicts that the transfer of visual adaptation depends on the feature similarity of the adaptor and the test event. We show that the negative aftereffects do not transfer to unadapted launch directions but do transfer to launch events of different speeds. Finally, we used colored discs to assign distinct feature-based identities to the launching and the launched stimulus. We found that the adaptation transferred across colors if the test event had the same motion direction as the adaptor. In summary, visual adaptation allowed us to carve out a visual feature space underlying the perception of causality and revealed specialized visual routines that are tuned to a launch’s motion direction.