Receptive field center-surround interactions mediate context-dependent spatial contrast encoding in the retina
Abstract
Antagonistic receptive field surrounds are a near-universal property of early sensory processing. A key assumption in many models for retinal ganglion cell encoding is that receptive field surrounds are added only to the fully formed center signal. But anatomical and functional observations indicate that surrounds are added before the summation of signals across receptive field subunits that creates the center. Here, we show that in the macaque monkey retina this receptive field architecture has an important consequence for spatial contrast encoding: the surround can control sensitivity to fine spatial structure by changing the way the center integrates visual information over space. The impact of the surround is particularly prominent when center and surround signals are correlated, as they are in natural stimuli. This effect of the surround differs substantially from classic center-surround models and raises the possibility that the surround plays unappreciated roles in shaping ganglion cell sensitivity to natural inputs.
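The architectural distinction at the heart of the abstract — whether the surround signal is combined with the center before or after nonlinear summation across receptive field subunits — can be illustrated with a toy computation. The sketch below is a minimal illustration, not the paper's actual model; the function name, the rectifying nonlinearity, and the surround weight are all assumptions chosen for clarity.

```python
import numpy as np

def subunit_response(stim_center, stim_surround, pre_summation=True, w_surround=0.5):
    """Toy comparison of two center-surround architectures.

    stim_center: per-subunit center inputs (1-D array)
    stim_surround: per-subunit surround inputs (1-D array)
    pre_summation: if True, surround is applied before the subunit
        nonlinearity; if False, it is subtracted from the fully formed center.
    """
    rectify = lambda x: np.maximum(x, 0.0)  # assumed subunit nonlinearity
    if pre_summation:
        # Surround combined with each subunit's input *before* rectification,
        # so it can change how the center integrates over space.
        return float(np.sum(rectify(stim_center - w_surround * stim_surround)))
    # Classic model: surround subtracted only after the nonlinearly
    # summed center signal is formed.
    return float(np.sum(rectify(stim_center)) - w_surround * np.sum(stim_surround))
```

For a fine grating over the center (alternating positive and negative subunit inputs) paired with a uniform surround input, the two architectures give different responses, even though they agree exactly when the surround input is zero — a simple instance of the surround reshaping spatial integration rather than merely subtracting from the center.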
Data availability
We have made all the data in the study freely available. Source data files have been provided for Figures 2, 3, 4 and 7, and example code demonstrating how to extract and plot the data is provided as Source code file 1.
Article and author information
Author details
Funding
National Eye Institute (F31-EY026288)
- Maxwell H Turner
National Eye Institute (EY11850)
- Fred Rieke
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: Tissue was obtained via the tissue distribution program at the Washington National Primate Research Center. All animal procedures were performed in accordance with IACUC protocols at the University of Washington (IACUC protocol number 4277-01).
Copyright
© 2018, Turner et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 8,040 views
- 594 downloads
- 58 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.