Surface color and predictability determine contextual modulation of V1 firing and gamma oscillations
Abstract
The integration of direct bottom-up inputs with contextual information is a core feature of neocortical circuits. In area V1, neurons may reduce their firing rates when their receptive field input can be predicted by spatial context. Gamma-synchronized (30-80 Hz) firing may provide a complementary signal to rates, reflecting stronger synchronization between neuronal populations receiving mutually predictable inputs. We show that large uniform surfaces, which have high spatial predictability, strongly suppressed firing yet induced prominent gamma-synchronization in macaque V1, particularly when they were colored. By contrast, chromatic mismatches between center and surround, which break predictability, strongly reduced gamma-synchronization while increasing firing rates. Differences between responses to different colors, including strong gamma-responses to red, arose from stimulus adaptation to a full-screen background, suggesting prominent differences in adaptation between M- and L-cone signaling pathways. Thus, synchrony signaled whether RF inputs were predicted from spatial context, while firing rates increased when stimuli were unpredicted from context.
Data availability
As described in the Methods section, several datasets were acquired in this study. The datasets have been uploaded to Dryad (https://doi.org/10.5061/dryad.4809qj4). These include individual sessions, with each session preprocessed (downsampled, see Methods) and epoched into trials, together with time, channel, and condition information.
- Data from: Surface color and predictability determine contextual modulation of macaque V1 firing and gamma oscillations. Dryad Digital Repository, doi:10.5061/dryad.4809qj4.
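For orientation only, the sketch below shows one way an epoched session of this kind might be loaded and inspected in Python. All file and field names (session_01.mat, data, time, channels, condition) are assumptions made for illustration; the actual on-disk structure is documented in the Dryad repository and the Methods section, not here.

```python
# Hypothetical sketch (not the authors' code) for loading one preprocessed,
# epoched session downloaded from the Dryad repository. File and field names
# below are assumptions; consult the dataset's documentation for the real layout.
import numpy as np
from scipy.io import loadmat  # assumes sessions are stored as MATLAB files

session = loadmat("session_01.mat", squeeze_me=True)  # hypothetical filename

# Assumed layout: an epoched array of shape (trials, channels, time samples),
# plus vectors for the time axis, channel labels, and per-trial condition codes.
data = np.asarray(session["data"])            # (n_trials, n_channels, n_samples), assumed key
time = np.asarray(session["time"])            # time in seconds relative to stimulus onset, assumed key
channels = session["channels"]                # channel identifiers, assumed key
condition = np.asarray(session["condition"])  # stimulus condition per trial, assumed key

# Example: average the signal across all trials of one condition.
cond_id = condition[0]
mean_over_trials = data[condition == cond_id].mean(axis=0)  # (n_channels, n_samples)
print(mean_over_trials.shape)
```

Treating each session as a trials x channels x samples array with accompanying time, channel, and condition vectors mirrors the description in the Data availability statement, but the exact keys and file format should be taken from the repository itself.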
Article and author information
Author details
Funding
Deutsche Forschungsgemeinschaft (SPP 1665)
- Pascal Fries
European Commission (FP7-604102-HBP)
- Pascal Fries
Deutsche Forschungsgemeinschaft (FOR 1847)
- Pascal Fries
Deutsche Forschungsgemeinschaft (FR2557/5-1-CORNET)
- Pascal Fries
Deutsche Forschungsgemeinschaft (FR2557/6-1-NeuroTMR)
- Pascal Fries
Deutsche Forschungsgemeinschaft (Reinhart Koselleck grant)
- Wolf Singer
National Institutes of Health (1U54MH091657-WU-Minn-Consortium-HCP)
- Pascal Fries
European Science Foundation (European Young Investigator Award)
- Pascal Fries
LOEWE (NeFF)
- Pascal Fries
European Commission (HEALTH-F2-2008-200728-BrainSynch)
- Pascal Fries
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: All procedures complied with the German and European regulations for the protection of animals and were approved by the regional authority (Regierungspräsidium Darmstadt, F-149-1000/1005). All surgeries were performed under anesthesia and were followed by analgesic treatment post-operatively.
Copyright
© 2019, Peter et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 2,753 views
- 453 downloads
- 81 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.