Behavioral-state modulation of inhibition is context-dependent and cell type specific in mouse visual cortex
Abstract
Cortical responses to sensory stimuli are modulated by behavioral state. In the primary visual cortex (V1), visual responses of pyramidal neurons increase during locomotion. This response gain was suggested to be mediated through inhibitory neurons, resulting in the disinhibition of pyramidal neurons. Using in vivo two-photon calcium imaging in layers 2/3 and 4 in mouse V1, we reveal that locomotion increases the activity of vasoactive intestinal peptide (VIP)-, somatostatin (SST)- and parvalbumin (PV)-positive interneurons during visual stimulation, challenging the disinhibition model. In darkness, while most VIP and PV neurons remained locomotion responsive, SST and excitatory neurons were largely non-responsive. Context-dependent locomotion responses were found in each cell type, with the highest proportion among SST neurons. These findings establish that modulation of neuronal activity by locomotion is context-dependent and contest the generality of a disinhibitory circuit for gain control of sensory responses by behavioral state.
Article and author information
Funding
Wellcome (102857/Z/13/Z)
- Nathalie L Rochefort
EuroSpin Erasmus Mundus Program
- Sander W Keemink
Royal Society (102857/Z/13/Z)
- Nathalie L Rochefort
European Commission (Marie Curie Actions (FP7), MC-CIG 631770)
- Nathalie L Rochefort
Patrick Wild Centre
- Nathalie L Rochefort
The Shirley Foundation
- Nathalie L Rochefort
RS MacDonald Charitable Trust (Seedcorn Grant 21)
- Nathalie L Rochefort
University of Edinburgh (Graduate School of Life Sciences)
- Evelyn Dylda
European Commission (Marie Curie Actions (FP7), IEF 624461)
- Janelle MP Pakan
EPSRC Doctoral Training Centre in Neuroinformatics (EP/F500385/1 and BB/F529254/1)
- Sander W Keemink
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: All procedures were approved by the University of Edinburgh animal welfare committee, and were performed under a UK Home Office Project License (PPL No. 60/4570).
Copyright
© 2016, Pakan et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 8,943 views
- 1,830 downloads
- 236 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.