Computational Neuroscience: Building a mathematical model of the brain
The amount of data that can be gathered about the human brain has been growing exponentially in recent years, but it could be argued that relatively little progress has been made in actually understanding how the brain works. While there may be sociological and philosophical reasons for this lack of progress (see, for example, Thompson, 2021), a main reason is the low level of interaction between the experimental and theoretical/modelling communities in neuroscience (Marder, 2015). Bridging this divide will be difficult because it requires researchers on both sides to leave their comfort zones and learn more about each other’s work, including the constraints that both sides work under. If this does not happen, there is a risk that the results of beautiful experiments, or the outputs of thoughtful models, will not be fully appreciated by everyone working in that particular field of neuroscience.
Where does one begin when trying to build a mathematical model of a biological system? In the case of the brain, besides deciding which region of the brain one wants to model and being clear about the goals of the study (Shou et al., 2015), choices need to be made about the level of abstraction. Understanding how the brain works, in both health and disease, requires studying neural circuits at the level of the cell, particularly as neurological diseases are cell-specific (see, for example, Gallo et al., 2020). Furthermore, many studies have made it abundantly clear that circuit function cannot be understood without a greater understanding of the individual cell types making up the circuit (see, for example, Daur et al., 2016 regarding the stomatogastric nervous system). Indeed, when considering a theoretical basis for biology, it is often argued that the correct level of abstraction is the cell (Brenner, 2010).
Hippocampome.org is a database that contains a vast amount of information about the different types of neuronal cells found in the rodent hippocampus – a region of the brain that has major roles in learning and memory. The first version of the database contained information on 122 types of neuronal cells, based on the shapes of their axons and dendrites, their main neurotransmitters, and various molecular and biophysical properties (Wheeler et al., 2015). Subsequent versions of the database added information on a range of topics, including the physiology of the synapses that connect neurons and the electrical behaviour of various neurons.
Now, in eLife, Giorgio Ascoli and colleagues at George Mason University – including Diek Wheeler as first author – present Hippocampome.org v2.0, which enables users to automatically build models that can be used to simulate the electrical behaviour of networks of neurons (Wheeler et al., 2024). Moreover, Hippocampome.org v2.0 includes information on over 50 new neuron types. With the click of a button, a user can choose the region (or regions) of the hippocampus they are interested in and the cell types they would like to include in their model, and Hippocampome.org v2.0 will build a model in which the properties of the individual cells and their connections are based on experimental data from multiple research papers. Furthermore, the data come with important metadata (such as the age of the animals), so users can evaluate the values of the various parameters that are included in any model. Indeed, the richness of the data is such that some researchers have been able to make discoveries by applying data-mining techniques to Hippocampome.org (Sanchez-Aguilera et al., 2021).
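To give a concrete sense of what such automated model building involves, the sketch below wires a toy network from per-cell-type population sizes and connection probabilities – the kind of curated, type-level quantities a builder draws on. The cell-type names, counts and probabilities here are invented for illustration and do not reflect Hippocampome.org’s actual interface or data; a real builder would also assign synaptic weights, delays and per-cell spiking parameters from the experimental literature.

```python
import random

# Hypothetical, simplified illustration of type-level network assembly:
# each cell type has a population size, and each (pre, post) pair of types
# has a directed connection probability.  All numbers are invented.
cell_counts = {"PyramidalCell": 80, "BasketCell": 20}
connection_prob = {
    ("PyramidalCell", "PyramidalCell"): 0.05,
    ("PyramidalCell", "BasketCell"): 0.20,
    ("BasketCell", "PyramidalCell"): 0.30,
    ("BasketCell", "BasketCell"): 0.10,
}

# enumerate every cell as a (type, index-within-type) pair
cells = [(ctype, i) for ctype, n in cell_counts.items() for i in range(n)]

# draw each directed connection independently with the type-pair probability
random.seed(1)
edges = [
    (pre, post)
    for pre in cells
    for post in cells
    if pre != post and random.random() < connection_prob[(pre[0], post[0])]
]

print(f"{len(cells)} cells, {len(edges)} synapses")
```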
Deciding how much detail to include in a model is a non-trivial consideration, but it naturally depends on the question being asked and the availability of experimental data. Choosing to represent each neuron by a single compartment, rather than modelling its detailed morphology and spatially distributed properties, and using a relatively simple mathematical model – the Izhikevich model (Izhikevich, 2003) – to describe the spiking process is both sensible and necessary. Izhikevich models can capture many, if not all, of the firing properties of biological cells, and although more detailed neuron models exist – such as conductance-based models that include specific ion-channel types – they would make an already complex ‘automated network model building’ challenge even harder.
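For readers unfamiliar with it, the Izhikevich model reduces a neuron to two coupled differential equations plus a reset rule: dv/dt = 0.04v² + 5v + 140 − u + I and du/dt = a(bv − u), with v reset to c and u incremented by d whenever v reaches 30 mV. The minimal sketch below simulates a single such neuron using the regular-spiking parameter values given in Izhikevich (2003); the constant input current and integration step are arbitrary choices for illustration, and this is not the implementation used by Hippocampome.org v2.0.

```python
# Izhikevich (2003) model:
#   dv/dt = 0.04 v^2 + 5 v + 140 - u + I
#   du/dt = a (b v - u)
#   if v >= 30 mV: v <- c, u <- u + d
# Parameters a, b, c, d are the regular-spiking values from the original paper.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
I = 10.0                      # injected current (arbitrary, for illustration)
dt = 0.25                     # integration step (ms)
T = 1000.0                    # total simulated time (ms)

v, u = c, b * c               # initial membrane potential and recovery variable
spike_times = []

for step in range(int(T / dt)):
    t = step * dt
    # forward-Euler update of the two state variables
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:             # spike: record time, then reset
        spike_times.append(t)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in {T:.0f} ms "
      f"(mean rate {1000 * len(spike_times) / T:.1f} Hz)")
```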
With Hippocampome.org v2.0 in hand, it is now possible to start bridging the gap between theory and experiment without having to make a heroic effort to parse the experimental literature. That is, theoretical ‘bones’ can be given experimental ‘meat’, as Wheeler et al. demonstrate in simulations of grid cells. Essentially, this resource can be used to combine hypothesis-driven and data-driven modelling (Eriksson et al., 2022).
To truly understand how the brain works, and to help the many individuals suffering from brain disorders, there needs to be stronger collaboration between experimentalists and modellers. This new resource developed by Wheeler et al. provides a practical path towards this outcome.
References
- Brenner (2010) Sequences and consequences. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 365:207–212. https://doi.org/10.1098/rstb.2009.0221
- Daur et al. (2016) The complexity of small circuits: the stomatogastric nervous system. Current Opinion in Neurobiology 41:1–7. https://doi.org/10.1016/j.conb.2016.07.005
- Izhikevich (2003) Simple model of spiking neurons. IEEE Transactions on Neural Networks 14:1569–1572. https://doi.org/10.1109/TNN.2003.820440
- Thompson (2021) Forms of explanation and understanding for neuroscience and artificial intelligence. Journal of Neurophysiology 126:1860–1874. https://doi.org/10.1152/jn.00195.2021