BrainPy, a flexible, integrative, efficient, and extensible framework for general-purpose brain dynamics programming

  1. Chaoming Wang
  2. Tianqiu Zhang
  3. Xiaoyu Chen
  4. Sichao He
  5. Shangyang Li
  6. Si Wu (corresponding author)
  1. Peking University, China
  2. Beijing Jiaotong University, China

Abstract

Elucidating the intricate neural mechanisms underlying brain functions requires integrative brain dynamics modeling. To facilitate this process, it is crucial to develop a general-purpose programming framework that allows users to freely define neural models across multiple scales, efficiently simulate, train, and analyze model dynamics, and conveniently incorporate new modeling approaches. In response to this need, we present BrainPy. BrainPy leverages the advanced just-in-time (JIT) compilation capabilities of JAX and XLA to provide a powerful infrastructure tailored for brain dynamics programming. It offers an integrated platform for building, simulating, training, and analyzing brain dynamics models. Models defined in BrainPy can be JIT compiled into binary instructions for various devices, including Central Processing Unit (CPU), Graphics Processing Unit (GPU), and Tensor Processing Unit (TPU), which ensures running performance comparable to native C or CUDA. Additionally, BrainPy features an extensible architecture that allows for easy expansion of new infrastructure, utilities, and machine-learning approaches. This flexibility enables researchers to incorporate cutting-edge techniques and adapt the framework to their specific needs.
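To make this description concrete, below is a minimal, hypothetical sketch of how a spiking neuron group might be defined and simulated with BrainPy's Python API. It is not taken from the paper; the class names (bp.neurons.LIF, bp.DSRunner, bp.visualize.raster_plot) follow the BrainPy 2.x documentation, and all parameter values are illustrative assumptions.

    import brainpy as bp

    # Minimal sketch (assumes BrainPy 2.x API names; parameter values are illustrative).
    group = bp.neurons.LIF(100, V_rest=-60., V_reset=-60., V_th=-50.,
                           tau=20., tau_ref=5.)        # 100 leaky integrate-and-fire neurons
    runner = bp.DSRunner(group,
                         monitors=['spike'],           # record spike trains
                         inputs=('input', 26.))        # constant external current
    runner.run(200.)                                   # simulate 200 ms; JIT-compiled via JAX/XLA
    bp.visualize.raster_plot(runner.mon.ts, runner.mon['spike'], show=True)

Because the model is compiled through JAX and XLA, the same script can run unchanged on a CPU, GPU, or TPU backend, with BrainPy dispatching the compiled instructions to the available device.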

Data availability

BrainPy is distributed via the PyPI package index (https://pypi.org/project/brainpy/) and is publicly released on GitHub (https://github.com/brainpy/BrainPy/) under the GNU General Public License v3.0. Its documentation is hosted on the free documentation hosting platform Read the Docs (https://brainpy.readthedocs.io/). Rich examples and illustrations of BrainPy are publicly available at https://brainpy-examples.readthedocs.io/, with their source code available at https://github.com/brainpy/examples/. All code to reproduce the results in the paper can be found in the following GitHub repository: https://github.com/brainpy/brainpy-paper-reproducibility/.
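As a quick check after installing the package from PyPI (for example via pip), the library can be imported and its version inspected; this is a generic Python sketch, not taken from the BrainPy documentation.

    # Assumes `pip install brainpy` has been run in the current environment.
    import brainpy as bp
    print(bp.__version__)   # prints the installed BrainPy release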


Article and author information

Author details

  1. Chaoming Wang

    School of Psychological and Cognitive Sciences, Peking University, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  2. Tianqiu Zhang

    School of Psychological and Cognitive Sciences, Peking University, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  3. Xiaoyu Chen

    School of Psychological and Cognitive Sciences, Peking University, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  4. Sichao He

    Beijing Jiaotong University, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  5. Shangyang Li

    School of Psychological and Cognitive Sciences, Peking University, Beijing, China
    Competing interests
    The authors declare that no competing interests exist.
  6. Si Wu

    School of Psychological and Cognitive Sciences, Peking University, Beijing, China
    For correspondence
    siwu@pku.edu.cn
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-9650-6935

Funding

Peking University (2021ZD0200204)

  • Si Wu

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Copyright

© 2023, Wang et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.



Cite this article

  1. Chaoming Wang
  2. Tianqiu Zhang
  3. Xiaoyu Chen
  4. Sichao He
  5. Shangyang Li
  6. Si Wu
(2023)
BrainPy, a flexible, integrative, efficient, and extensible framework for general-purpose brain dynamics programming
eLife 12:e86365.
https://doi.org/10.7554/eLife.86365

