Abstract
Communication in brain networks is the foundation of cognitive function and behavior. A multitude of evolutionary pressures, including the minimization of metabolic costs while maximizing communication efficiency, contribute to shaping the structure and dynamics of these networks. However, how communication efficiency is characterized depends on the assumed model of communication dynamics. Traditional models include shortest path signaling, random walker navigation, broadcasting, and diffusive processes. Yet, a general and model-agnostic framework for characterizing optimal neural communication remains to be established.
Our study addresses this challenge by defining communication efficiency through game theory, combining structural data from human cortical networks with computational models of brain dynamics. We quantified the exact influence exerted by each brain node over every other node using an exhaustive multi-site virtual lesioning scheme, creating optimal influence maps for various models of brain dynamics. These maps show how communication patterns unfold in a given brain network if regions maximize their influence over one another. By comparing these influence maps with a large variety of brain communication models, we found that optimal communication most closely resembles a broadcasting model in which regions leverage multiple parallel channels for information dissemination. Moreover, we show that the most influential regions within the cortex coincide with its rich-club. These regions exploit their topological vantage point by broadcasting across numerous pathways, thereby significantly enhancing their effective reach even when the anatomical connections are weak.
Our work provides a rigorous and versatile framework for characterizing optimal communication across brain networks and reveals the most influential brain regions and the topological features underlying their optimal communication.
Introduction
In the realm of network science, the human brain represents a prime example of a complex system. Viewing the brain as a network has uncovered key topological characteristics, including the presence of densely interconnected modules, central hubs, and hierarchical structures in information processing [1,2]. This perspective, obtained in particular from analyses of large-scale structural brain networks — the connectome — revealed that a range of network characteristics arises from the interplay between two conflicting evolutionary imperatives: minimizing wiring costs on one hand and maximizing signaling efficiency on the other [3–5].
The brain is confined within a three-dimensional space, making the formation and maintenance of long-range connections metabolically expensive. Conversely, a network predominantly comprising short-range connections, although economical in wiring costs, necessitates multiple intermediate nodes for information relay to distant nodes, diminishing effective communication capacity [6,7]. It has been shown that this dichotomy leads to a compromise, where predominantly short-range connections among neighboring regions coexist with a selective set of long-range connections that function as communication shortcuts [8,9].
In this view, signaling efficiency between nodes is graded based on the number of steps from a source to a target, presupposing information transmission exclusively along one path, the shortest path [10]. However, it has been argued that such a conceptualization of signaling unrealistically requires brain regions to possess global knowledge of the network’s connectivity [11]. To address this issue, alternative communication models (CMs) have been developed, accommodating other conceptualizations, such as cascading dynamics and diffusive processes, where information permeates through the network via parallel pathways. Consequently, alternative conceptualizations of how distant brain regions interact yield diverse interpretations of communication efficiency [12]. For instance, as mentioned above, the shortest path efficiency (SPE), derived from the inverse of the shortest path length, reflects efficiency under the assumption that information flows along the shortest path only. In contrast, diffusion efficiency (DE), defined by the inverse of the average path length of an unbiased random walker traversing from the source to the target, assumes information to propagate along parallel pathways with no specific preference for any of them. Quantifying how much a source can effectively influence its target depends on which perspective is chosen. Given SPE, the source has optimal influence over its unconnected target if the number of intermediate nodes along the shortest path between them is minimal. Given DE, the source has optimal influence over its unconnected target if it is connected to the target through the largest number of pathways, regardless of their length. Thus, optimality of signaling depends on assumptions about the regime in which information propagates through the network.
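For concreteness, the following is a minimal sketch of how these two ends of the spectrum can be computed from a weighted, non-negative connectivity matrix `W` (assumed to be a dense numpy array): SPE as the inverse of the weighted shortest-path length, with edge lengths taken as inverse weights (one common convention), and DE as the inverse of the mean first-passage time of an unbiased random walker. The exact transforms used in the paper may differ.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def shortest_path_efficiency(W):
    """SPE: inverse of the weighted shortest-path length (lengths = inverse weights)."""
    with np.errstate(divide="ignore"):
        lengths = np.where(W > 0, 1.0 / W, np.inf)   # non-edges become infinitely long
    D = shortest_path(lengths, method="D", directed=False)
    with np.errstate(divide="ignore"):
        spe = 1.0 / D
    np.fill_diagonal(spe, 0.0)
    return spe

def diffusion_efficiency(W):
    """DE: inverse of the mean first-passage time (MFPT) of an unbiased random walker."""
    P = W / W.sum(axis=1, keepdims=True)             # walker's transition probabilities
    pi = W.sum(axis=1) / W.sum()                     # stationary distribution (undirected network)
    n = W.shape[0]
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))  # fundamental matrix
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]      # MFPT from node i to node j
    with np.errstate(divide="ignore"):
        return np.where(M > 0, 1.0 / M, 0.0)
```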
Moreover, despite this plurality of signaling conceptualizations, these models of propagation simplify communication patterns to linear, discrete interactions, thereby abstracting the complexities inherent in biological systems, such as non-linear interactions, oscillatory behaviors, and conductance delays [12,13]. A plethora of biophysical models have been developed to address such details by modeling the behavior of regions as the average activity of biophysically grounded neuronal populations [14,15]. This mean-field formalization of brain activity comes with its own definitions of communication, such as communication through coherence, positing neuronal synchronization as a pivotal mechanism [16–18].
Therefore, while it is acknowledged that the brain’s structural evolution favors optimal inter-regional communication, a consensus on the precise model of communication within brain networks, and consequently on its optimality, remains missing due to conflicting assumptions about signal propagation. Addressing this issue is of particular interest for several reasons. For example, the identification of structural features that improve information flow in brain networks elucidates their role in disease and may allow targeting them for specific therapeutic interventions. Such insights may also guide the design of neuromorphic systems that achieve a similar level of resource management as the brain [19–21].
To provide a mathematically rigorous definition of optimal communication, we resort to game theory. This field analyzes interactions among agents to offer a normative account, that is, to prescribe how they should behave in a defined scenario, given their goals, the rules of the game, and the consequences of others’ decisions [22]. Notably, game theoretical solutions are well established in fields where players lack ‘agency’. In behavioral ecology, for example, game theoretical models are used to define normative descriptions of how animals should behave in scenarios that involve optimizing a goal, such as foraging or reproducing [23,24]. Thus, this theoretical approach is well suited for investigating a pivotal question in neuroscience: How should brain regions interact, considering the limitations imposed by the structural connectome and the behavior of other regions, with the aim of optimizing their communication efficiency? This question is framed by two principal constraints and one overarching objective. The first constraint is imposed by the brain’s network structure, delineating interactions between nodes. The second constraint is the behavior of other nodes, dictating their response to incoming signals. The objective, in this context, is to maximize signaling efficiency – conceptualized as the extent of a node’s influence over others. Such a formulation gives rise to a concrete definition of optimality as an equilibrium point where no region can increase its influence on others any further. Notably, this game-theoretical approach offers a model-agnostic and data-driven toolkit to define communication within brain networks. By ‘model agnostic’ we mean that the game can be constructed using any model of information propagation and local dynamics incorporated into an arbitrary connectome structure. It then defines optimality as the amount of influence each node can exert over other nodes, given the defined game. Note that optimality in this framework depends on the defined game, meaning that optimal communication can differ given different network architectures and models of communication.
In the present work (summarized in Figure 1), we used the human connectome, large-scale models of dynamics, and a game-theoretical perspective of signaling to address three questions: First, what does the communication landscape look like in the state of optimal signal propagation (OSP), where nodes maximize their influence on each other given the structural and dynamical constraints? Second, which model of information propagation aligns best with the data-driven formulation of signaling? And third, which brain regions are the most influential ones and why?
To answer the first question, we used multi-perturbation Shapley value analysis (MSA) and approximately 7 billion combinations of virtual lesions to precisely quantify the amount of influence exerted by each brain region (node) over every other node. We call this communication landscape the ‘Optimal Influence’ (OI). To answer the second question, we systematically compared putative models of neural communication against the derived OI and found Communicability (CO) [25] to capture a substantial portion of the variation observed in OI. CO is a non-conservative cascade model that posits regions to broadcast their signals across multiple parallel pathways. However, our results indicate that CO tends to underestimate the actual influence exerted via longer pathways. Thus, we proposed complementary analytical models that better replicate the OI. Further investigations using a nonlinear model of dynamics and a neural mass model (NMM) allowed us to determine which of the two game constraints more strongly shapes neural signaling: the global network architecture or the local dynamics. We found that all models of local dynamics yielded effectively the same landscape, suggesting that network topology is a more important constraint than specific models of local dynamics. Finally, to address the third question, we found that areas located on the medial surface of the brain, including the precuneus, anterior cingulate cortex, and superior prefrontal cortex, exerted the strongest influence across the brain network. Notably, these regions are components of the cortical rich-club organization, which is hypothesized to act as the backbone of large-scale brain communication. Together, our results suggest that signaling in the human brain is best captured by broadcasting-like dynamics, and that, to optimize their influence, hub regions propagate information over multiple pathways.
Results
Characterizing the State of Optimal Signal Propagation
To elucidate the concept of OSP, we start with an intuitive example. Consider a scenario where an orchestra attempts to distribute funds raised from a performance. One proposal would be to divide the funds equally among all members, a strategy that leads to the ‘egalitarian value’ by which every musician is paid the same amount [26]. However, considering discrepancies in the effort of individual players — ranging from ensemble players to soloists and a conductor — makes this strategy unfair for those who contributed more. Assuming that everyone is set to maximize their share, the egalitarian proposal is rejected, and counterproposals follow until the optimally fair solution is found, one in which payoffs are proportional to the contribution of every player. One can see this iterative process as traversing a convex solution space with a fixed point, representing the optimal solution. Any other solution is either incorrect, e.g., it results in shares that add up to a value different from the raised funds, or suboptimal, meaning that a better solution exists which, given enough time, will eventually be selected. Note that, by definition, there cannot be two equally optimal solutions into which the payoff is divided, while it is possible for multiple players to receive the same share, as with the egalitarian strategy.
This complexity mirrors the challenge we address in our study: accurately decomposing the activity profile of each target node by identifying the precise contribution of each source node. This problem and the space of all possible solutions for a network of three nodes are depicted in Figure 2A and B. For every target node in a network of N nodes, the space of all possible solutions encompasses N - 1 dimensions, with the set of potential solutions having an arbitrarily complex structure. Consequently, finding the one optimal solution in this space requires a systematic and extensive exploration of different decomposition solutions. To address this issue, Shapley provided an axiomatic framework that finds the optimal solution (called the Shapley value, see Discussion about its distinction from SHAP values) using information from all possible ways the players can form coalitions [27]. This algorithm serves as an external authority that, instead of navigating the solution space using proposals and counterproposals, computes the contribution of each player by systematically removing them from the game and tracking the game outcome.
To do so, we used the approach of multi-perturbation Shapley value analysis (MSA) [28–30] that applies exhaustive multi-site virtual lesioning across all combinations of source nodes while tracking the resultant changes in the target node, as illustrated in Figure 2C. The approach is analogous to repeatedly performing the same musical piece while each time a subset of players is excluded, thereby discerning each individual’s contribution to the performed track (see, e.g., [31]). For instance, the contribution of a cello player to the song is easily distinguished by taking the difference between two pieces, once with the cello player included and once without. By evaluating all possible combinations of players, higher-order interactions are then accounted for (e.g., contributions of the entire string ensemble), and thus the exact contribution of each player across all possible configurations is inferred (see section Game-theoretical Framework in Materials and Methods for more details). In essence, our framework reveals the unique itemized description of each player’s contribution to the overall outcome. This detailed analysis allows for a precise division of payoffs, ensuring that players have no incentive to deviate from their allocated share [32,33]. The resulting game-theoretical equilibrium state, here termed the optimal signal propagation state, represents the only fair allocation of nodal influence in which no better solution can be found. Note that the ‘equilibrium’ here refers to the unique point in the solution space of the division problem and not the neural space (see Discussion).
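As an illustration of the underlying computation, the following minimal, self-contained sketch computes exact Shapley values by exhaustively enumerating every coalition of players (here, source nodes); `value` is a user-supplied characteristic function that maps a retained coalition to the game outcome. The actual MSA implementation and its lesioning and sampling schemes may differ; this only illustrates the brute-force principle.

```python
import itertools
import math

def exact_shapley(players, value):
    """Exact Shapley values by enumerating every coalition (O(2**n) evaluations of `value`).

    `value` maps a tuple of retained players (a coalition) to the game outcome;
    all players not in the tuple are treated as lesioned."""
    n = len(players)
    shapley = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coalition in itertools.combinations(others, k):
                weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                marginal = value(coalition + (p,)) - value(coalition)
                shapley[p] += weight * marginal
    return shapley

# Toy game: the outcome of a coalition is the square of its size; identical players
# receive identical Shapley values, and the values sum to v(all players) - v(empty).
phi = exact_shapley(("A", "B", "C"), value=lambda coalition: len(coalition) ** 2)
```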
In our work, the ‘game’ was conceptualized as a large-scale model of the brain dynamics approximating the empirical functional connectivity (FC) by associating a differential equation to each node of the given structural connectivity (SC). Specifically, utilizing the consensus SC and averaged FC data from 70 healthy young individuals [34,35], we calibrated a linearized Wilson-Cowan model, as detailed in the ‘Large-scale Computational Models of the Brain Dynamics’ section under Materials and Methods. Then, MSA was used to perform systematic in-silico lesioning of combinations of source nodes while tracking the altered activity of a target node. This iterative process over all nodes elucidates the comprehensive influence landscape of every node on every other node within the network (illustrated in Figure 2). This landscape depicts the extent of influence one node exerts over another under the state of OSP.
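As a toy illustration of how such a ‘game’ can be wired up, the sketch below defines a characteristic function for a simplified linear model: lesioning a set of sources amounts to zeroing their rows and columns in the coupling matrix, and the readout is the target’s stationary variance obtained from the discrete Lyapunov equation. The network, coupling values, and readout are illustrative assumptions, not the calibrated Wilson-Cowan model used in the paper; the `exact_shapley` helper from the previous sketch then yields one row of the influence landscape.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def lesioned_target_variance(W, g, target, retained_sources, all_sources):
    """Toy characteristic function: stationary variance of `target` in the linear model
    x[t+1] = g*W x[t] + noise, with every source NOT in `retained_sources` lesioned
    (its rows and columns zeroed)."""
    A = g * W.copy()
    lesioned = [s for s in all_sources if s not in retained_sources]
    A[lesioned, :] = 0.0
    A[:, lesioned] = 0.0
    Sigma = solve_discrete_lyapunov(A, np.eye(A.shape[0]))  # Sigma = A Sigma A^T + I
    return Sigma[target, target]

# Illustrative 8-node symmetric, non-negative network, kept stable under g = 0.9
rng = np.random.default_rng(0)
W = rng.random((8, 8))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
W /= np.abs(np.linalg.eigvals(W)).max() * 1.5

target = 0
sources = tuple(i for i in range(W.shape[0]) if i != target)
v = lambda coalition: lesioned_target_variance(W, 0.9, target, coalition, sources)
influence_on_target = exact_shapley(sources, v)  # one row of the OI landscape
```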
Figure 2 reveals that brain regions exert influence not only on their directly connected neighbors, but also on distant and unconnected regions. This observation suggests that optimal communication within the brain requires the propagation of signals beyond immediate connections, traversing through intermediate regions to form multi-step processing pathways. This notion is supported by evidence of significant functional coupling between spatially distant, unconnected regions [13]. We then reorganized the OI matrix according to established large-scale functional brain modules, i.e., the resting-state networks, as depicted in Figure 3. We calculated two key metrics: firstly, the average intra- and inter-module influence, which quantifies the normalized aggregate influence that regions belonging to a module exert both internally and on other modules. Secondly, we assessed the heterogeneity of these influences to determine the diversity of communication patterns within and between modules. A low heterogeneity indicates a narrow distribution of normalized total influence among nodes, measured as the standard deviation, whereas high heterogeneity suggests a wide variation in interaction strengths. We then asked whether any modules exhibited deviations from the expected levels of influence and heterogeneity. We derived the expected level via the egalitarian strategy where influence is uniformly distributed across all modules, corresponding to an equal division of 100% of the flow among the nine modules. Our analysis revealed a generally balanced communication landscape, with the largest deviations being approximately 3%. However, the default mode network stood out, demonstrating both substantial average intra- and inter-module influence and lower heterogeneity (Figure 3). This finding implies that nodes in the default mode network have a large and relatively similar amount of influence over each other and the rest of the brain. In contrast, the auditory network is characterized by a more varied and modest level of influence, predominantly within its own module, but extending to others as well. Together, the results presented in Figure 3 suggest that while there is a relatively homogeneous level of influence within and between different cognitive modules, the default mode network distinguishes itself as a key orchestrator of brain-wide communication dynamics.
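The module-level summaries can be expressed compactly. The sketch below assumes an OI matrix (source by target) and a vector of module labels, and uses one plausible normalization in which influence is expressed as a fraction of the network total, so the egalitarian expectation for nine modules is 1/9 of the flow per module; the paper's exact normalization may differ.

```python
import numpy as np

def module_influence_profile(OI, labels):
    """Module-level summary of an optimal-influence (OI) matrix.

    OI[i, j] is the influence of source i on target j; `labels` assigns each node to
    a module.  Returns, per source module, its share of total influence and the
    spread (standard deviation) of its nodes' normalized outgoing influence."""
    OI_norm = OI / OI.sum()                        # influence as a fraction of the total
    labels = np.asarray(labels)
    share, heterogeneity = {}, {}
    for m in sorted(set(labels)):
        rows = OI_norm[labels == m]                # outgoing influence of nodes in module m
        share[m] = rows.sum()                      # aggregate intra- plus inter-module influence
        heterogeneity[m] = rows.sum(axis=1).std()  # spread of nodal influence within the module
    return share, heterogeneity
```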
Collectively, our findings in this section indicate that manipulating a single region can impact the global dynamics of the entire brain (also as shown by [36–38]), extending beyond anatomically connected regions. As a result, a critical question arises: Which poly-synaptic signaling conceptualization most accurately encapsulates these observed indirect influences?
Optimal Signaling via Broadcasting Over Parallel Pathways
To determine which model of signal propagation most accurately captures the indirect influence exerted by brain regions on one another, we explored a spectrum of communication models, spanning from routing strategies to diffusive processes [12,39]. Routing strategies postulate that information travels predominantly along, or with respect to, the shortest path. As introduced before, SPE is straightforward in its approach, focusing on the shortest path alone. Navigation Efficiency (NE), on the other hand, is a geometrically greedy routing model, aiming to minimize the physical distance to a target node at each step. Search Information (SI) quantifies the amount of information required for a random walker to find and navigate along the shortest path. Therefore, the more information is required, the more difficult communication between two nodes is assumed to be. Conversely, at the opposite end of the spectrum lie diffusion processes that envision information cascading throughout the network. These cascades can involve longer, even recurrent routes to the same nodes. The efficiency of signaling under these processes is then quantified by CO and DE. CO adopts a broadcasting strategy, whereby information spreads from the source and reaches the target through multiple pathways, while being subjected to exponential decay with each step. This decay inherently reduces the impact of longer pathways. DE, in contrast, does not incorporate attenuation, focusing instead on counting the steps a random walker takes from source to target. For detailed descriptions of these models, refer to the ‘Communication Models’ section under Materials and Methods.
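Of these models, CO is particularly compact to compute: it sums walks of all lengths between two nodes, attenuating walks of length k by a factor of 1/k!, which reduces to a matrix exponential. The sketch below uses the common strength-based normalization for weighted networks; the exact normalization used in the paper may differ.

```python
import numpy as np
from scipy.linalg import expm

def communicability(W):
    """Weighted communicability: CO = expm(D^{-1/2} W D^{-1/2}), where D is the
    diagonal matrix of node strengths.  Walks of length k are discounted by 1/k!."""
    strength = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(strength))
    return expm(D_inv_sqrt @ W @ D_inv_sqrt)
```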
A simple pairwise Pearson’s correlation between OI and each CM matrix showed a large positive correlation between OI and CO (R2=0.88; all p-values in this work are strictly smaller than 0.0001 unless mentioned otherwise) and a large negative relationship with SI (R2=0.68), while other metrics such as SPE, NE, and DE exhibited nonlinear relationships with OI. This finding (Figure 4) suggests that, although all conceptualizations capture a degree of OI, the one assuming attenuating diffusion dynamics for signal propagation over multiple pathways most effectively mirrors OI. This result aligns with existing studies indicating CO’s efficacy in capturing the spread of electrical stimulation to unconnected nodes [40], predicting large-scale functional modules more accurately than SC [41], and determining the degree of compensation by other regions following perturbation [42].
However, we noticed that CO slightly underestimates the true influence of longer pathways. To address this limitation, three alternative approaches were explored: adjusting communicability with a scaling factor (scaled communicability; scaled CO) [43], using a linear instead of an exponential attenuation (Linear Attenuation Model; LAM), and employing the covariance structure of a spatial autoregressive model (SAR) that considers linear attenuation and regional co-fluctuation. Each of these models incorporates a single adjustable parameter: a scaling factor for scaled CO, an attenuation factor for LAM, and a degree of spatial influence for SAR. After fitting these parameters to replicate OI, we found that only SAR rectified the issue (Figure 5A). While scaled CO and LAM showed marginal improvements in variance explained (R2=0.89 and 0.9, respectively) compared to CO (R2=0.88), SAR achieved near-perfect performance (R2=0.997). We then fitted SAR to CO instead of OI to evaluate to what extent CO captures the real degree of spatial influence. Varying the SAR parameter indicated that CO’s positioning on the spectrum is considerably skewed towards a steep attenuation (0.06), in contrast to OI’s more moderate rate of spatial decay (0.43). This finding supports our intuition that CO is overly strict in discounting longer pathways, since it assumes the signal to fade almost seven times faster than it actually does. It also indicates that a wide range of discount factors between 0.2 and 0.7 performs similarly well (Figure 5B), relaxing the need to search for an exact value (the same applies to the scaling factor of scaled CO and the attenuation factor of LAM; see Supplementary Figure 1).
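The SAR model admits a closed-form covariance, which makes fitting its single parameter straightforward. Below is a hedged sketch, assuming a row-normalized structural matrix and a simple grid search over the spatial-influence parameter rho; the paper's fitting procedure may differ.

```python
import numpy as np

def sar_covariance(W, rho, sigma2=1.0):
    """Covariance of the spatial autoregressive model x = rho*W x + eps:
    Sigma = sigma2 * (I - rho*W)^{-1} (I - rho*W^T)^{-1}.
    `rho` plays the role of the 'degree of spatial influence'; W is row-normalized
    so that rho in (0, 1) keeps the model stable."""
    n = W.shape[0]
    Wn = W / W.sum(axis=1, keepdims=True)     # row-normalize the structural weights
    B = np.linalg.inv(np.eye(n) - rho * Wn)
    return sigma2 * B @ B.T

def fit_rho_to_OI(W, OI, grid=np.linspace(0.05, 0.95, 19)):
    """Grid search for the rho that best reproduces OI (off-diagonal Pearson R^2)."""
    mask = ~np.eye(W.shape[0], dtype=bool)
    def r2(rho):
        S = sar_covariance(W, rho)
        return np.corrcoef(S[mask], OI[mask])[0, 1] ** 2
    return max(grid, key=r2)
```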
We subsequently explored the depth of influence in terms of the number of hops, specifically addressing how far a signal travels before its causal impact substantially diminishes. To this end, we fitted a linear regression model to results from progressively diffused signals in the network. Put simply, we iteratively predicted the OI starting from walks of zero length (no spread of influence) and incrementally considered longer walks, applying a monotonic discount to additional steps. The analysis, as depicted in Figure 6A, demonstrated peak performance at the sixth step, achieving an R2 of 0.75. This means that the summation of only the first six steps captures 75% of the variance in OI (also see Supplementary Figure 8 for the result from an exponential discount instead of a linear one). Further, a multivariate Lasso-regularized linear regression model was trained on all walks simultaneously to identify a parsimonious combination that most effectively predicts OI (Figure 6B). In line with the result from the univariate model, walks of length five, six, and eight (excluding seven) contributed the most, allowing the model to predict OI with a high degree of accuracy (R2=0.96). These results imply that signal influence notably decreases after approximately eight processing steps, despite the network’s diameter being around ten steps, meaning that the information flow between the most distant nodes is heavily degraded (Figure 6B). While information from the initial eight steps is sufficient to explain 96% of the OI, the superior predictive performance of models such as SAR, LAM, and CO suggests that incorporating all possible paths yields additional, albeit small, information. Moreover, the real advantage of communication models lies in their relative simplicity compared to using a regularized multivariate model. Altogether, it is important to note that the longer processing pathways that are missed by CO, scaled CO, and LAM still influence the target node, but, based on how much SAR gained in predictive performance, they account for only about 5% of the whole communication dynamics. This finding provides a rough estimate of how much mismatch, and in which pathways, is expected when using CO, scaled CO, and LAM in human neuroimaging data.

However, when these communication models were applied to mouse and macaque connectomes, a discrepancy in capturing OI in directed networks emerged (Supplementary Figure 2). SAR marginally outperformed LAM and scaled CO in the mouse connectome (r=0.21, 0.18, 0.17, respectively) but showed significantly better performance in the macaque (r=0.92, 0.24, 0.23, respectively). The enhanced performance from mouse to macaque hints at the potential influence of network size, considering the macaque network comprises 29 nodes compared to the mouse’s 112. Nonetheless, the specific effects of network size, density, reciprocity, weight distribution and normalization, as well as other global characteristics on modelling OI necessitate further systematic investigation using synthetic network models. This finding delineates the limitations of current CMs in accurately depicting optimal signal propagation in large, directed networks. However, the observed inverted U-shaped fitting curves of LAM and scaled CO (detailed in Supplementary Figure 2) suggest a slower spatial decay in signal propagation than assumed by models like CO.
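The walk-length analysis can be reproduced schematically by stacking matrix powers of the scaled structural matrix as features and letting a sparse regression choose which step lengths matter. The sketch below assumes `W` (structural weights) and `OI` (the optimal-influence matrix) are available as numpy arrays; the variable names and scaling convention are illustrative.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def walk_features(W, max_steps=10):
    """Stack matrix powers W^0 ... W^max_steps (walk counts of increasing length)."""
    Wn = W / np.abs(np.linalg.eigvals(W)).max()   # scale so that powers do not explode
    mask = ~np.eye(W.shape[0], dtype=bool)        # off-diagonal (source, target) pairs
    feats, P = [], np.eye(W.shape[0])
    for _ in range(max_steps + 1):
        feats.append(P[mask])
        P = P @ Wn
    return np.column_stack(feats), mask

X, mask = walk_features(W, max_steps=10)
X = StandardScaler().fit_transform(X)             # make coefficients comparable
model = LassoCV(cv=5).fit(X, OI[mask])
print(model.coef_)                                # contribution of walks of length 0..10
```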
In essence, despite the struggle of these models to precisely capture the optimal signal flow in larger directed networks, there is a consensus that the influence of regions extends further than traditionally postulated by communication models.
In our final analysis of this section, we sought to determine whether other communication models, despite their complex relationship with OI, could surpass CO in predicting it. Additionally, it is possible for more intuitive variables, such as the fiber length or Euclidean distance, to provide better predictive performance. To see if OI is better predicted using available variables, including CMs, another multivariate regularized linear regression model was employed (Figure 6C). Excluding SAR, the model successfully explained 93% of the variance in OI using just three variables: LAM, CO, and SPE, with LAM making the most significant contribution. We then trained a set of models with the same variables but different attenuation factors of LAM and SAR to observe the dynamics of feature importance across different factors. To our surprise, at its optimal parameter value, SAR emerged as the predominant predictor, effectively simplifying the statistical model to a univariate format and once again perfectly predicting OI (Figure 6C). SAR maintained its dominance as the most critical variable over a range from 0.1 to 0.8, after which CO became more influential. This shift underscores the role of signal discounting, elucidating why DE was the least effective model in our study. The reason is that, at around α = 1, where no discount is considered, SAR converges to DE, rendering it uninformative, whereas LAM retains its distinctiveness, surpassing both CO and SAR. Averaging over all values of α shows that SAR was the most important feature, followed by CO and LAM (Figure 6D), all of which follow broadcasting-like communication dynamics.
Collectively, our findings indicate that optimal influence in brain networks requires signal propagation across multiple pathways, akin to a broadcasting approach. The influence diminishes with each processing step, but not as quickly as it is assumed by CO. Moreover, a simple statistical model, i.e., SAR, accurately predicts the network’s influence structure at the optimal signal propagation regime. Importantly, previous studies indicated that SAR performs as well as many other biophysically detailed NMMs in predicting the FC [44,45]. Interestingly, comparing the performance of multivariate models against simple CMs suggests that little is gained from additional variables and more computationally expensive predictive models.
It is important to acknowledge that all CMs, including SAR, are predicated on a linear dynamic model, prioritizing simplicity over biophysical complexity. However, brain regions are argued to perform nonlinear operations, show oscillatory patterns of activity, and exhibit metastability by switching between multiple states to support cognition and behavior. Therefore, to affirm the notion of optimal communication in brain networks, it is essential to compare these findings with more realistic dynamics that capture such complex phenomena.
Global Network Topology is More Influential Than Local Node Dynamics
Previous research has demonstrated that linear models of local dynamics are as effective as, if not more effective than, nonlinear models in capturing the macroscopic dynamics of the brain [44,46–48]. This observation led us to hypothesize that optimal signal propagation in large-scale brain networks, as measured by the fMRI-BOLD signal, might also be adequately represented by linear communication models. In other words, we hypothesized that the equilibrium state of the decomposition should remain relatively consistent across different dynamical models, irrespective of their complexity. This expectation would hold true unless the local dynamics — how regions process incoming signals — impose substantial constraints on information flow within the network.
To evaluate this hypothesis, we incorporated two additional models into our analysis: one using a nonlinear hyperbolic tangent (Tanh) transfer function, contrasting with the linear transfer function used in the linear model, and the other a well-established oscillatory NMM known as the Hopf/Stuart-Landau model, operating in its critical metastable regime [49]. Pearson’s correlations comparing these models with the linear model confirmed our hypothesis. As depicted in Figure 7A, both the nonlinear and Hopf models showed high correlations with the linear model (R2=0.98 and 0.94, respectively). Further, we investigated the influence of local dynamics on OI by contrasting a model fitted to FC with one that was not, hence producing dynamics that poorly resemble the FC. The resulting scatter plot of OI for these models, as shown in Figure 7B, indicated another strong correlation (R2=0.93). Collectively, these findings suggest that the equilibrium state of optimal signal propagation does not substantially vary even when employing more complex dynamical models. The reason is that, as previously found [31], while nonlinear transformations significantly impact the time-varying structure of individual contributions, they do not alter their overall amplitude, which in this context is viewed as the total amount of influence one node exerts over another. Scaling, on the other hand, which determines the magnitude of these contributions, is inherently governed by the structural connectome across all models. Consequently, the total influence of one node on another is predominantly dictated by network topology rather than the specific dynamical model employed. However, the time-resolved pattern of fluctuations is specified by the model, and this information is neglected here as a direct consequence of collapsing the temporal dimension (refer to Supplementary Figure 3 for more details).
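As an illustration of what such a model comparison involves, the sketch below simulates one plausible discrete-time variant of a Tanh model and indicates how quantities of interest could be correlated across models. The functional form, parameters, and comparison are illustrative assumptions; the paper's exact Tanh and Hopf formulations are specified in its Methods.

```python
import numpy as np

def simulate_tanh_model(W, g=0.9, T=20000, noise=0.1, seed=0):
    """Illustrative nonlinear variant: each node applies a tanh transfer function
    to its weighted input and receives additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    A = g * W / np.abs(np.linalg.eigvals(W)).max()   # keep the coupling in a stable range
    x, traj = np.zeros(n), np.empty((T, n))
    for t in range(T):
        x = np.tanh(A @ x) + noise * rng.standard_normal(n)
        traj[t] = x
    return traj

# e.g., compare off-diagonal entries of two OI matrices (Pearson R^2), assuming
# OI_linear and OI_tanh were each derived with the MSA pipeline:
# mask = ~np.eye(OI_linear.shape[0], dtype=bool)
# r2 = np.corrcoef(OI_linear[mask], OI_tanh[mask])[0, 1] ** 2
```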
In summary, our findings indicate that linear models of dynamics effectively capture the degree of influence regions exert on each other under the optimal signal propagation state. Findings from this section not only corroborate previous research suggesting that macroscopic dynamics of large-scale brain networks are well approximated by linear models, but also support the hypothesis posited in the preceding section. Together, these two aspects suggest that it is the structural connectivity that shapes the landscape of optimal signaling in brain networks, rather than the models of local dynamics, since OIs are remarkably similar irrespective of the biophysical complexity embedded in models of local dynamics. The equilibrium reached in this game-theoretical framework, as demonstrated, is accurately represented by simplified abstract models of signal propagation. Thus, not only do simple communication models such as SAR perform on a par with NMMs in predicting empirical FC, but also, they represent how information optimally flows in large-scale brain networks.
Rich-Clubs Harness Parallel Pathways to Amplify Their Influence
It has been shown that a set of cortical hubs incurs a relatively large wiring cost to link with other distant regions, forming a rich-club organization [50–52]. Thus, these topologically central nodes are argued to be the backbone of information transfer across multiple distant specialized modules [53,54]. Previous works proposed that these connections play a crucial role in information processing by diversifying areal input-output connectivity [55], leading to a richer functional repertoire [43], or integrating information from the whole brain [56]. Moreover, it was shown that the connections among adjacent regions are stronger than those among distant areas [57,58]. Empirical findings and modeling work have suggested an exponential distance rule by which the strength of connections decays exponentially with spatial distance [59,60]. For instance, in our human connectome dataset, the correlation between structural weights and fiber lengths is -0.6 (see Supplementary Figure 4). Together, these findings raise a question: how, if at all, do rich-club regions keep their lines of communication optimal, given the relatively weak connection strengths of long-distance projections?
To answer this question, we first investigated the relationship between the strength of connections between pairs of connected regions and the respective influence they exert on each other. Intuitively, the stronger a connection is, the greater its influence, leading to stronger communication between two regions. This relation has also been supported experimentally by comparing the effect of optogenetic stimulation of a region on its connected neighbors [61]. Our result corroborated this finding and showed a strong correlation between the structural weight and the amount of influence nodes exert on their connected neighbors (r=0.95). However, several pair-wise interactions fall well above the regression line (Figure 8A), suggesting that some regions exert strong influence on others despite their weak connection weights (likewise noticeable in both mouse and macaque connectomes; Supplementary Figure 5A). This ‘bump’ is also captured by SAR, LAM, and CO (Supplementary Figure 5B), providing further evidence that these simple models can reproduce optimal information flow among connected regions. As Figure 8B-D shows, these connections mostly lie between regions located on the medial surface of the cortex, coinciding with rich-club regions [50,62]. We then computed the amount by which each node influences other unconnected regions (Figure 8F) and found the same regions to be the most influential, providing more evidence that this influence is not related to the weak link itself, but to the node’s overall connectivity. To test this hypothesis, we first computed several graph centrality measures, based on both the shortest-path and the random-walk signaling conceptualizations. A Spearman’s rank correlation between how influential a node is and how central it is suggests two conclusions (Figure 8E): First, all centrality measures show a moderate to strong positive correlation with the amount of influence nodes exert over the rest of the network, directly and indirectly. Second, those with the largest indirect influence (indicated in red) always lie in the top-right of the data cloud, supporting the idea that they are topologically central. We further utilized null and synthetic networks to see whether we could delineate the roles of connection strength and connectivity pattern in signal propagation. Although less noticeable, the trend persisted when the network topology was shuffled while nodal strength was preserved. It also persisted when the weights were shuffled while the topology was held constant. The trend only disappeared when both topology and weights were allocated randomly in synthetic random networks (Supplementary Figure 5C-E).
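The centrality comparison can be sketched as follows, assuming `W` (structural weights) and `OI` are numpy arrays and taking node-wise total influence as the row sums of OI; the specific centrality measures and weighted variants used in the paper may differ (for weighted shortest-path-based measures one would typically convert weights to lengths first).

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

G = nx.from_numpy_array(W)                            # undirected weighted graph
centralities = {
    "strength":    dict(G.degree(weight="weight")),
    "closeness":   nx.closeness_centrality(G),         # hop-based; weighted variants need a length transform
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality_numpy(G, weight="weight"),
    "pagerank":    nx.pagerank(G, weight="weight"),     # random-walk based centrality
}
total_influence = OI.sum(axis=1)                       # assumed: OI[i, j] = influence of i on j
for name, c in centralities.items():
    rho, _ = spearmanr([c[i] for i in range(len(c))], total_influence)
    print(f"{name:12s} Spearman rho = {rho:.2f}")
```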
We then compared the edge betweenness centrality of every connection with its strength, under both the shortest-path and the random-walk conceptualizations. The shortest-path edge-betweenness centrality (SPEBC) of a link measures how many shortest paths in the network go through this specific connection, while the random-walk edge-betweenness centrality (RWEBC) counts the number of times a random walker traverses this link to move from one node to another. While the trend is missing when these measures are compared to the connection weight, it is apparent when they are compared to the influence, suggesting that the effect cannot be captured by how central individual links are under these two signaling strategies (Figure 9A and B). Together, these findings suggest that neither connectivity nor weights alone are responsible for the emergence of efficient signal propagation between weakly connected nodes. Rather, the signature of greater influence than expected from the underlying weight is only observed when central nodes are endowed with weak links.
Lastly, to further investigate this phenomenon, we systematically changed the degree of spatial influence for a SAR model while calculating the spread of data along the second principal component (PC). The first PC captures the main direction of variability between the influence and connection strength, which is the observed strong relationship. However, the second PC captures how much the data points spread orthogonally to the main axis, indicating the extent to which weak links are influential. In other words, this metric measures the variability around the regression line: a small variability indicates a robust prediction of influence from the connection weight, showing their tight coupling. In contrast, a large variability approximates how much communication between two nodes was independent of the connection strength, potentially relying on other parallel pathways (Figure 10). We found that a steep decay factor leads to low variability, as nodes could only influence their connected neighbors before the signal subsides. A shallower decay rate, on the other hand, resulted in more variability, since the signal could traverse longer parallel pathways, echoing through the network and reaching the target from multiple fronts (Figure 10). This finding provides further evidence for central nodes utilizing not only the direct path but also other, longer paths to compensate for the weak direct connection strength.
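Concretely, the spread along the second PC can be computed as below from the (connection weight, influence) pairs of connected node pairs; z-scoring both variables and the exact SAR sweep are illustrative choices that may differ from the paper's procedure.

```python
import numpy as np

def off_axis_spread(weights, influence):
    """Spread of (connection weight, influence) pairs along the second principal
    component: small values mean influence is tightly predicted by the weight,
    large values mean much of the influence is carried by other (parallel) pathways."""
    X = np.column_stack([weights, influence])
    X = (X - X.mean(axis=0)) / X.std(axis=0)        # z-score both variables
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(X.T)))[::-1]
    return np.sqrt(eigvals[1])                       # standard deviation along PC2
```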
Moreover, we found that these nodes also have both larger average controllability and larger modal controllability (see Supplementary Figure 6), providing insight into their potential role in steering the dynamics of the brain. Average controllability measures the ability of a node to push the global dynamics into easy-to-reach states, while modal controllability measures the power of a node to push the network towards farther, hard-to-reach states. It has been shown that nodal strength has a strong positive correlation with average controllability and a strong negative correlation with modal controllability [63]. Our results also support this finding, with a Spearman’s rank correlation between nodal influence and average controllability of 0.86, and of -0.63 for modal controllability. However, the influential nodes are clustered in the top-right corner of the scatter plot for modal controllability (Supplementary Figure 6). This finding suggests that these nodes steer the network to both easy-to-reach and hard-to-reach dynamical states due to their extensive local and long-range connectivity.
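For completeness, average and modal controllability can be computed per node from the structural matrix as in the standard network-control formulation (e.g., as popularized by Gu and colleagues); the normalization and discrete-time formulation below are common choices, but the paper's exact implementation may differ.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def controllability_metrics(W):
    """Average and modal controllability per node for the linear discrete-time
    system x[t+1] = A x[t] + B u[t], with A = W / (1 + sigma_max) for stability."""
    A = W / (1.0 + np.linalg.svd(W, compute_uv=False)[0])
    n = A.shape[0]
    # average controllability: trace of the controllability Gramian with input at node i
    avg = np.empty(n)
    for i in range(n):
        Q = np.zeros((n, n))
        Q[i, i] = 1.0
        avg[i] = np.trace(solve_discrete_lyapunov(A, Q))
    # modal controllability: phi_i = sum_j (1 - lambda_j^2) * v_ij^2  (A assumed symmetric)
    lam, V = np.linalg.eigh(A)
    modal = ((1.0 - lam**2)[None, :] * V**2).sum(axis=1)
    return avg, modal
```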
Altogether, results from this section propose a mechanism by which hub regions optimally communicate with their distant counterparts, given the structurally weak connections between them. These regions harness their topologically central positions to broadcast their signals not only via their direct connections but also via parallel indirect ones, and thereby steer the global dynamics of the brain towards different states, even those that are energetically costly to reach. In other words, a weak connection between two ‘peripheral’ nodes results in weak communication, whereas central nodes compensate for this inefficiency by having access to many pathways through which they amplify their signal.
Discussion
In this work, we explored the characteristics of brain communication in a state of optimal signal propagation. We employed a game-theoretical framework to uncover how nodes influence each other optimally, given a set of constraints. We defined these constraints as the network’s structural topology and its functional activity, which emerged from three increasingly complex large-scale brain models. These models ranged from a linear model, where the output of each node is the weighted sum of its inputs, via a nonlinear model, in which the node modifies the incoming signals using a Tanh function, to a Hopf model that, relative to the other two models, represents more biophysical realism, such as conductance delay, metastability, and oscillatory dynamics of the brain signals, at the cost of greater computational complexity. Our results indicate that the strength by which nodes influence each other depends less on the local node model and more on the global network structure, permitting us to apply a wide range of graph theoretical metrics that fundamentally relate to the linear model. For instance, comparing putative communication models, we found that the optimal signal propagation in brain networks is better captured by frameworks that conceptualize communication to follow a broadcasting strategy. This means that, although signals travel along multiple parallel pathways, they subside after each processing step and degrade after six to eight steps. Finally, we show that topologically central brain regions, such as precuneus, prefrontal cortex, and anterior cingulate cortex, amplify their influence over remote regions by harnessing such parallel pathways. These results have several implications and caveats that are discussed below.
Interpreting Game-theoretical Solutions for Influence Decomposition
Here, we argue for developing normative formulations of brain signaling and network organization. This approach follows similar studies in synthetic brain networks, where a study also using game theory showed how nodes that aim to maximize the network’s navigability while minimizing their wiring cost organize into structures remarkably similar to empirical brain networks [5]. Interestingly, related work [64] addressed the long-standing question of why six degrees of separation exist in social networks [65]. Using normative modeling provided by game theory, the authors found the reason to be a trade-off between a node’s aspiration to maximize its centrality while minimizing its connection maintenance cost [64]. In the present work, we also found that the first eight steps of propagation are sufficient to reconstruct OI, with a peak at the sixth step.
Also relevant for our work, the Shapley value has been shown to converge to a Nash equilibrium in some games, for instance bargaining, in which players have to divide a sum among themselves while maximizing their payoffs, allowing us to draw a connection between this value and the notion of the equilibrium state [32]. Interestingly, this finding has been experimentally replicated: subjects playing a simplified bidding game frequently settled on division solutions very similar to their Shapley values [66]. Thus, it is important to note that the equilibrium discussed in the current paper is a game-theoretical equilibrium in the solution space (Figure 2B) and not necessarily associated with the neural dynamics. Related to this point is the notion of optimality. Here, by optimal influence, we refer to the optimal solution for decomposing influences from the cooperative game-theoretical perspective, which translates to the non-cooperative game-theoretical definition of optimality as the largest amount of influence that a node can potentially exert on a target given the game constraints. As depicted in Figure 2B, optimality here refers to the equilibrium point in the solution space. We call it optimal because, as shown before [27], the Shapley value is a unique solution, and in fact the only solution that satisfies three of the four axioms related to fairness [26,67], which guarantees stability and dominance of the solution compared to others. Lastly, the Shapley value is also used in the context of interpretability of machine learning features by frameworks such as SHAP (SHapley Additive exPlanations; see [68,69]). However, what is done in the present work is fundamentally different from SHAP. Although both frameworks compute Shapley values, SHAP does so for input features of machine learning algorithms. In contrast, here we compute the Shapley value of each node of the network, with respect to all other nodes, at every time point. This warrants a different interpretation of the results compared to the SHAP framework.
There is a large body of research aiming at decomposing the amount of influence originating from a set of source nodes on a target node. Some attempts have used information theory, such as causal information [70] and information transfer [71], as well as Partial Information Decomposition (PID) [72–74]. In PID, the information transferred can be further decomposed into synergistic, redundant, and unique contributions. In the current work, we deliberately used the wording of influence instead of information, since a formal link between these two properties still needs to be established (but see [75] for an example of equating them). Our framework, however, does not measure the amount of information transferred as defined by information theory; it also does not explicitly decompose the influence into subparts. What it does is characterize the amount of influence one node has over another at any time point, given a modeling paradigm (see Supplementary Figure 3). This influence embeds higher-order interactions, such as synergistic and redundant interactions, but eventually summarizes them into one value per time point, describing the average contribution of a node to its targets given its contribution to all possible coalitions of sources. Therefore, we believe that our framework complements the rich body of previous research by providing another level of rigorous granularity, specifically, time-resolved exact influence. Further research is needed to establish where the two perspectives of game theory and information theory converge, and how combining them could provide insights about communication in brain networks.
On a related note, due to its reliance on multi-site lesioning, MSA has been used to address the complexity resulting from higher order interactions in the context of causal inference [76–79]. Following the well-established Woodwardian account of causality, that is, the manipulability account [80], MSA at its core is a causal inference framework. The manipulability account of causality [80] states that “C is a cause of E, if manipulating C results in tractable changes in E”. Here we define manipulation as lesioning, as is customary in neuroscience [81–84]. We then address the problem of “counterfactuals” by exploring all possible ways that causes can interact at any time-point to produce an effect, leading to one of the most detailed accounts of causality. Therefore, placing our method in the large family of effective connectivity (EC) methods appears as a natural next step. However, here we decided to opt out of this comparison in favor of communication models, since they conceptualize brain-wide causal interactions, which involve both direct and indirect interactions, uncovering causal pathways instead [85]. Expanding causal relationships to pathways of different lengths allows for multiple causes to interact and jointly produce an effect. Moreover, comparing our findings with those from EC approaches would involve considering a host of methods, not limited to but including spectral dynamic causal modeling [86], partial correlation analysis, Granger causality, dynamic differential covariance [87,88], transfer entropy [71], and so forth. We believe that this worthwhile comparison calls for a separate and extensive analysis that is beyond the scope of the current work.
Finally, our framework is model-agnostic, meaning that the game and its constraints should be defined first. This approach comes with great liberty to explore different dynamical models. However, we acknowledge that this freedom can be a double-edged sword. On the one hand, it allows researchers to investigate a multitude of questions, potentially providing meaningful insight into the mechanism of communication and its role in behavior. On the other hand, it can result in ill-conceived games that lead to misinterpretations. For instance, here the time-resolved influence of all nodes over each other forms a three-dimensional array. It would be tempting to average over the time points and produce a time-averaged influence matrix. However, since our models were stationary, the average of this time course would be highly noisy, providing little insight into “how much” nodes modulate each other’s activity. Instead, we computed the variance to capture this feature, but one might as well compute alternative metrics, such as the standard deviation, entropy, or energy, each of which requires careful interpretation. Moreover, variance alone might be a poor indicator of influence if the network has inhibitory connections, as both biological and artificial neural networks (ANNs) tend to have. By contrast, the present work focuses on conventional neuroimaging datasets with non-negative connection weights.
In sum, it is crucial to define every step of the game carefully to avoid misinterpretations. It is also important to keep in mind the provided definitions for optimality, equilibrium and influence to avoid confusion in interpreting the results of the game-theoretical analysis.
Broadcasting as an Optimal Signaling Strategy
Many previous studies in network neuroscience have relied on a graph theoretical definition of efficiency that is based on the shortest path distance [89]. Here, we show that the underlying signal propagation dynamics in large-scale brain networks is likely to follow a broadcasting regime. This finding implies that broadcasting goes beyond signaling along the shortest path but does not downplay its role. In other words, we show that the shortest path is critical, but not the only path along which a signal traverses. The broadcasting mechanism also circumvents a major problem of the shortest path signaling concept, namely that nodes would have to find the shortest path to every other node in the network without access to global information on the network organization. By broadcasting, nodes simply transmit their signal via their outgoing connections in every direction. And since the signal degrades over every step, nodes topologically closest to the source receive the signal most strongly. Put differently, navigation along the shortest path emerges naturally and without a need for centralized routing strategies, simply because the closest node receives more signal compared to the farther ones. This line of reasoning is compatible with the result depicted in Figure 8E, where closeness centrality predicted the influential nodes with notably greater accuracy than other centrality measures. Additionally, communication models, such as communicability, are thought to be energetically costlier than routing strategies, such as signaling exclusively along the shortest paths [12]. The reason for this argument is the fact that communicability accounts for paths of all lengths. However, as shown here, paths longer than seven steps are unlikely to contribute meaningfully. In support of this finding, Griffa and colleagues [90], using information theory, found not only that parallel communication is present in the human cortex, but also that pathways longer than five steps are not as effective. These observations suggest that regions balance the robustness and the energetic cost of signal transmission by using a handful of parallel pathways. Intriguingly, Griffa et al. reported that the capacity for parallel communication is larger in the human cortex compared to macaque and mouse, proposing an evolutionary advantage for broadcasting over multiple pathways [90]. They also found that these pathways are more frequently placed between different functional modules, rather than within them, which aligns with our finding that hub regions harness parallel pathways to compensate for weak structural connections among them.
Another implication of this finding is that metrics such as global efficiency, representing the average shortest-path distance between all nodes, are not ideal for capturing the signaling dynamics of the brain. As Zamora-Lopez and Gilson argue [91], all such graph theoretical metrics assume some sort of dynamics. It is important to first establish the correct dynamics and then apply the corresponding metrics. Prompted by our findings, we would suggest that, when possible, metrics based on the shortest path distance should be replaced with others based on communicability, LAM, or SAR. For instance, instead of using the average shortest path as a measure of global network efficiency, one may compute the average communicability as global broadcasting efficiency. Similarly, closeness centrality may be redefined as the broadcasting strength by taking the average node-wise communicability instead of the average node-wise shortest-path distance.
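As a concrete illustration of this suggestion, the sketch below defines communicability-based analogues of global efficiency and closeness centrality; the normalization of communicability and the averaging conventions are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def broadcasting_metrics(W):
    """Communicability-based analogues suggested in the text: global broadcasting
    efficiency (mean off-diagonal communicability) and nodal broadcasting strength
    (mean communicability of each node to all others)."""
    strength = W.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(strength))
    CO = expm(D @ W @ D)                               # weighted communicability
    mask = ~np.eye(W.shape[0], dtype=bool)
    global_broadcasting_efficiency = CO[mask].mean()
    broadcasting_strength = np.where(mask, CO, 0.0).sum(axis=1) / (W.shape[0] - 1)
    return global_broadcasting_efficiency, broadcasting_strength
```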
Next, it is not yet clear whether the broadcasted signal is modified along longer chains in the network, or just relayed to boost the transmitted information. If the signal is modified, then it is likely that these pathways serve a computational role. Previous work suggests that within-module communication involves more redundant information compared to communication across modules, which is mainly synergistic [74,92]. Interpreting our work from this perspective, information travels along fewer parallel pathways within a functional module to be processed locally, resulting in higher redundancy. However, signals are likely modified along the parallel pathways and combined with incoming signals from other modules, leading to a greater synergy.
Interestingly, the broadcasting conceptualization is argued to be a better model of information propagation in social networks compared to random-walker-based navigation [93]. In neuroscience, the broadcasting model of information propagation bears qualitative resemblance to concepts such as traveling waves of excitation [94], bridging a gap between communication models and communication via oscillation. Notably, we found a correlation of 0.23 between OI and FC (Supplementary Figure 9), which is close to what has been previously reported by comparing cortico-cortical evoked potentials (CCEPs) with FC [95]. Moreover, it has been proposed that at least a subset of CCEPs propagate as traveling waves [96]. Together, these works and our findings suggest, first, that correlation-based measures such as FC capture the propagation of information in brain networks only to a limited extent, and second, that an approach combining CMs with traveling waves could provide biophysical mechanisms for simple signal propagation models based on a broadcasting regime, such as communicability.
Finally, we found that the rich-club organization of the human cortex is a major contributor to shaping brain-wide communication dynamics. An inspiring work using normative modeling suggested that hierarchies and rich-club organization are spandrels of modules and hubs [97]. Spandrels are evolutionary features that evolve alongside adaptive phenotypes while themselves serving no adaptive function. In other words, the rich-club organization is proposed to be a byproduct of modular organization that serves no adaptive function by itself [98]. Our work provides evidence that, although the rich-club organization could have emerged as a byproduct of modular organization, it might not be a purely ornamental spandrel, as the rich-club regions represent the most influential regions of the brain (Figure 8). Previous works relating rich-club regions to signaling also indicated an integrative role for these regions, which have longer temporal receptive fields than others, allowing them to integrate more information over time [99,100]. Interestingly, our results depicted in Figures 8 and 10 support this idea: we found that rich-club regions integrate signals over longer, parallel structural pathways, enhancing their broadcasting capacity. Moreover, recent empirical work identified disturbances in these regions and their broadcasting efficiency in patients suffering from disorders of consciousness [101]. Put together, these findings suggest a causal functional role for the rich-club organization of the human cerebral cortex.
Role of Network Dynamics in Optimal Signaling
Although our work suggests that the network's structure plays a greater role than the specifics of local node dynamics in shaping the state of optimal signal propagation, this finding does not mean that local dynamics are irrelevant. We believe that this imbalance between the network's structure and dynamics in determining signal flow arises from factors such as the assumed homogeneity of nodes in our network models. Here, nodes are assumed to be identical objects with identical features and dynamics. This assumption certainly does not hold in the brain, and a growing body of network models that incorporate regional differences aims to address this issue [102,103]. As mentioned before, our framework is model-agnostic, so it is also possible to interrogate networks with heterogeneous models of dynamics. When applying the present approach to heterogeneous networks, we hypothesize that, first, the way nodes influence one another decouples further from the structure [76] and, second, simple communication models will fail to replicate OI as well as they did in this work. Thus, further work can shed light on the interplay between the network's structure and its dynamics in shaping OI, since heterogeneous network models are not only biologically more plausible but can also help to investigate how information optimally flows in networks where nodes respond differently to incoming signals.
Limitations and Further Directions
A conceptual note is that OI, as a normative framework, describes the landscape of communication dynamics under the assumption that nodes maximize their influence. As in other game-theoretical models, this assumption might not fully hold. The derived landscape is a prediction that would need to be tested experimentally using in-vivo multi-site lesioning experiments, which is currently not feasible. However, recent work indirectly supports our findings by showing that the propagation of electrical stimulation in the human cortex is well captured by communicability [40], which we found to have one of the largest correlations with OI. There are also technical limitations to our framework, specifically its computational cost, since one potentially needs to simulate millions of in-silico lesion combinations for each target node. For instance, in this work we performed around half a billion lesion simulations for just one game, such as the linear model on the human connectome (see section Game-theoretical Framework in Material and Methods). Performing such an intensive analysis is not possible on a standard laptop. Therefore, due to its brute-force approach, our framework provides rigorous results at the expense of high computational effort, apart from the fact that it is yet to be applied in-vivo (but see [104]).
Finally, we focused on the influence of nodes on each other without relating these influences to behavior or cognitive function. Moreover, we did not investigate how this influence landscape changes due to brain disorders, aging, and other factors. Future work may take on these challenges, for instance, by tracking the dynamics of OI during neurodegenerative disease progression.
Material and methods
In this study, we used multiple Python libraries, including MSApy [30], Neurolib [105], Netneurotools, BCTpy [106], nctpy [107], and Networkx [108]. The Jupyter notebooks and Python functions to reproduce this work are available at the GitHub repository below: https://github.com/kuffmode/OI-and-CMs
The simulation results are available at the link below:
https://zenodo.org/records/10849223
Additionally, a dedicated, user-friendly Python library to compute OI and several communication models has been developed and can be found at:
https://github.com/kuffmode/YANAT
All connectomes are available as part of the Netneurotools library:
https://github.com/netneurolab/netneurotools
Structural and Functional Connectomes
We analyzed three publicly available connectomes (human, macaque, and mouse), which allowed us to address differences between invasive and non-invasive imaging modalities and reconstruction techniques.
Human Connectome
Structural and functional connectivity datasets were acquired from a cohort of 70 healthy subjects at the Lausanne University Hospital in Switzerland; the details are described elsewhere [34,35,109]. Briefly, the group comprised individuals with an average age of 28.8 years (standard deviation 9.1 years), including 27 females, and all were scanned using a 3 Tesla MRI machine. Deterministic streamline tractography was employed to create individual SC matrices from each participant's diffusion spectrum imaging (DSI) data, which were reconstructed at five distinct levels of brain parcellation. Here, however, we used only one, with a relatively high resolution (219 cortical regions). The strength of each structural connection within these matrices was approximated by the density of fiber tracts.
To mitigate the size disparities between brain regions and counteract the inherent preference for longer fiber tracts in the tractography technique, a normalization procedure was applied. This involved adjusting the streamline counts by the average surface areas of the connected regions and the mean streamline lengths. The culmination of this process was the formation of a unified, binary group-average SC matrix, which was derived using a consensus strategy that ensured the individual edge length distributions were conserved [34,110].
Functional brain data were gathered from the same group of subjects through resting-state functional MRI (rs-fMRI) scans conducted with eyes open. The initial processing of these functional data included corrections for several physiological variables, notably the influence of white matter, cerebrospinal fluid, and motion artifacts comprising three axes of translation and rotation as determined by rigid-body co-registration. Following these corrections, the blood oxygen level-dependent (BOLD) time series were low-pass filtered with a temporal Gaussian filter set to a full width at half maximum of 1.92 seconds. The first four time points of each scan were excluded from the analysis, and the data were further refined through high-motion frame censoring, a method detailed by Power and colleagues [111].
For the assessment of FC, zero-lag Pearson correlation coefficients were calculated between the rs-fMRI time series of every pair of brain regions for each individual. Lastly, the group-average FC matrix was computed as the mean of these individual pairwise connectivity values and retains negative values.
Macaque Connectome
The connectivity matrix for the macaque was derived from retrograde tract-tracing experiments and is provided by [59]. These experiments involved the use of fluorescent tracers in a group of 28 macaque monkeys. The projections that were reconstructed from these experiments were mapped according to a division of 91 cortical areas. This division was based on a combination of histological examinations and atlas-based references. In each tract-tracing experiment, Markov and colleagues quantified the number of neurons that were labeled in each of these 91 areas. The count of labeled neurons was then adjusted by subtracting the number of neurons that were native to the site of tracer injection. The outcome of this process was a 29×91 matrix, which represents the connection weights extending from each injection site to other regions of the brain. Here, we focused on the subset of this matrix that describes the weighted and directed connections between 29 cortical areas, leading to a 29×29 SC matrix.
Mouse Connectome
The connectivity matrix for the mouse was compiled using tract-tracing data openly accessible from the Allen Institute Mouse Brain Connectivity Atlas [112]. Briefly, the process involved injecting anterograde recombinant adeno-associated viral tracers into designated areas within the right hemisphere of mouse brains. Three weeks following the injection, during which the viral tracer projections developed, the brains were extracted for reconstruction. These reconstructions were then standardized and aligned with the Allen Reference Atlas’ common coordinate framework.
Nodes in the network were identified based on a specialized parcellation derived from the Allen Developing Mouse Brain Atlas. This parcellation initially included 65 areas in each hemisphere, but 9 areas were excluded because they were not covered by any tract-tracing experiment. Consequently, the analyzed network comprised 112 regions [113]. Edges represent axonal projections between areas and were quantified as normalized connection densities, i.e., the number of connections per unit volume from a source area to a target area.
Large-scale Computational Models of the Brain Dynamics
In this work, we employed three models of increasing biological fidelity. The most abstract and simplest is the linear model, where the output of a node is a weighted sum of its inputs. The nonlinear model extends the linear model by applying a nonlinear transformation to the input (here, tanh(·)). Lastly, the neural mass model describes nodes as Stuart-Landau oscillators. Below are the details of each model.
Linear and Nonlinear Models
The linear model we used, also known as the multivariate Ornstein-Uhlenbeck process, follows the continuous dynamical system equations described in [114].
Independent Gaussian noise ξ with mean zero and unit variance was applied to each node. Here, τ denotes the relaxation time of the node, which was set to 0.02. It should be noted that, in practice, the adjacency matrix is normalized to ensure that its largest eigenvalue |λmax| = 1. Consequently, the practical decay rate corresponds to the product of the relaxation time, τ, and the maximum eigenvalue, λmax. σin denotes the strength of the noise and was set to 0.05 in all experiments. Given the empirical functional connectivity matrices, we optimized the coupling parameter, g, to maximize the correlation between empirical and simulated FCs; with our modeling setting, g = 0.74 yielded the best fit, with a correlation of 0.23. The nonlinear model modifies the linear one simply by passing the weighted input sum through the nonlinear transformation tanh(·) and by using a larger input noise of 2 instead of 0.05, so that the dynamics move beyond the linear part of the transfer function.
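As a rough illustration of these two models (not the authors' implementation; the Euler-Maruyama step and exact parameterization are our own choices based on the description above), the following sketch integrates the linear model and its tanh variant:

```python
import numpy as np

def simulate(W, g=0.74, tau=0.02, sigma=0.05, dt=1e-3, T=10.0,
             nonlinear=False, seed=0):
    """Euler-Maruyama sketch of the (non)linear network model described above.
    Illustrative only: the parameterization mirrors the text, not the code
    released with the paper. For the nonlinear variant, pass sigma=2.0."""
    rng = np.random.default_rng(seed)
    # normalize the adjacency matrix so that its largest eigenvalue is 1
    W = W / np.max(np.abs(np.linalg.eigvals(W)))
    n, steps = W.shape[0], int(T / dt)
    x = np.zeros(n)
    traj = np.empty((steps, n))
    for t in range(steps):
        inp = g * W @ x
        if nonlinear:
            inp = np.tanh(inp)              # nonlinear transfer of the input sum
        drift = (-x + inp) / tau            # relaxation toward the (transformed) input
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        traj[t] = x
    return traj
```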
Neural mass model
Further, we explored SC networks governed by Hopf/Stuart-Landau equations [49]. This model, recognized as the canonical approach for examining the shift from noisy to oscillatory dynamics, elucidates the behavior of a nonlinear oscillating system near the Hopf bifurcation. In essence, the dynamics of each network node are captured by the following complex equation (written here in its standard normal form):
dzi/dt = zi(ai + jωi − |zi|²) + ξi(t)
where zi = xi + jyi is a complex representation of the node state, and ξi(t) represents Gaussian noise with standard deviation σin = 0.05. This system undergoes a supercritical bifurcation at ai = 0, where it shifts from a stable fixed point (i.e., zi = 0) to a limit-cycle oscillation with frequency fi = ωi/2π.
By decomposing the complex state into its Cartesian components and incorporating the influence of other nodes in the input of each node, we derived a set of coupled equations (Eq. 4) to govern the entire brain dynamics. This formulation enables the representation of the influence of nodes on each other’s temporal state through a diffusive interaction, where g serves as a global coupling factor.
As with the other two models, the global dynamics of the network depend on the model's parameters. Here, we tuned the global coupling, g, and the bifurcation parameter, ai, to match the empirical FC; setting g = 5.6 and ai = 0.15 for all nodes yields a correlation of 0.35. Other parameters, as detailed in the Neurolib library, remain unaltered as follows. The model features a diffusive coupling type, a signal velocity of 20.0 m/s, a global coupling strength set at 5.8, a 5.0 ms Ornstein-Uhlenbeck timescale, an Ornstein-Uhlenbeck noise intensity fixed at 0.05 mV/ms/sqrt(ms), a mean value of the Ornstein-Uhlenbeck process maintained at 0.0 mV/ms, a Hopf bifurcation parameter set to 0.15, and an oscillator frequency of 32 Hz. We collected the state variable, x, representing the signals of the individual brain regions, for subsequent analyses.
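For illustration, a simplified numpy sketch of diffusively coupled Stuart-Landau oscillators is given below. It omits conduction delays and uses a single frequency for all nodes, so it is a stand-in for, not a reproduction of, the Neurolib model used here:

```python
import numpy as np

def simulate_hopf(W, g=5.6, a=0.15, f=32.0, sigma=0.05, dt=1e-4, T=5.0, seed=0):
    """Euler-Maruyama sketch of coupled Stuart-Landau (Hopf) oscillators with
    diffusive coupling. W is assumed to be scaled such that the coupled
    system remains stable; delays and regional heterogeneity are omitted."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    omega = 2 * np.pi * f
    x = 0.1 * rng.standard_normal(n)
    y = 0.1 * rng.standard_normal(n)
    steps = int(T / dt)
    traj = np.empty((steps, n))
    for t in range(steps):
        # diffusive coupling: each node is pulled toward its neighbors' states
        cx = g * (W @ x - W.sum(axis=1) * x)
        cy = g * (W @ y - W.sum(axis=1) * y)
        r2 = x**2 + y**2
        dx = (a - r2) * x - omega * y + cx
        dy = (a - r2) * y + omega * x + cy
        x = x + dx * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        y = y + dy * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        traj[t] = x                          # record the real part, as in the text
    return traj
```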
Game-theoretical Framework
MSA is built upon Shapley values, which quantify a player's fair share of a collectively generated outcome (see Figure 2). Generally, Shapley values are calculated by adding a player to all possible coalitions and observing the value this player adds. Formally, the contribution of player i to a coalition S is given by:
Δi(S) = v(S ∪ {i}) − v(S)
where v(S) represents the value of coalition S. The Shapley value γi is the average of these contributions across all permutations R of the player set:
γi = (1/|ℛ|) ΣR∈ℛ [v(Si(R) ∪ {i}) − v(Si(R))]
where ℛ is the set of all permutations and Si(R) is the coalition of players preceding i in the ordering R. However, due to the computational complexity of calculating exact Shapley values for large sets, MSA relies on an unbiased estimator that samples permutations. We employed a sample size m ≪ N!, generating m × N unique permutations for every target region, where N is the number of source regions.
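The estimator can be summarized with the following sketch, where value_fn is a placeholder for the game (e.g., the target's simulated activity when only the listed sources are left unlesioned); this is a compact restatement of the idea, not the MSApy implementation:

```python
import numpy as np

def estimate_shapley(sources, value_fn, n_permutations=1000, seed=0):
    """Monte-Carlo estimate of Shapley values by permutation sampling.
    `value_fn(intact_set)` returns the outcome when only the nodes in
    `intact_set` are left unlesioned."""
    rng = np.random.default_rng(seed)
    sources = list(sources)
    shapley = {s: 0.0 for s in sources}
    for _ in range(n_permutations):
        order = rng.permutation(sources)
        coalition = set()
        v_prev = value_fn(frozenset(coalition))
        for player in order:
            coalition.add(player)
            v_new = value_fn(frozenset(coalition))
            shapley[player] += v_new - v_prev    # marginal contribution of `player`
            v_prev = v_new
    return {s: val / n_permutations for s, val in shapley.items()}
```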
In this work, we iterated this process over every node (the target node) by systematically lesioning the other nodes (source nodes) and tracking the time-resolved difference between the activity with and without the sources lesioned. Lesioning was modeled by setting the incoming and outgoing connections of the sources to zero. For every source node, we sampled 1,000 permutations. We confirmed that the algorithm had converged by comparing it against a larger sample size of 10,000, which resulted in a correlation coefficient of 1.0 (Supplementary Figure 7). We also ran the analysis for 10 repetitions and averaged the resulting influence matrices. The final matrix has a shape of (number of regions × number of regions × simulation time), which we reduced to a two-dimensional matrix (number of regions × number of regions) by taking the variance of each influence profile. Altogether, each of the experiments on the human connectome resulted in roughly 480 million in-silico lesions (219 targets × 219 sources × 1,000 lesion combinations per source for each target × 10 trials). Due to the prohibitive computational cost of the neural mass model, we ran that analysis only once instead of 10 repetitions, thus 48 million lesions. Together with all control analyses, the total number of in-silico lesions for the human connectome amounts to 7,200 million:
480m for the linear model, 480m for the nonlinear model, 48m for the Hopf model, 480m for the topology-shuffled null model, 480m for the weight-shuffled null model, 4,800m for the case where we used 10,000 lesion combinations per source for each target, and 480m for a test case where the coupling was set to zero. All experiments were conducted on a high-performance computing facility provided by the Institute of Computational Neuroscience, University Hospital of Hamburg.
Communication Models and Measures
In this section, we introduce the communication models and measures employed to analyze the dynamics of information flow within the studied network. The corresponding formulas and procedures for computation are presented in Table 1.
The structural connectivity matrix W ∈ ℝ^(N×N) denotes the strength of pairwise connections between N brain regions. We define the connection length matrix L = 1/W, where Lij is the travel cost between regions i and j. This conversion from connection weights to lengths is required for network communication models that optimize for minimal transmission cost of signals, e.g., shortest path efficiency.
Shortest Path Efficiency
Shortest Path Efficiency measures the effectiveness of communication along the most direct route between two nodes in a network. Here, the Floyd-Warshall algorithm was employed to determine the sequence of regions Ωij = {i,u,…,v,j} that minimizes the total transmission cost for signals traveling between regions i and j. This cost is defined as the sum Liu + … + Lvj. Subsequently, the Shortest Path Efficiency (SPE) between two regions is quantified as the reciprocal of this minimum transmission cost, as detailed in [10].
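A minimal sketch of this computation, assuming a weighted connectivity matrix W with zeros marking absent connections, could look as follows:

```python
import numpy as np
from scipy.sparse.csgraph import floyd_warshall

def shortest_path_efficiency(W):
    """Shortest Path Efficiency: the reciprocal of the minimum transmission
    cost between every pair of regions, using connection lengths L = 1/W."""
    L = np.where(W > 0, 1.0 / W, np.inf)    # inf marks absent connections
    D = floyd_warshall(L, directed=True)    # minimum total cost for every pair
    with np.errstate(divide="ignore"):
        spe = 1.0 / D
    np.fill_diagonal(spe, 0.0)
    return spe
```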
Navigation Efficiency
Navigation within the network employs a greedy protocol that assumes signal transmission minimizes the inter-regional Euclidean distance. The process involves iterative progression from a source region i to a target region j. At each step, the neighbor spatially closest to j is chosen as the next node in the path [115]. This sequence continues until the target is reached, marking successful navigation, or until a previously visited node is encountered, indicating a navigation failure. The cumulative length of a successful path is Λij = Liu + … + Lvj, where Ωij = {i,u,…,v,j} represents the sequence of nodes traversed. If navigation fails, Λij is set to infinity. Navigation efficiency is thus defined as the reciprocal of the traversed path length.
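A sketch of the greedy protocol, assuming Cartesian region coordinates coords and a weighted connectivity matrix W (the function and argument names are ours), is given below:

```python
import numpy as np

def navigation_efficiency(W, coords):
    """Greedy navigation: from each source, repeatedly hop to the neighbor
    closest (in Euclidean space) to the target; efficiency is the reciprocal
    of the accumulated connection length, or 0 if navigation fails."""
    n = W.shape[0]
    L = np.where(W > 0, 1.0 / W, np.inf)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    eff = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            cur, visited, cost = i, {i}, 0.0
            while cur != j:
                neighbors = np.where(W[cur] > 0)[0]
                if neighbors.size == 0:
                    cost = np.inf                   # dead end: navigation fails
                    break
                nxt = neighbors[np.argmin(dist[neighbors, j])]
                if nxt in visited:                  # revisiting a node: failure
                    cost = np.inf
                    break
                cost += L[cur, nxt]
                visited.add(nxt)
                cur = nxt
            eff[i, j] = 0.0 if np.isinf(cost) else 1.0 / cost
    return eff
```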
Diffusion Efficiency
The Diffusion Efficiency (DE) model characterizes signaling through the lens of random walks. The computation of DE involves the transition matrix, P, of a Markov chain process unfolding on the connection weight matrix, W. Specifically, it considers the probability, pij, that a simple random walker at node i will advance to node j, and the mean first passage time, tij, which quantifies the expected number of intermediate regions visited in a random walk from i to j. For the computation procedure, please refer to Table 1; for a more in-depth theoretical treatment, consult references [116,117].
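As an illustration, mean first passage times can be obtained via the Kemeny-Snell fundamental matrix, one standard formulation for ergodic chains. The sketch below assumes a connected weighted network and is not necessarily the exact routine used here:

```python
import numpy as np

def diffusion_efficiency(W):
    """Diffusion efficiency: inverse of the mean first passage time (MFPT) of
    an unbiased random walker, with MFPTs computed from the Kemeny-Snell
    fundamental matrix of the Markov chain defined by W."""
    P = W / W.sum(axis=1, keepdims=True)               # transition probabilities
    n = P.shape[0]
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
    pi = pi / pi.sum()                                  # stationary distribution
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    T = (np.diag(Z)[None, :] - Z) / pi[None, :]         # MFPT matrix t_ij
    with np.errstate(divide="ignore"):
        de = 1.0 / T
    np.fill_diagonal(de, 0.0)
    return de
```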
Search Information
Search information (SI) is a metric that evaluates the amount of information required to bias a random walker onto the shortest path, thereby reflecting the accessibility of efficient communication routes under a diffusive model. Computing SI involves finding the above-mentioned shortest path, Ωij, between two regions and the probability, Πij, that a random walker will traverse from region i to region j along this path by chance. This probability is computed using the transition matrix, P, of a Markov chain process unfolding on the connection weight matrix. Table 1 summarizes the computation procedure.
Communicability and Scaled Communicability
The communicability between nodes i and j is expressed as the weighted sum of all walks connecting them, where the contribution of a walk is discounted by the inverse factorial of its length, i.e., the number of connections traversed. In practice, before computing communicability, nonbinary connection weight matrices are commonly normalized. This normalization step is employed to diminish the impact of highly influential nodes with substantial strength. Refer to Table 1 for the computation procedure and to references [25,91] for additional theoretical details. We also report the scaled communicability measure, which introduces a modulating parameter to control the decay rate.
Linear Attenuation Model
The linear attenuation model (LAM) follows the same line of reasoning as communicability: information propagation can be represented as a weighted sum of all walks in the network. The difference lies in how walk lengths are discounted. In contrast to communicability, where the sum of walks is estimated through exponentiation of the adjacency matrix, Katz introduced an attenuation factor, α, that reduces the influence monotonically with every step. Katz proposed a closed-form expression, given by (𝕀 − αW)⁻¹ [118]. It is important to note that the convergence condition in this context is α < 1/λmax, where λmax represents the spectral radius of the adjacency matrix, W. Prior to computing the LAM, the adjacency matrix undergoes a normalization process similar to the one applied in the computation of communicability.
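A direct translation of this closed form, without the additional weight normalization, might look as follows (alpha is supplied by the user and must respect the convergence condition):

```python
import numpy as np

def linear_attenuation_model(W, alpha=0.5):
    """Linear attenuation (Katz) model: sum over walks of all lengths with a
    monotonic attenuation factor alpha, via the closed form (I - alpha*W)^-1."""
    lam_max = np.max(np.abs(np.linalg.eigvals(W)))
    if alpha >= 1.0 / lam_max:
        raise ValueError("alpha must be smaller than 1/lambda_max for convergence")
    return np.linalg.inv(np.eye(W.shape[0]) - alpha * W)
```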
Spatial Autoregressive Model
Moreover, we incorporated the spatial autoregressive (SAR) model into our analysis. This model, characterized as a generic representation of diffusive processes on networks, is intricately linked to the distribution of paths within the network [43,44,55,119]. In this model, the fluctuating signals at each node are interconnected through a system of structural equations, expressing each signal as a linear function of the others. This interconnection is governed by a global coupling strength, α. Importantly, when the network is driven by input noise, the SAR model predicts that the signals follow a multivariate normal distribution with covariance matrix (𝕀 − αŴ)⁻¹(𝕀 − αŴ)⁻ᵀ. The term Ŵ denotes the normalized adjacency matrix, where each element ŵij is obtained by dividing the original connection strength, wij, by the sum of all connection strengths in the corresponding row of W. 𝕀 is the identity matrix, and ⁻ᵀ denotes the transpose of the inverse.
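The predicted covariance can be computed in a few lines; the sketch below assumes the row-sum normalization described above:

```python
import numpy as np

def sar_covariance(W, alpha=0.5):
    """Predicted covariance of the spatial autoregressive (SAR) model:
    (I - alpha*W_hat)^-1 (I - alpha*W_hat)^-T, with W_hat row-normalized."""
    W_hat = W / W.sum(axis=1, keepdims=True)    # row-normalized adjacency matrix
    A = np.linalg.inv(np.eye(W.shape[0]) - alpha * W_hat)
    return A @ A.T
```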
Null and Simulated Network Models
We employed two simulated weighted networks and two null models as described below:
Erdős-Rényi model
An Erdős-Rényi random graph, denoted as G(N, p), was generated with 100 nodes, where p = 0.35 represents the probability that any two distinct nodes are connected by an edge. The graph was constructed by iterating over all possible pairs of nodes and connecting them with probability p. This resulted in a binary adjacency matrix W, where Wij is 1 if nodes i and j are connected and 0 otherwise. After generating the binary structure of the graph, weights were assigned to the edges by sampling from a log-normal distribution: the weight Wij of each edge was drawn from ln𝒩(μ, σ²), with μ = 1 and σ = 0.5. The log-normal distribution was chosen for its multiplicative character, appropriate for modeling local dynamical systems coupled in a network.
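A sketch of this construction (assuming an undirected graph without self-loops) is given below:

```python
import numpy as np

def weighted_erdos_renyi(n=100, p=0.35, mu=1.0, sigma=0.5, seed=0):
    """Erdős-Rényi graph with log-normally distributed edge weights, mirroring
    the construction described above (undirected, no self-loops assumed)."""
    rng = np.random.default_rng(seed)
    A = np.triu((rng.random((n, n)) < p).astype(float), k=1)   # upper-triangle edges
    weights = rng.lognormal(mean=mu, sigma=sigma, size=(n, n))
    W = A * weights
    return W + W.T                          # symmetrize to obtain an undirected network
```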
Barabási-Albert model
The Barabási-Albert model was used to generate synthetic scale-free networks through a preferential attachment mechanism. As with the Erdős-Rényi model, we generated a network with 100 nodes. Each new node added to the network creates m = 20 edges to existing nodes, with a preference for nodes that already have a higher degree, thus simulating the "rich get richer" phenomenon and producing hubs in the network. In this model, the probability that a new node connects to node i depends on the degree ki according to the rule:
p(i) = ki / Σj kj
where Σj kj is the sum of the degrees of all existing nodes at the time of attachment. Similar to the Erdős-Rényi model, the edges of the resulting binary network were weighted by sampling from a log-normal distribution with parameters μ = 1 and σ = 0.5. Moreover, to model weakly connected hubs, all hubs in the network were fully connected with weights drawn from a separate log-normal distribution with both mean and standard deviation equal to 0.1. This simulates the scenario where hubs have additional connections among themselves with relatively smaller weights, representing the observed weaker links of the rich-club in the brain.
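The construction can be sketched as follows; note that the number of nodes treated as hubs (n_hubs) is our own illustrative choice, as the exact number is not restated here:

```python
import numpy as np
import networkx as nx

def weighted_ba_with_weak_hubs(n=100, m=20, mu=1.0, sigma=0.5,
                               hub_mu=0.1, hub_sigma=0.1, n_hubs=10, seed=0):
    """Barabási-Albert network with log-normal edge weights whose highest-degree
    nodes ('hubs') are additionally fully connected by weak links."""
    rng = np.random.default_rng(seed)
    G = nx.barabasi_albert_graph(n, m, seed=seed)
    W = nx.to_numpy_array(G)
    weights = rng.lognormal(mean=mu, sigma=sigma, size=(n, n))
    W = np.triu(W, 1) * np.triu(weights, 1)
    W = W + W.T
    # identify hubs by degree and connect them all-to-all with weak weights
    degrees = (W > 0).sum(axis=1)
    hubs = np.argsort(degrees)[-n_hubs:]    # n_hubs is an illustrative assumption
    for a in hubs:
        for b in hubs:
            if a != b:
                W[a, b] = W[b, a] = rng.lognormal(mean=hub_mu, sigma=hub_sigma)
    return W
```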
Topology-conserving null model
To investigate the impact of connection strength on network dynamics while maintaining the underlying topology, a weight shuffling procedure was implemented. The adjacency matrix was kept unaltered to preserve the original connectivity pattern, while the weights were redistributed to randomize the strength of the connections. This process was defined as follows:
Given a weighted adjacency matrix, W, with fixed topology, the set of non-zero weights corresponding to the edges in W was permuted. The permutation was performed such that each edge received a new weight from the set, ensuring that the sum of the weights and the overall weight distribution remained unchanged. This shuffling procedure was iterated 10 times, yielding 10 distinct weighted networks with identical topologies but randomized weight configurations.
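For an undirected network, this procedure amounts to permuting the upper-triangular weights and mirroring them, as sketched below:

```python
import numpy as np

def shuffle_weights(W, seed=0):
    """Topology-conserving null model: keep the binary topology of W and
    permute the existing weights among the existing edges (undirected case)."""
    rng = np.random.default_rng(seed)
    W_null = np.zeros_like(W)
    iu = np.triu_indices_from(W, k=1)
    mask = W[iu] > 0                         # positions of the existing edges
    new_weights = np.zeros_like(W[iu])
    new_weights[mask] = rng.permutation(W[iu][mask])
    W_null[iu] = new_weights
    return W_null + W_null.T
```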
Weight-conserving null model
To study the impact of topology, we used a degree-, weight-, and strength-preserving model introduced by [120], in which the connectivity was shuffled while the total degree and strength of the nodes were preserved. We chose the default parameters provided by the BCTpy toolbox; nevertheless, the correlation between the strength sequences of the pre- and post-rewiring networks was r = 0.97, implying a robust rewiring. As per Rubinov and Sporns [120], the rewiring algorithm consists of two steps (a usage sketch follows the list below):
1. The network is randomized while maintaining each node's degree using a connection-switching algorithm, here Maslov-Sneppen [121]. In this process, connections are adjusted by switching pairs of edges in a way that preserves the degrees of the involved nodes.
2. Original network weights are then reassigned to the randomized network. This reassignment is done by first ranking all weights by magnitude and then associating them with the network's connections to approximate the original distribution of weights. During this process, weights are iteratively matched and re-ranked until all connections in the new network closely resemble the original positive and negative strengths.
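A hedged usage sketch is given below; it assumes that BCTpy exposes null_model_und_sign with the reference-toolbox interface (returning the rewired matrix together with strength-sequence correlations) and that the default parameters correspond to those used here:

```python
import bct

def weight_conserving_nulls(W, n_nulls=10):
    """Sketch of the weight-conserving null model of Rubinov & Sporns, using
    the BCTpy port of the reference routine with its default parameters."""
    nulls, strength_r = [], []
    for _ in range(n_nulls):
        W0, r = bct.null_model_und_sign(W)   # degree-, weight-, strength-preserving rewiring
        nulls.append(W0)
        strength_r.append(r)                 # agreement of pre/post strength sequences
    return nulls, strength_r
```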
Multivariate Statistical Model
We conducted two sets of Lasso-regularized multivariate regression models (Figure 6B and C). With one set, we systematically explored the dynamics of feature importance in predicting optimal signal propagation; with the other, we investigated the number of steps needed to predict optimal signal propagation.
For the first set, we constructed a feature matrix X encompassing all communication measures, topological and geodesic distance measures, FC, structural weights, and fiber length between regions.
A range of 50 α values (see Table 1) from 0 to 1, delineating the degree of spatial influence, was explored. For each α, the SAR and LAM models were computed and subsequently integrated into the feature matrix X. A total of 25 repetition runs were performed for each α, wherein the dataset was randomly partitioned into training and testing subsets with a 0.7-0.3 proportion using a 10-fold cross-validation approach. The training set was standard-scaled before model fitting. The statistical model was assessed through the coefficient of determination, R². The absolute values of the regression coefficients obtained from the model were normalized to sum to 100, reflecting the relative contribution of each feature to the model. These contributions were recorded for each α and averaged over all trials to ascertain the consistent predictors.
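A simplified sketch of this pipeline for a single α, using scikit-learn, is shown below; the regularization strength and the function name are our own choices and not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler

def feature_importance(X, y, alpha_lasso=0.01, n_splits=10, seed=0):
    """Lasso-regularized prediction of optimal-influence values `y` from the
    stacked feature matrix `X` (communication measures, distances, FC, ...),
    returning the mean R^2 and feature contributions normalized to sum to 100."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    r2_scores, importances = [], []
    for train_idx, test_idx in kf.split(X):
        scaler = StandardScaler().fit(X[train_idx])
        model = Lasso(alpha=alpha_lasso).fit(scaler.transform(X[train_idx]), y[train_idx])
        r2_scores.append(model.score(scaler.transform(X[test_idx]), y[test_idx]))
        coefs = np.abs(model.coef_)
        importances.append(100 * coefs / (coefs.sum() + 1e-12))
    return np.mean(r2_scores), np.mean(importances, axis=0)
```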
For the second set, a range of 15 steps was generated by first raising the structural connectivity matrix to the powers 0 to 15 and then discounting the longer paths according to the optimal discount factor of the LAM model (α = 0.64). For each step, 100 repetitions were performed following the same cross-validation, evaluation, and feature importance ranking introduced above.
Graph-theoretical Measures
Several graph-theoretical measures were employed in this study; they are summarized in Table 2 and briefly explained here.
Closeness Centrality
Closeness centrality reflects the inverse of the average shortest-path length from a given node to all other nodes in the network. It quantifies how "close" a node is to all other nodes, which can indicate the node's efficiency in spreading information through the network. As with the SPE, the shortest-path distance was calculated using the Floyd-Warshall algorithm.
Eigenvector Centrality
Eigenvector centrality extends the concept of centrality by considering not only the number of connections a node has but also the centrality of its neighbors [122,123]. It assigns relative scores to all nodes in the network based on the principle that connections to high-scoring nodes contribute more to a node's score than equal connections to low-scoring nodes. As detailed in Table 2, this quantity for the i-th node is calculated as the i-th component of the leading eigenvector of the adjacency matrix.
Shortest path Node and Edge Betweenness Centrality
Shortest-path betweenness centrality measures the extent to which a node lies on the shortest paths between other nodes in the network. It captures the influence of a node over the flow of information in the network by identifying nodes that frequently act as bridges along the shortest paths between other nodes [124]. Analogously, the shortest-path edge betweenness centrality quantifies the number of times an edge acts as a bridge along the shortest path between two nodes, reflecting the importance of that edge in facilitating communication within the network. For the computation of these metrics, please refer to Table 2.
Random-walk Node and Edge Betweenness Centrality
Random walk betweenness centrality, also known as current flow betweenness centrality, is also a measure that quantifies the node’s role in facilitating information flow in the network [125]. However, unlike shortest path betweenness centrality, which only considers the shortest paths, random walk centrality is based on the probability that a random walk starting at a source node will pass through a given node before reaching the destination node. For detailed information on the computation of Random-walk Node and Edge Betweenness Centrality based on the current flow notion, refer to Table 2.
Average and Modal Controllability
Average controllability measures the ability of a node to steer the system into many easy-to-reach states, given a discrete dynamical system. It is particularly useful for understanding the ease with which any state of the system can be reached from a given initial state. Similarly, modal controllability represents the ability of a node to steer the system into distant, hard-to-reach states. In computing the controllability measures, we used the Schur decomposition to find the unitary matrix, U, and the upper triangular matrix, T, that express the adjacency matrix, W. The computation procedure is summarized in Table 2.
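For illustration, both measures can be computed from the scaled adjacency matrix. The sketch below uses the discrete Lyapunov equation and an eigendecomposition rather than the Schur-based routine described above; for a symmetric adjacency matrix it computes the same standard quantities (see nctpy for the reference implementation used in this work):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def controllability(W):
    """Average and modal controllability of each node, following the standard
    discrete linear-systems formulation (a sketch, not the nctpy routine)."""
    # scale W so the system x(t+1) = A x(t) + B u(t) is Schur-stable
    A = W / (1.0 + np.max(np.abs(np.linalg.eigvals(W))))
    n = A.shape[0]
    avg, modal = np.empty(n), np.empty(n)
    evals, evecs = np.linalg.eig(A)
    for i in range(n):
        B = np.zeros((n, 1)); B[i] = 1.0
        # average controllability: trace of the infinite-horizon controllability Gramian
        gram = solve_discrete_lyapunov(A, B @ B.T)
        avg[i] = np.trace(gram)
        # modal controllability: ability to push the system into hard-to-reach modes
        modal[i] = np.sum((1.0 - np.abs(evals) ** 2) * np.abs(evecs[i, :]) ** 2)
    return avg, modal
```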
Acknowledgements
The funding is gratefully acknowledged: K.F: German Research Foundation (DFG)-SFB 936-178316478-A1; TRR169-A2; SPP 2041/GO 2888/2-2. F.H: DFG TRR169-A2. SD: SFB 936-178316478-A1. A.M: SFB 936-178316478-A1. B.M: the Natural Sciences and Engineering Research Council of Canada (NSERC Discovery Grant RGPIN #017-04265). The Brain Canada Future Leaders Fund and the Canadian Institutes of Health Research (CIHR). C.S: N/A; G.Z: N/A; C.H: SFB 936-178316478-A1; TRR169-A2; SFB 1461/A4; SPP 2041/HI 1286/7-1, the Human Brain Project, EU (SGA2, SGA3).
Supplementary information
References
- 1.Organization, development and function of complex brain networksTrends Cogn Sci 8:418–425
- 2.Complex brain networks: graph theoretical analysis of structural and functional systemsNat Rev Neurosci 10:186–198
- 3.Generative models of the human connectomeNeuroimage 124:1054–1064
- 4.Nonoptimal component placement, but short processing paths, due to long-distance projections in neural systemsPLoS Comput Biol 2
- 5.Navigable networks as Nash equilibria of navigation gamesNat Commun 6
- 6.The Influence of Wiring Economy on Nervous System EvolutionCurr Biol 26:R1101–R1108
- 7.The economy of brain network organizationNat Rev Neurosci 13:336–349
- 8.Small-World Brain Networks RevisitedNeuroscientist 23:499–516
- 9.Trade-off between multiple constraints enables simultaneous formation of modules and hubs in neural systemsPLoS Comput Biol 9
- 10.Efficient behavior of small-world networksPhys Rev Lett 87
- 11.Communication dynamics in complex brain networksNat Rev Neurosci 19:17–33
- 12.Brain network communication: concepts, models and applicationsNat Rev Neurosci https://doi.org/10.1038/s41583-023-00718-5
- 13.Linking Structure and Function in Macroscale Brain NetworksTrends Cogn Sci 24:302–315
- 14.How structure sculpts function: Unveiling the contribution of anatomical connectivity to the brain’s spontaneous correlation structureChaos 27
- 15.Dynamic models of large-scale brain activityNat Neurosci 20:340–352
- 16.Principles of large-scale neural interactionsNeuron 111:987–1002
- 17.Rhythms for Cognition: Communication through CoherenceNeuron 88:220–235
- 18.Optimal information transfer in the cortex through synchronizationPLoS Comput Biol 6https://doi.org/10.1371/journal.pcbi.1000934
- 19.A neuronal least-action principle for real-time learning in cortical circuitsbioRxiv https://doi.org/10.1101/2023.03.25.534198
- 20.Paying the brain’s energy billCurr Opin Neurobiol 78
- 21.The cost of cortical computationCurr Biol 13:493–497
- 22.Game Theory: A Very Short IntroductionOUP Oxford
- 23.Game Theory and Animal BehaviorUSA: Oxford University Press
- 24.An Introduction to Behavioural EcologyJohn Wiley & Sons
- 25.Communicability in complex networksPhysical Review E https://doi.org/10.1103/physreve.77.036111
- 26.Handbook of the Shapley ValueCRC Press
- 27.A value for n-person gamesContributions to the theory of games Princeton, NJ: Princeton University Press :307–317
- 28.Causal localization of neural function: the Shapley value methodNeurocomputing 58:215–222
- 29.Fair attribution of functional contribution in artificial and biological networksNeural Comput 16:1887–1915
- 30.MSA: A compact Python package for Multiperturbation Shapley value Analysishttps://doi.org/10.5281/zenodo.5636435
- 31.Downstream network transformations dissociate neural activity from causal functional contributionsSci Rep 14
- 32.Bargaining Foundations of Shapley ValueEconometrica 57:81–95
- 33.Bidding for the Surplus : A Non-cooperative Approach to the Shapley ValueJ Econ Theory 100:274–294
- 34.Structural and functional connectome from 70 young healthy adults [data set]Zenodo
- 35.Consensus Structural and functional connectome from 70 young healthy adultshttps://doi.org/10.5281/zenodo.3067849
- 36.On Global Brain Reconfiguration after Local ManipulationsbioRxiv https://doi.org/10.1101/2023.09.08.556815
- 37.On imputing function to structure from the behavioural effects of brain lesionsPhilos Trans R Soc Lond B Biol Sci 355:147–161
- 38.The Rhesus Monkey Connectome Predicts Disrupted Functional Networks Resulting from Pharmacogenetic Inactivation of the AmygdalaNeuron 91:453–466
- 39.Network communication models improve the behavioral and functional predictive utility of the human structural connectomeNetw Neurosci 4:980–1006
- 40.Communication dynamics in the human connectome shape the cortex-wide propagation of direct electrical stimulationbioRxiv https://doi.org/10.1101/2022.07.05.498875
- 41.Network communication models narrow the gap between the modular organization of structural and functional brain networksbioRxiv https://doi.org/10.1101/2022.02.18.480871
- 42.Optimally controlling the human connectome: the role of network topologySci Rep 6
- 43.Functional complexity emerging from anatomical constraints in the brain: the significance of network modularity and richclubsSci Rep 6
- 44.Relating structure and function in the human brain: relative contributions of anatomy, stationary dynamics, and non-stationaritiesPLoS Comput Biol 10
- 45.Predicting functional connectivity from structural connectivity via computational models using MRI: an extensive comparison studyNeuroimage 111:65–75
- 46.Network diffusion accurately models the relationship between structural and functional brain connectivity networksNeuroimage 90:335–347
- 47.Macroscopic resting-state brain dynamics are best described by linear modelsNat Biomed Eng https://doi.org/10.1038/s41551-023-01117-y
- 48.Structural basis of envelope and phase intrinsic coupling modes in the cerebral cortexNeuroimage 276
- 49.The dynamics of resting fluctuations in the brain: metastability and its dynamical cortical coreSci Rep 7
- 50.Rich-Club Organization of the Human ConnectomeJournal of Neuroscience 31:15775–15786
- 51.Cortical hubs form a module for multisensory integration on top of the hierarchy of cortical networksFront Neuroinform 4
- 52.Exploring brain function from anatomical connectivityFront Neurosci 5
- 53.Rich club organization of macaque cerebral cortex and its role in network communicationPLoS One 7
- 54.High-cost, high-capacity backbone for global brain communicationProc Natl Acad Sci U S A 109:11372–11377
- 55.Specificity and robustness of long-distance connections in weighted, interareal connectomesProc Natl Acad Sci U S A 115:E4880–E4889
- 56.The strength of weak connections in the macaque cortico-cortical networkBrain Struct Funct 220:2939–2951
- 57.Comprehensive computational modelling of the development of mammalian cortical connectivity underlying an architectonic type principlePLoS Comput Biol 14
- 58.Heavy-tailed neuronal connectivity arises from Hebbian self-organizationNat Phys https://doi.org/10.1038/s41567-023-02332-9
- 59.The role of long-range connections on the specificity of the macaque interareal cortical networkProc Natl Acad Sci U S A 110:5187–5192
- 60.A predictive network model of cerebral cortical connectivity based on a distance ruleNeuron 80:184–197
- 61.Whole-brain mapping of effective connectivity by fMRI with cortex-wide patterned optogeneticsNeuron https://doi.org/10.1016/j.neuron.2023.03.002
- 62.Aberrant Frontal and Temporal Complex Network Structure in Schizophrenia: A Graph Theoretical AnalysisJ Neurosci 30:15915–15926
- 63.Controllability of structural brain networksNat Commun 6
- 64.Why are there six degrees of separation in a social network?arXiv [physics.soc-ph]
- 65.An experimental study of the small world problemSoc Networks
- 66.Cost of Complexity in Implementing the Shapley Value by Choosing a Proposer Through a Bidding ProcedureInstitute of Social and Economic Research, Osaka University
- 67.The Shapley value: essays in honor of Lloyd SShapley Cambridge University Press
- 68.A unified approach to interpreting model predictionsAdv Neural Inf Process Syst :4765–4774
- 69.Algorithms to estimate Shapley value feature attributionsarXiv [cs.LG]
- 70.INFORMATION FLOWS IN CAUSAL NETWORKSAdvances in Complex Systems :17–41https://doi.org/10.1142/s0219525908001465
- 71.Inferring network properties from time series using transfer entropy and mutual information: Validation of multivariate versus bivariate approachesNetwork Neuroscience :1–32
- 72.Partial Information Decomposition Reveals the Structure of Neural RepresentationsarXiv [cs.IT]
- 73.Bits and pieces: understanding information decomposition from part-whole relationships and formal logicProceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 477
- 74.Information decomposition and the informational architecture of the brainTrends Cogn Sci https://doi.org/10.1016/j.tics.2023.11.005
- 75.Information Decomposition based on Cooperative Game TheoryarXiv [cs.IT]
- 76.Causal Influences Decouple From Their Underlying Network Structure In Echo State Networks2022 International Joint Conference on Neural Networks (IJCNN) :1–8
- 77.Systematic perturbation of an artificial neural network: A step towards quantifying causal contributions in the brainPLoS Comput Biol 18
- 78.Ground-truth validation of uni- and multivariate lesion inference approachesbioRxiv https://doi.org/10.1101/2023.03.28.534534
- 79.Game-theoretical mapping of fundamental brain functions based on lesion deficits in acute strokeBrain Commun 3
- 80.Making Things Happen: A Theory of Causal ExplanationUSA: Oxford University Press
- 81.The return of the lesion for localization and therapyBrain https://doi.org/10.1093/brain/awad123
- 82.Human Lesion Studies in the 21st CenturyNeuron 90:1151–1153
- 83.Causal mapping of human brain functionNat Rev Neurosci 23:361–375
- 84.Lesion Studies in Contemporary NeuroscienceTrends Cogn Sci :1–19
- 85.Causal Selection and the Pathway ConceptPhilos Sci 85:551–572
- 86.Inferring neural signalling directionality from undirected structural connectomesNat Commun 10
- 87.Dynamical differential covariance recovers directional network structure in multiscale neural systemsProc Natl Acad Sci U S A 119
- 88.A new approach for estimating effective connectivity from activity in neural networksbioRxiv https://doi.org/10.1101/2024.02.05.578871
- 89.Graph theory methods: applications in brain networksDialogues Clin Neurosci
- 90.Evidence for increased parallel information transmission in human brain networks compared to macaques and male miceNat Commun 14https://doi.org/10.1038/s41467-023-43971-z
- 91.An integrative dynamical perspective for graph theory and the study of complex networksarXiv [physics.soc-ph]
- 92.A synergistic core for human brain evolution and cognitionNat Neurosci 25:771–782
- 93.Non-Conservative Diffusion and its Application to Social Network AnalysisarXiv [cs.SI]
- 94.Cortical travelling waves: mechanisms and computational principlesNat Rev Neurosci 19:255–268
- 95.Intrinsic functional architecture predicts electrically evoked responses in the human brainProc Natl Acad Sci U S A 108:10308–10313
- 96.Subsets of cortico-cortical evoked potentials propagate as traveling wavesbioRxiv https://doi.org/10.1101/2023.03.27.534002
- 97.Constraints and spandrels of interareal connectomesNat Commun 7:1–11
- 98.The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programmeProceedings of the Royal Society of London Series B Biological Sciences 205:581–598
- 99.A Hierarchy of Temporal Receptive Windows in Human CortexJournal of Neuroscience 28:2539–2550
- 100.A Large-Scale Circuit Mechanism for Hierarchical Dynamical Processing in the Primate CortexNeuron 88:419–431
- 101.Whole-brain analyses indicate the impairment of posterior integration and thalamo-frontotemporal broadcasting in disorders of consciousnessHum Brain Mapp 44:4352–4371
- 102.Towards a biologically annotated brain connectomeNat Rev Neurosci https://doi.org/10.1038/s41583-023-00752-3
- 103.Dynamical consequences of regional heterogeneity in the brain’s transcriptional landscapeSci Adv 7https://doi.org/10.1126/sciadv.abf4752
- 104.Causal functional contributions and interactions in the attention network of the brain: an objective multi-perturbation analysisBrain Struct Funct 221:2553–2568
- 105.neurolib: A Simulation Framework for Whole-Brain Neural Mass ModelingCognit Comput 15:1132–1152
- 106.Complex network measures of brain connectivity: uses and interpretationsNeuroimage 52:1059–1069
- 107.nctpy: Network Control Theory for Pythonhttps://doi.org/10.5281/zenodo.7383161
- 108.Exploring network structure, dynamics, and function using NetworkXLos Alamos National Laboratory (LANL), Los Alamos, NM (United States)
- 109.Spatial Patterning of Tissue Volume Loss in Schizophrenia Reflects Brain Network ArchitectureBiol Psychiatry 87:727–735
- 110.Distance-dependent consensus thresholds for generating group-representative structural brain networksNetw Neurosci 3:475–496
- 111.Spurious but systematic correlations in functional connectivity MRI networks arise from subject motionNeuroimage 59:2142–2154
- 112.A mesoscale connectome of the mouse brainNature 508:207–214
- 113.Wiring cost and topological participation of the mouse brain connectomeProc Natl Acad Sci U S A 112:10032–10037
- 114.On how network architecture determines the dominant patterns of spontaneous neural activityPLoS One 3
- 115.Navigation of brain networksProc Natl Acad Sci U S A 115:6297–6302
- 116.Network landscape from a Brownian particle’s perspectivePhys Rev E Stat Nonlin Soft Matter Phys 67
- 117.Exploring the morphospace of communication efficiency in complex networksPLoS One 8
- 118.A new status index derived from sociometric analysisPsychometrika 18:39–43
- 119.A measure for brain complexity: relating functional segregation and integration in the nervous systemProc Natl Acad Sci U S A 91:5033–5037
- 120.Weight-conserving characterization of complex functional brain networksNeuroimage 56:2068–2079
- 121.Specificity and stability in topology of protein networksScience 296:910–913
- 122.Network centrality in the human functional connectomeCereb Cortex 22:1862–1875
- 123.Consistency and differences between centrality measures across distinct classes of networksPLoS One 14
- 124.A Set of Measures of Centrality Based on BetweennessSociometry 40:35–41
- 125.A measure of betweenness centrality based on random walksSoc Networks 27:39–54
Copyright
© 2024, Fakhar et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.