Connectome-Based Attractor Dynamics Underlie Brain Activity in Rest, Task, and Disease
Key Points:
- We present a simple yet powerful phenomenological model for large-scale brain dynamics
- The model uses a functional connectome-based Hopfield artificial neural network (fcHNN) architecture to compute recurrent “activity flow” through the network of brain regions
- fcHNN attractor dynamics accurately reconstruct several characteristics of resting state brain dynamics
- fcHNNs conceptualize both task-induced and pathological changes in brain activity as a non-linear alteration of these dynamics
- Our approach is validated using large-scale neuroimaging data from seven studies
- fcHNNs offer a simple and interpretable computational alternative to conventional descriptive analyses of brain function
Abstract
Understanding large-scale brain dynamics is a grand challenge in neuroscience. We propose functional connectome-based Hopfield Neural Networks (fcHNNs) as a model of macro-scale brain dynamics, arising from recurrent activity flow among brain regions. An fcHNN is neither optimized to mimic certain brain characteristics, nor trained to solve specific tasks; its weights are simply initialized with empirical functional connectivity values. In the fcHNN framework, brain dynamics are understood in relation to so-called attractor states, i.e. neurobiologically meaningful low-energy activity configurations. Analyses of 7 distinct datasets demonstrate that fcHNNs can accurately reconstruct and predict brain dynamics under a wide range of conditions, including resting and task states and brain disorders. By establishing a mechanistic link between connectivity and activity, fcHNNs offer a simple and interpretable computational alternative to conventional descriptive analyses of brain function. Being a generative framework, fcHNNs can yield mechanistic insights and hold potential to uncover novel treatment targets.
Introduction
Brain function is characterized by the continuous activation and deactivation of anatomically distributed neuronal populations (Buzsaki, 2006). Irrespective of the presence or absence of explicit stimuli, brain regions appear to work in concert, giving rise to rich and spatiotemporally complex fluctuations (Bassett & Sporns, 2017). These fluctuations are neither random nor stationary over time (Liu & Duyn, 2013; Zalesky et al., 2014). They are organized around large-scale gradients (Margulies et al., 2016; Huntenburg et al., 2018) and exhibit quasi-periodic properties, with a limited number of recurring patterns known as “brain states” (Greene et al., 2023; Vidaurre et al., 2017; Liu & Duyn, 2013). A wide variety of descriptive techniques have previously been employed to characterize whole-brain dynamics (Smith et al., 2012; Vidaurre et al., 2017; Liu & Duyn, 2013; Chen et al., 2018). These efforts have provided accumulating evidence not only for the existence of dynamic brain states but also for their clinical significance (Hutchison et al., 2013; Barttfeld et al., 2015; Meer et al., 2020). However, the underlying driving forces remain elusive due to the descriptive nature of such studies.
Conventional computational approaches attempt to solve this puzzle by going all the way down to the biophysical properties of single neurons, aiming to construct models of larger neural populations, or even the entire brain (Breakspear, 2017). These approaches have yielded numerous successful applications (Murray et al., 2018; Kriegeskorte & Douglas, 2018; Heinz et al., 2019). However, such models must estimate a vast number of neurobiologically motivated free parameters to fit the data, which hampers their ability to effectively bridge the gap between explanations at the level of single neurons and the complexity of behavior (Breakspear, 2017). Recent efforts using coarse-grained brain network models (Schirner et al., 2022; Schiff et al., 1994; Papadopoulos et al., 2017; Seguin et al., 2023) and linear network control theory (Chiêm et al., 2021; Scheid et al., 2021; Gu et al., 2015) have opted to trade biophysical fidelity for phenomenological validity. Such models have provided insights into key inherent characteristics of the brain as a dynamic system; for instance, the importance of stable patterns, so-called “attractor states”, in governing brain dynamics (Deco et al., 2012; Golos et al., 2015; Hansen et al., 2015). While attractor networks have become established models of micro-scale canonical brain circuits over the last four decades (Khona & Fiete, 2022), these studies highlight that attractor dynamics are essential characteristics of macro-scale brain dynamics as well. However, the standard practice among these studies is to use models that capitalize on information about the structural wiring of the brain, leading to the grand challenge of modeling the relationship between structural pathways and polysynaptic functional connectivity.
The “neuroconnectionist” approach (Doerig et al., 2023) takes another step towards trading biophysical detail for “cognitive/behavioral fidelity” (Kriegeskorte & Douglas, 2018), by using artificial neural networks (ANNs) trained to perform various tasks as brain models. However, the need to train ANNs for specific tasks inherently limits their ability to explain task-independent, spontaneous neural dynamics (Richards et al., 2019).
Here we propose a minimal phenomenological model for large-scale brain dynamics that combines the advantages of large-scale attractor network models (Golos et al., 2015), neuroconnectionism (Doerig et al., 2023), and recent advances in understanding the flow of brain activity across regions (Cole et al., 2016).
Like neuroconnectionism, we utilize an ANN as an abstract, high-level computational model of the brain. However, our model is not explicitly trained for a specific task. Instead, we set its weights empirically. Specifically, we employ a continuous-space Hopfield Neural Network (HNN) (Hopfield, 1982; Krotov, 2023), similar to the spin-glass and Hopfield-style attractor network models applied e.g. by Deco et al. (2012) or Golos et al. (2015), where the nodes of the network model represent large-scale brain areas. However, in contrast to these previous efforts, which start from the structural wiring of the brain, we initialize the edge weights of the network based on direct estimates of node-to-node information transfer, as measured with fMRI. Our decision to capitalize on a direct proxy of interregional communication, rather than structural pathways, is motivated by the “activity flow” principle (Cole et al., 2016; Ito et al., 2017), a thoroughly validated phenomenological model of the association between brain activity and functional connectivity. This allows us to circumvent the necessity of comprehensively understanding and accurately modeling structural-functional coupling in the brain. Instead, we can concentrate on the overarching dynamical properties of the system.
Based on the topology of the functional connectome, our model assigns an energy level to any arbitrary activation pattern and determines a “trajectory of least action” towards one of a finite number of attractor states that minimize this energy. In the proposed framework, macro-scale brain dynamics can be conceptualized as an intricate, high-dimensional path on the energy landscape (Figure 1C), arising from the activity flow (Cole et al., 2016) within the functional connectome and constrained by the “gravitational pull” of the attractor states of the system. The generative nature of the proposed framework offers testable predictions for the effect of various perturbations and alterations of these dynamics, from task-induced activity to changes related to brain disorders.
In the present work, we investigate how well the functional connectome is suited to be an attractor network, map the corresponding attractor states and model itinerant stochastic dynamics traversing the different basins of attraction of the system. We use a diverse set of experimental, clinical and meta-analytic studies to evaluate our model’s ability to reconstruct various characteristics of resting state brain dynamics, as well as its capacity to detect and explain changes induced by experimental conditions or alterations in brain disorders.
Results
Connectome-based Hopfield network as a model of brain dynamics
First, we investigated the functional connectome as an attractor network in a sample of n=41 healthy young participants (study 1, see Methods Table 1 for details). We estimated interregional activity flow (Cole et al., 2016; Ito et al., 2017) as the study-level average of regularized partial correlations among the resting state fMRI timeseries of m = 122 functional parcels of the BASC brain atlas (see Methods for details). We then used the standardized functional connectome as the weights of a fully connected recurrent ANN, specifically a continuous-state Hopfield network (Hopfield, 1982; Koiran, 1994), consisting of $m$ neural units, each having an activity $a_i \in [-1,1]$. Hopfield networks can be initialized by an arbitrary activation pattern (consisting of $m$ activation values) and iteratively updated (i.e. “relaxed”) until their energy converges to a local minimum, that is, to one of a finite number of attractor states (see Methods). The relaxation procedure is based on a simple rule: in each iteration, the activity of a region is constructed as the weighted average of the activities of all other regions, with weights defined by the connectivity between them. The average is then transformed by a sigmoidal activation function to keep it in the desired $[-1,1]$ interval. This can be expressed by the following equation:
$$a_i^{(t+1)} = S\left(\beta \sum_{j=1}^{m} w_{ij} a_j^{(t)} + b_i\right)$$

where $a_i^{(t+1)}$ is the activity of neural unit $i$ in the next iteration, $S$ is the sigmoidal activation function ($S(x) = \tanh(x)$ in our implementation), $b_i$ is the bias of unit $i$ and $\beta$ is the so-called temperature parameter. For the sake of simplicity, we set $b_i = 0$ in all our experiments. We refer to this architecture as a functional connectivity-based Hopfield Neural Network (fcHNN). The relaxation of an fcHNN model can be conceptualized as the repeated application of the activity flow principle (Cole et al., 2016; Ito et al., 2017), simultaneously for all regions: $\hat{a}_i = \sum_{j \neq i} a_j w_{ij}$. The update rule also exhibits analogies with network control theory (Gu et al., 2015) and the inner workings of neural mass models, as applied e.g. in dynamic causal modeling (Daunizeau et al., 2012).
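To make the update rule concrete, the following minimal sketch implements the deterministic relaxation in Python with NumPy. The function name, the illustrative β value, the convergence tolerance and the iteration cap are our own choices for illustration, not part of the model specification; the bias is fixed at zero, as in the text.

```python
import numpy as np

def fchnn_relax(W, a0, beta=2.0, tol=1e-8, max_iter=10000):
    """Deterministic fcHNN relaxation: repeatedly apply the activity-flow
    update a <- tanh(beta * W @ a) (zero bias) until the state stops
    changing, i.e. an attractor state is reached."""
    a = np.asarray(a0, dtype=float)
    for _ in range(max_iter):
        a_next = np.tanh(beta * (W @ a))  # weighted activity flow + squashing
        if np.max(np.abs(a_next - a)) < tol:
            break
        a = a_next
    return a_next
```

Because tanh is an odd function and the bias is zero, relaxing from a sign-flipped initialization yields the sign-flipped attractor, matching the symmetric attractor pairs described below.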
Hopfield networks assign an energy value to each possible activity configuration (Hopfield, 1982; Koiran, 1994), which decreases during the relaxation procedure until reaching an equilibrium state with minimal energy (Figure 2A, top panel). We used a sufficiently large number of random initializations (n=100000) to obtain all possible attractor states of the connectome-based Hopfield network in study 1 (Figure 2A, bottom panel). Consistent with theoretical expectations, we observed that increasing the temperature parameter β led to an increasing number of attractor states (Figure 2E, left, Supplementary Figure 3), appearing in symmetric pairs (i.e. $a$ and $-a$, see Figure 2G).
FcHNNs, without any modifications, always converge to an equilibrium state. To incorporate stochastic fluctuations in neuronal activity (Robinson et al., 2005), we introduced weak Gaussian noise into the fcHNN relaxation procedure. This procedure, referred to as stochastic relaxation, prevents the system from reaching equilibrium and, somewhat similarly to stochastic DCM (Daunizeau et al., 2012), induces complex, heteroclinic system dynamics (Figure 2B).
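A stochastic version of the relaxation can be obtained by injecting weak Gaussian noise into every update step. The sketch below assumes the noise is added to each unit's input before the sigmoidal squashing; the β and σ values shown are purely illustrative (the optimization of σ is described below).

```python
import numpy as np

def fchnn_stochastic(W, a0, beta=2.0, sigma=0.3, n_steps=1000, seed=0):
    """Stochastic relaxation: the deterministic fcHNN update plus weak
    Gaussian noise, yielding a sampled trajectory instead of a fixed point."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a0, dtype=float)
    states = np.empty((n_steps, a.size))
    for t in range(n_steps):
        noise = rng.normal(0.0, sigma, size=a.size)  # stochastic perturbation
        a = np.tanh(beta * (W @ a + noise))          # noisy activity-flow update
        states[t] = a
    return states
```

The returned state matrix is the kind of sampled trajectory used below to construct the low-dimensional projection.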
In order to enhance interpretability, we obtained the first two principal components (PCs) of the states sampled during the stochastic relaxation procedure. On the resulting low-dimensional embedding, which we refer to as the fcHNN projection, we observed a clear separation of the attractor states (Figure 2C), with the two symmetric pairs of attractor states located at the extremes of the first and second PC. To map the attractors’ basins onto the space spanned by the first two PCs (Figure 2C), we obtained the attractor state of each point visited during the stochastic relaxation and fit a multinomial logistic regression model to predict the attractor state from the first two PCs. The resulting model accurately predicted the attractor states of arbitrary brain activity patterns, achieving a cross-validated accuracy of 96.5% (permutation-based p<0.001). The attractor basins were visualized using the decision boundaries obtained from this model (Figure 2C). We propose this 2-dimensional fcHNN projection as a simplified representation of brain dynamics and use it as the basis for all subsequent analyses in this work.
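The construction of the projection and basin map can be sketched as follows, using scikit-learn. This is a minimal illustration on generic inputs (sampled states and their attractor labels), not the study's actual pipeline; the cross-validation scheme shown is a simple stand-in for the one described in Methods.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fchnn_projection(states, attractor_labels):
    """Embed sampled states into their first two PCs and fit a multinomial
    logistic regression predicting each state's attractor from the 2D
    coordinates; the decision regions approximate the attractor basins."""
    pca = PCA(n_components=2)
    coords = pca.fit_transform(states)
    clf = LogisticRegression(max_iter=1000)  # multinomial by default (lbfgs)
    accuracy = cross_val_score(clf, coords, attractor_labels, cv=5).mean()
    clf.fit(coords, attractor_labels)
    return pca, clf, accuracy
```

Plotting the classifier's decision boundaries over the 2D coordinates then yields the basin map shown in Figure 2C.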
Panel D of Figure 2 uses the fcHNN projection to visualize the conventional Hopfield relaxation procedure. It depicts the trajectories of individual activation maps (sampled randomly from the timeseries data in study 1) until they converge to one of the four attractor states. Panel E shows that, when weak noise is introduced to the system (stochastic relaxation), it no longer converges to an attractor state. The resulting path is still influenced by the attractor states’ gravity, producing heteroclinic dynamics that resemble the empirical timeseries data (example data in panel F).
In study 1, we investigated the convergence process of the functional connectivity-based HNN and contrasted it with a null model based on permuted variations of the connectome (retaining the symmetry of the matrix). This null model preserves the sparseness and degree distribution of the connectome but destroys its topological structure (e.g. clusteredness). We found that the topology of the original (unpermuted) functional brain connectome makes it significantly better suited to function as an attractor network than the permuted null model. While the original connectome-based HNN converged to an attractor state in fewer than 150 iterations in more than 50% of cases, the null model did not reach convergence in more than 98% of cases, even after 10000 iterations (Figure 2G, Supplementary Figure 4). This result was robustly observed, independent of the temperature parameter β. For the rest of the paper, we set the temperature parameter to the value providing the fastest convergence (median number of iterations: 107), resulting in 4 distinct attractor states. The primary motivation for this choice was to reduce the computational burden of further analyses. However, since attractor states emerge in a nested fashion with increasing temperature (i.e. the basin of a new attractor state is fully contained within the basin of a previous one), we expect the results of the following analyses to be qualitatively similar, albeit more detailed, with higher β values.
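One simple variant of such a permutation null model can be sketched as below. Note that this shuffle preserves the weight distribution and the symmetry of the matrix (and hence sparseness); matching the degree distribution exactly, as described above, requires a more constrained permutation scheme.

```python
import numpy as np

def permuted_connectome(W, seed=0):
    """Topology-destroying null model: shuffle the off-diagonal weights of a
    symmetric connectome while keeping the matrix symmetric, preserving the
    overall weight distribution."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(W.shape[0], k=1)
    vals = W[iu].copy()
    rng.shuffle(vals)                  # permute upper-triangular weights
    W_null = np.zeros_like(W)
    W_null[iu] = vals
    W_null = W_null + W_null.T         # mirror to restore symmetry
    np.fill_diagonal(W_null, np.diag(W))
    return W_null
```

Relaxing the HNN with `W_null` instead of `W` then provides the convergence baseline against which the empirical connectome is compared.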
We optimized the noise parameter σ of the stochastic relaxation procedure over 8 values on a logarithmic range up to 1, so as to maximize the similarity between the empirical data and the fcHNN-generated data (in terms of the distribution of timeframes over the attractor basins). We contrasted this similarity with two null models (Figure 2H). First, we generated null data as random draws from a multivariate normal distribution with its covariance matrix set to the functional connectome’s covariance matrix (partial correlation-based connectivity estimates). This serves as a baseline for generating data that optimally matches the empirical data in terms of distribution and spatial autocorrelation, based on the underlying covariance structure (and given Gaussian assumptions), but without any mechanistic model of the generative process, i.e. without modelling the non-linear, non-Gaussian effects and temporal autocorrelations stemming from recurrent activity flow. We found that the fcHNN only reached multistability above a minimal noise level, and that it provided a more accurate reconstruction of the real data than this null model for two of the investigated σ values (p=0.007 and 0.015; dissimilarity: 11.16 and 21.57, respectively). With our second null model, we investigated whether the fcHNN-reconstructed data is more similar to the empirical data than synthetic data with an identical spatial autocorrelation structure (generated by spatial phase randomization of the original volumes, see Methods). We found that fcHNNs significantly outperformed this null model for all sufficiently large σ values (p<0.001 for all). Based on this coarse optimization procedure, we fixed σ at the best-performing value for all subsequent analyses.
Reconstruction of resting state brain dynamics
The spatial patterns of the obtained attractor states exhibit high neuroscientific relevance and closely resemble previously described large-scale brain systems (Figure 3A). The first pair of attractors (mapped on PC1, the horizontal axis) represents two complementary brain systems that have previously been found in anatomical, functional, developmental, and evolutionary hierarchies, as well as in gene expression, metabolism, and blood flow (see Sydnor et al. (2021) for a review), and reported under various names, such as intrinsic and extrinsic systems (Golland et al., 2008), Visual-Sensorimotor-Auditory and Parieto-Temporo-Frontal “rings” (Cioli et al., 2014), “primary” brain states (Chen et al., 2018), the unimodal-to-transmodal principal gradient (Margulies et al., 2016; Huntenburg et al., 2018) or the sensorimotor-association axis (Sydnor et al., 2021). A common interpretation of these two patterns is that they represent (i) an “extrinsic” system linked to the immediate sensory environment and (ii) an “intrinsic” system for higher-level internal context, commonly referred to as the default mode network (Raichle et al., 2001). The second pair of attractors spans an orthogonal axis and resembles patterns commonly associated with perception-action cycles (Fuster, 2004), described as a gradient across sensory-motor modalities (Huntenburg et al., 2018), recruiting regions associated with active inference (e.g. motor cortices) and perceptual inference (e.g. visual areas).
The discovered attractor states demonstrate high replicability (mean Pearson’s correlation: 0.93) across the discovery dataset (study 1) and two independent replication datasets (studies 2 and 3, Figure 3C). Moreover, they were found to be significantly more robust to noise added to the connectome than nodal strength scores (used as a reference; see Supplementary Figure 8 for details).
Further analyses in study 1 showed that connectome-based Hopfield models accurately reconstruct multiple characteristics of real resting-state data. First, the two axes (first two PCs) of the fcHNN projection accounted for a substantial amount of variance in the real resting-state fMRI data in study 1 and generalized well to out-of-sample data (study 2) (Figure 3E). The variance explained by the fcHNN projection significantly exceeded that of the first two PCs derived directly from the real resting-state fMRI data itself (0.364 in the out-of-sample analysis).
Second, during stochastic relaxation, the fcHNN model spent approximately three-quarters of the time in the basins of the first pair of attractor states and one-quarter in the basins of the second pair (approximately equally distributed within pairs). We observed similar temporal occupancies in the real data (Figure 3D, left column), statistically significant against two different null models (Supplementary Figure 5). Fine-grained details of the bimodal distribution observed in the real resting-state fMRI data were also convincingly reproduced by the fcHNN model (Figure 3F and Figure 2D, second column).
Third, not only the spatial activity patterns but also the timeseries generated by the fcHNN are similar to empirical timeseries data. Beyond the visual similarity shown in Figure 2E and F, we observed a statistically significant similarity between the average trajectories of fcHNN-generated and real timeseries “flow” (i.e. the characteristic timeframe-to-timeframe transition direction), as compared to null models with zero temporal autocorrelation (randomized timeframe order; Figure 3D, third column; see Methods for analysis details).
Finally, fcHNNs were found to generate signals that preserve the covariance structure of the real functional connectome, indicating that dynamic systems of this type (including the brain) inevitably “leak” their underlying structure into the activity timeseries, strengthening the construct validity of our approach (Figure 3D).
An explanatory framework for task-based brain activity
Beyond reproducing various characteristics of spontaneous brain dynamics, fcHNNs can also be used to model responses to various perturbations. We obtained task-based fMRI data from a study by Woo et al. (2015) (study 4, n=33, see Figure 3), investigating the neural correlates of pain and its self-regulation.
We found that activity changes due to pain (taking into account hemodynamics, see Methods) were characterized on the fcHNN projection by a shift towards the attractor state of action/execution (permutation test for mean projection difference by randomly swapping conditions, p<0.001, Figure 4A, left). Energies, as defined by the fcHNN, were also significantly different between the two conditions (p<0.001), with higher energies during pain stimulation.
When participants were instructed to up- or downregulate their pain sensation (resulting in increased and decreased pain reports, respectively, and differential brain activity in the nucleus accumbens, NAc; see Woo et al. (2015) for details), we observed further changes in the location of momentary brain activity patterns on the fcHNN projection (p<0.001, Figure 4A, right), with downregulation pulling brain dynamics towards the attractor state of internal context and perception. Interestingly, self-regulation did not trigger significant energy changes (p=0.36).
Next, we conducted a “flow analysis” on the fcHNN projection, quantifying how the average timeframe-to-timeframe transition direction differs between conditions (see Methods). This analysis unveiled that during pain (Figure 4B, left side), brain activity tends to gravitate towards a distinct point on the projection, located on the boundary between the basins of the internal and action attractors, which we term the “ghost attractor” of pain (similar to Vohryzek et al. (2020)). In case of downregulation (as compared to upregulation), brain activity is pulled away from the pain-related “ghost attractor” (Figure 4C, left side), towards the attractor of internal context.
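The flow analysis can be sketched as binning the 2D projection and averaging the frame-to-frame step vectors within each bin. The bin count and the equal-width binning scheme below are our own illustrative choices; see Methods for the exact definition.

```python
import numpy as np

def flow_map(coords, n_bins=10):
    """Average timeframe-to-timeframe transition direction per bin of a 2D
    projection: returns an (n_bins, n_bins, 2) array of mean step vectors
    plus the per-bin transition counts."""
    steps = np.diff(coords, axis=0)                      # transition vectors
    edges_x = np.linspace(coords[:, 0].min(), coords[:, 0].max(), n_bins + 1)
    edges_y = np.linspace(coords[:, 1].min(), coords[:, 1].max(), n_bins + 1)
    ix = np.clip(np.digitize(coords[:-1, 0], edges_x) - 1, 0, n_bins - 1)
    iy = np.clip(np.digitize(coords[:-1, 1], edges_y) - 1, 0, n_bins - 1)
    flow = np.zeros((n_bins, n_bins, 2))
    counts = np.zeros((n_bins, n_bins))
    for k in range(len(steps)):
        flow[ix[k], iy[k]] += steps[k]
        counts[ix[k], iy[k]] += 1
    nonempty = counts > 0
    flow[nonempty] /= counts[nonempty][:, None]          # mean step per bin
    return flow, counts
```

Contrasting the per-bin mean vectors between conditions (e.g. pain vs. rest) then reveals condition-specific attraction points such as the “ghost attractor” described above.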
Our fcHNN was able to accurately reconstruct these non-linear dynamics after adding a small amount of realistic “control signal” (similarly to network control theory, see e.g. Liu et al. (2011) and Gu et al. (2015)). To simulate the alterations in brain dynamics during pain stimulation, we acquired a meta-analytic pain activation map (Zunhammer et al., 2021, n=603) and incorporated it as a control signal added at each iteration of the stochastic relaxation procedure. The ghost attractor found in the empirical data was reproduced by the model across a relatively wide range of signal-to-noise ratio (SNR) values (Supplementary Figure 6). Results with SNR=0.005 are presented in Figure 4B, right side (Pearson’s r = 0.46, p=0.005 based on randomizing conditions on a per-participant basis).
The same model was also able to reconstruct the observed non-linear differences in brain dynamics between the up- and downregulation conditions (Pearson’s r = 0.62, p=0.023) without any further optimization (SNR=0.005, Figure 4C, right side). The only change we made to the model was the addition (downregulation) or subtraction (upregulation) of a control signal in the NAc (the region in which Woo et al. (2015) observed significant changes between up- and downregulation), introducing a signal difference of ΔSNR=0.005 (the same value we found optimal in the pain analysis). Results were reproducible with lower NAc SNRs, too (Supplementary Figure 7).
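Adding a control signal to the stochastic relaxation can be sketched as below. How the control map is scaled relative to the noise via the `snr` parameter is our own simplified reading of the procedure, and the β/σ defaults are illustrative; see Methods of the original analyses for the exact definitions.

```python
import numpy as np

def fchnn_with_control(W, a0, control, beta=2.0, sigma=0.3, snr=0.005,
                       n_steps=1000, seed=0):
    """Stochastic relaxation with a weak, constant task-related control
    signal (e.g. a meta-analytic activation map) added at every iteration."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a0, dtype=float)
    u = snr * sigma * np.asarray(control, dtype=float)  # scale control vs. noise
    states = np.empty((n_steps, a.size))
    for t in range(n_steps):
        noise = rng.normal(0.0, sigma, size=a.size)
        a = np.tanh(beta * (W @ a + noise + u))         # perturbed update
        states[t] = a
    return states
```

Running the flow analysis on the resulting trajectory, with and without the control signal, yields the simulated condition contrasts shown on the right-hand panels of Figure 4.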
To provide a comprehensive picture of how tasks and stimuli other than pain map onto the fcHNN projection, we obtained various task-based meta-analytic activation maps from Neurosynth (see Methods) and plotted them on the fcHNN projection (Figure 4E). This analysis reinforced and extended our interpretation of the four investigated attractor states and shed more light on how various functions map onto the axes of internal vs. external context and perception vs. action. In the coordinate system of the fcHNN projection, visual processing falls into the “external-perception” domain, sensory-motor processes into the “external-active” domain, language, verbal cognition and working memory into the “internal-active” region, and long-term memory as well as social and autobiographic schemata into the “internal-perception” regime (Figure 4F).
Clinical relevance
We obtained data from n=172 autism spectrum disorder (ASD) and typically developing control (TDC) individuals, acquired at the New York University Langone Medical Center, New York, NY, USA (NYU) and generously shared in the Autism Brain Imaging Data Exchange dataset (study 7: ABIDE; Di Martino et al., 2014). After excluding high-motion cases (with the same approach as in studies 1-4, see Methods), we visualized the distribution of timeframes on the fcHNN projection separately for the ASD and TDC groups (Figure 5A). First, we assigned all timeframes to one of the 4 attractor states with the fcHNN from study 1 and found several significant differences in the mean activity in the attractor basins (see Methods) of the ASD group as compared to the respective controls (Figure 5B). The strongest differences were found on the “action-perception” axis (Table 1), with increased activity of the sensory-motor and middle cingulate cortices during “action-execution” related states and increased visual and decreased sensory and auditory activity during “perception” states, likely reflecting the widely acknowledged, yet poorly understood, perceptual atypicalities in ASD (Hadad & Schwartz, 2019). ASD-related changes on the internal-external axis were characterized by stronger involvement of the posterior cingulate, the precuneus, the nucleus accumbens, the dorsolateral prefrontal cortex (dlPFC), the cerebellum (Crus II, lobule VII) and inferior temporal regions during activity of the internalizing subsystem (Table 1). While similar, default mode network (DMN)-related changes have often been attributed to an atypical integration of information about the “self” and the “other” (Padmanabhan et al., 2017), a more detailed fcHNN analysis may help to further disentangle the specific nature of these changes.
Table 1: The top ten largest changes in average attractor-state activity between autistic and control individuals. Mean attractor-state activity changes are presented in order of their absolute effect size. All p-values are based on permutation tests (shuffling the group assignment) and corrected for multiple comparisons (Bonferroni correction). For a comprehensive list of significant findings, see {numref}`Supplementary Figure %s <si_clinical_results_table>`.
| region | attractor | effect size | p-value |
|---|---|---|---|
| primary auditory cortex | perception | -0.126 | <0.0001 |
| middle cingulate cortex | action | 0.109 | <0.0001 |
| cerebellum lobule VIIb (medial part) | internal context | 0.104 | <0.0001 |
| mediolateral sensorimotor cortex | perception | -0.099 | 0.00976 |
| precuneus | action | 0.098 | <0.0001 |
| middle superior temporal gyrus | perception | -0.098 | <0.0001 |
| frontal eye field | perception | -0.095 | <0.0001 |
| dorsolateral sensorimotor cortex | perception | -0.094 | 0.00976 |
| posterior cingulate cortex | action | 0.092 | <0.0001 |
| dorsolateral prefrontal cortex | external context | -0.092 | <0.0001 |
Thus, we contrasted the characteristic trajectories derived from the fcHNN models of the two groups (initialized with the respective group-level functional connectomes). Our fcHNN-based flow analysis predicted that in ASD, states are more likely to return from the extremes of the internal-external axis towards the middle of the projection (noisier states) and more likely to transition towards the extremes of the action-perception axis (Figure 5C). We observed a highly similar pattern in the real data (Pearson’s correlation: 0.66), statistically significant after permutation testing (shuffling the group assignment, p=0.009).
Discussion
In this study, we have introduced and validated a simple and robust network-level generative computational framework that elucidates how activity propagation within the functional connectome orchestrates large-scale brain dynamics, leading to the spontaneous emergence of brain states, smooth gradients among them, and characteristic dynamic responses to perturbations.
The construct validity of our model is rooted in the activity flow principle, first introduced by Cole et al. (2016). The activity flow principle states that activity in a brain region can be predicted by a weighted combination of the activity of all other regions, where the weights are set to the functional connectivity of those regions with the held-out region. This principle has been shown to hold across a wide range of experimental and clinical conditions (Cole et al., 2016; Ito et al., 2017; Mill et al., 2022; Hearne et al., 2021; Chen et al., 2018). The proposed approach is based on the intuition that the repeated, iterative application of the activity flow equation exhibits close analogies with a type of recurrent artificial neural network known as the Hopfield network (Hopfield, 1982).
Hopfield networks have been widely acknowledged for their relevance to brain function, including the ability to store and recall memories (Hopfield, 1982), self-repair (Murre et al., 2003), a staggering robustness to noisy or corrupted inputs (Hertz et al., 1991; see also Supplementary Figure 8) and the ability to produce multistable dynamics organized by the “gravitational pull” of a finite number of attractor states (Khona & Fiete, 2022). While many such properties of Hopfield networks have previously been proposed as models of micro-scale neural systems (see Khona & Fiete (2022) for a review), the proposed link between macro-scale activity propagation and Hopfield networks allows transferring the vast body of knowledge on Hopfield networks to the study of large-scale brain dynamics.
Integrating Cole’s activity flow principle with the HNN architecture mandates initializing the network weights with functional connectivity values, specifically partial correlations, as suggested by Cole et al. (2016). Considering the functional connectome as the weights of a neural network distinguishes our methodology from conventional biophysical and phenomenological computational modeling strategies, which usually rely on the structural connectome to model polysynaptic connectivity (Cabral et al., 2017; Deco et al., 2012; Golos et al., 2015; Hansen et al., 2015). Given the challenges of accurately modelling structure-function coupling in the brain (Seguin et al., 2023), such models are currently limited in terms of reconstruction accuracy, hindering translational applications. By working with direct, functional MRI-based activity flow estimates, fcHNNs bypass the challenge of modelling structural-functional coupling and are able to provide a more accurate representation of the brain’s dynamic activity propagation (although at the cost of losing the ability to provide biophysical detail on the underlying mechanisms). Another advantage of the proposed model is its simplicity. While many conventional computational models rely on the optimization of a large number of free parameters, the basic form of the fcHNN approach comprises solely two, easily interpretable “hyperparameters” (temperature and noise) and yields notably consistent outcomes across an extensive range of these parameters (Supplementary Figures 1, 3, 5, 6, 7). To underscore the potency of this simplicity and stability, in the present work we avoided any unnecessary parameter optimization, leaving a negligible chance of overfitting. It is likely, however, that extensive parameter optimization could further improve the accuracy of the model.
Further, the fcHNN approach allows us to leverage knowledge about the underlying ANN architecture. Specifically, Hopfield attractor dynamics provide a mechanistic account for the emergence of large-scale canonical brain networks (Zalesky et al., 2014), and shed light on the origin of characteristic task responses, which are accounted for by “ghost attractors” in the system Deco & Jirsa, 2012Vohryzek et al., 2020. As fcHNNs do not need to be trained to solve any explicit task, they are well suited to examine spontaneous brain dynamics. However, it is worth mentioning that fcHNNs are also compatible with the neuroconnectionist approach (Doerig et al., 2021). Like any other ANN, fcHNNs can be further trained via established ANN training techniques (e.g. the Hebbian learning rule) to “solve” various tasks or to match developmental dynamics or pathological alterations. In this promising future direction, the training procedure itself becomes part of the model, providing testable hypotheses about the formation, and various malformations, of brain dynamics.
Given its simplicity, it is noteworthy how well the fcHNN model is able to reconstruct and predict brain dynamics under a wide range of conditions. First and foremost, we found that the topology of the functional connectome seems well suited to function as an attractor network, as it converges much faster than permuted null models. Second, we found that the two-dimensional fcHNN projection explains more variance in real resting state fMRI data than the first two principal components derived from the data itself. This may indicate that, through the known noise tolerance of the underlying ANN architecture, fcHNNs are able to capture essential principles of the underlying dynamic processes even if our empirical measurements are corrupted by noise and a low sampling rate. Indeed, fcHNN attractor states were found to be robust to noisy weights (Supplementary Figure 8) and highly replicable across datasets acquired at different sites, with different scanners and imaging sequences (studies 2 and 3). This high level of replicability allowed us to re-use the fcHNN model constructed with the connectome of study 1 for all subsequent analyses, without any further fine-tuning or study-specific parameter optimization.
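Attractor states themselves can be enumerated by switching the noise off and relaxing the network from many random initializations until a fixed point is reached. The sketch below is illustrative only: the function names are ours, and the damping factor `alpha` is a numerical-stability choice for the synchronous update, not a model parameter from the paper.

```python
import numpy as np

def relax(W, x0, beta=0.5, alpha=0.5, tol=1e-8, max_iter=10_000):
    """Noise-free, damped relaxation toward a fixed point (attractor state)."""
    x = x0.copy()
    for _ in range(max_iter):
        x_new = (1 - alpha) * x + alpha * np.tanh(beta * (W @ x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(1)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

# collect the distinct endpoints (treating sign-flipped pairs as one)
attractors = []
for _ in range(50):
    a = relax(W, rng.standard_normal(n))
    if not any(np.allclose(a, b, atol=1e-4) or np.allclose(a, -b, atol=1e-4)
               for b in attractors):
        attractors.append(a)
```

The same restart procedure run on a permuted connectome provides a simple null against which convergence speed can be compared.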
Conceptually, the notion of a global attractor model of the brain network is not new Freeman, 1987Deco & Jirsa, 2012Vohryzek et al., 2020Deco et al., 2012Golos et al., 2015Hansen et al., 2015. The present work shows, however, that the brain as an attractor network necessarily ‘leaks its internal weights’ in the form of partial correlations across the regional timeseries. This indicates that partial correlations across neural timeseries data from different regions (i.e. functional connectivity) may be a more straightforward entry point to investigating the brain’s attractor dynamics than estimates of structural connectedness. Thereby, the fcHNN approach provides a simple and interpretable way to infer and investigate the attractor states of the brain, without the need for additional assumptions about the underlying biophysical details. This is a significant advantage, as the functional connectome can be easily and non-invasively acquired in humans, while the biophysical details required by other models are hard to measure or estimate accurately.
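These ‘leaked weights’ are straightforward to estimate from data. A minimal sketch of computing a partial-correlation connectome from regional timeseries via the precision (inverse covariance) matrix; real pipelines typically add shrinkage or other regularization, which is omitted here, and the toy data are random.

```python
import numpy as np

def partial_correlation(ts):
    """Partial-correlation matrix from a (timepoints x regions) array,
    computed from the precision (inverse covariance) matrix.
    Illustrative sketch without the shrinkage used in real pipelines."""
    prec = np.linalg.inv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 0.0)  # exclude self-connections
    return pcorr

rng = np.random.default_rng(2)
ts = rng.standard_normal((200, 5))  # toy timeseries: 200 volumes, 5 regions
W = partial_correlation(ts)
```

The resulting symmetric, zero-diagonal matrix is exactly the form required to serve as the weight matrix of a Hopfield network.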
Furthermore, here we complement previous work on large-scale brain attractor dynamics by demonstrating that the reconstructed attractor states are not merely local minima in the state-space but act as a driving force for the dynamic trajectories of brain activity. We argue that attractor dynamics may be the main driving factor behind the spatial and temporal autocorrelation structure of the brain, recently described to be predictive of network topology in relation to age, subclinical symptoms of dementia, and pharmacological manipulations with serotonergic drugs Shinn et al., 2023. Nevertheless, attractor states should not be confused with the conventional notion of brain states Chen et al., 2015 or with large-scale functional gradients Margulies et al., 2016. In the fcHNN framework, attractor states can rather be conceptualized as “Platonic idealizations” of brain activity that are continuously approximated - but never reached - by the brain, resulting in re-occurring patterns (brain states) and smooth gradual transitions (large-scale gradients).
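The “gravitational pull” of an attractor can be made concrete via the network’s energy. Below is a sketch of the classical quadratic Hopfield energy term, whose local minima correspond to attractor states (the continuous model’s full Lyapunov function carries an additional integral term, omitted here for brevity; names and data are illustrative).

```python
import numpy as np

def hopfield_energy(x, W):
    # quadratic Hopfield energy term; lower energy means a stronger
    # "pull" of the surrounding attractor basin on the trajectory
    return -0.5 * float(x @ W @ x)

rng = np.random.default_rng(4)
n = 6
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

x = rng.standard_normal(n)
e = hopfield_energy(x, W)
```

Because this energy is quadratic, every activity pattern has a sign-flipped twin with identical energy, which is why attractor states of a bias-free Hopfield network come in antagonistic pairs.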
Relying on previous work, we can establish a relatively straightforward (although somewhat speculative) correspondence between attractor states and brain function, mapping brain activation onto the axes of internal vs. external context Golland et al., 2008Cioli et al., 2014, as well as perception vs. action Fuster, 2004. This four-attractor architecture exhibits an appealing analogy with Friston’s free energy principle Friston et al., 2006, which postulates the necessary existence of brain subsystems for active and perceptual inference and proposes that the dynamical dependencies that drive the flow of information in the brain can be represented by a hierarchically nested structure (e.g. an external and an internal subsystem) that may be an essential ingredient of conscious Ramstead et al., 2023 and autonomous Lee et al., 2023 agents.
Both conceptually and in terms of analysis practices, resting and task states are often treated as separate phenomena. In the fcHNN framework, however, the differentiation between task and resting states is an artificial dichotomy. Task-based brain activity in the fcHNN framework is not a mere response to external stimuli in certain brain locations but a perturbation of the brain’s characteristic dynamic trajectories, with increased preference for certain locations on the energy landscape (“ghost attractors”). In our analyses, the fcHNN approach captured and predicted participant-level activity changes induced by pain and its self-regulation, and gave a mechanistic account of how relatively small activity changes in a single region (the NAcc) may result in a significantly altered pain experience. Our control-signal analysis is different from, but compatible with, linear network control theory-based approaches Liu et al., 2011Gu et al., 2015. Combining network control theory with the fcHNN approach could provide a powerful framework for understanding the effects of various tasks, conditions and interventions (e.g. brain stimulation) on brain dynamics.
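In this view, a task enters the model simply as an additive control signal in the same update rule. A minimal sketch, with hypothetical names and arbitrary parameter values; the stimulated “region” and signal strength are illustrative choices, not those analyzed in the paper.

```python
import numpy as np

def fchnn_step_ctrl(state, W, control, beta=0.05, sigma=0.3, rng=None):
    """Stochastic fcHNN update with an additive control signal biasing
    the activity flow, shifting trajectories toward a 'ghost attractor'."""
    rng = rng or np.random.default_rng()
    drive = W @ state + control + sigma * rng.standard_normal(state.shape)
    return np.tanh(beta * drive)

rng = np.random.default_rng(3)
n = 10
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

task = np.zeros(n)
task[0] = 2.0  # persistent drive to a single toy 'region'

x = rng.standard_normal(n)
states = []
for _ in range(500):
    x = fchnn_step_ctrl(x, W, task, rng=rng)
    states.append(x)
mean_act = np.mean(states, axis=0)  # the driven region's mean activity shifts
```

Contrasting such a perturbed trajectory against the unperturbed (control = 0) dynamics is the fcHNN analogue of a task-vs-rest comparison.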
Brain dynamics can be perturbed not only by tasks or other experimental or naturalistic interventions, but also by pathological alterations. Here we provide an initial demonstration (study 7) of how fcHNN-based analyses can characterize and predict altered brain dynamics in autism spectrum disorder (ASD). The observed ASD-associated changes in brain dynamics are indicative of a reduced ability to flexibly switch between perception and internal representations, corroborating previous findings that, in ASD, sensory-driven connectivity transitions do not converge to transmodal areas Hong et al., 2019. Such findings are in line with previous reports of a reduced influence of context on the interpretation of incoming sensory information in ASD (e.g. the violation of Weber’s law) Hadad & Schwartz, 2019.
Our findings open up a series of exciting opportunities for the better understanding of brain function in health and disease. First, the 2-dimensional fcHNN projection offers a simple framework not only for the visualization but also for the interpretation of brain activity patterns, as it conceptualizes changes related to various behavioral or clinical states or traits as shifts in brain dynamics relative to the brain’s attractor states. Second, fcHNN analyses may provide insights into the causes of changes in brain dynamics by, for instance, identifying the regions or connections that act as an “Achilles heel” in generating such changes. Such control analyses could, for instance, aid the differentiation of primary causes and secondary effects of activity or connectivity changes in various clinical conditions. Third, the fcHNN approach can provide testable predictions about the effects of pharmacological interventions as well as non-invasive brain stimulation (e.g. transcranial magnetic or direct current stimulation, focused ultrasound, etc.) and neurofeedback. Obtaining the optimal stimulation or treatment target within the fcHNN framework (e.g. by means of network control theory Liu et al., 2011) is one of the most promising future directions, with the potential to significantly advance the development of novel, personalized treatment approaches.
The proposed approach is not without limitations. First, the fcHNN model is obviously a simplification of the brain’s dynamics, and it does not aim to explain (i) the brain’s ability to perform certain computations, (ii) brain regions’ ability to perform certain functions or (iii) biophysical details underlying (altered) polysynaptic connections. Nevertheless, our approach showcases that many characteristics of brain dynamics, like multistability, temporal autocorrelations, states and gradients, can be explained, and predicted, by a very simple nonlinear phenomenological model. Second, our model assumes a stationary connectome, which seems to contradict notions of dynamic connectivity. However, with realistically changing control signals, our model can easily reconstruct dynamic connectivity changes, which still stem from an underlying, stationary functional connectivity architecture. This is in line with the notion of “latent functional connectivity”; an intrinsic brain network architecture built up from connectivity properties that are persistent across brain states McCormick et al., 2022.
In this initial work, we presented the simplest possible implementation of the fcHNN concept. It is clear that the presented analyses exploit only a small proportion of the richness of the full state-space dynamics reconstructed by the fcHNN model. There are many potential ways to further improve the utility of the fcHNN approach. Increasing the number of reconstructed attractor states (by increasing the temperature parameter), investigating higher-dimensional dynamics, fine-tuning the hyperparameters, and testing the effect of different initializations and perturbations are all important directions for future work, with the potential to further improve the model’s accuracy and usefulness.
Conclusion¶
Here we have proposed a lightweight, high-level computational framework that accurately captures and predicts brain dynamics under a wide range of conditions, including resting states, task-induced activity changes and brain disorders. The framework models large-scale activity flow in the brain with a recurrent artificial neural network architecture that, instead of being trained to solve specific tasks or mimic certain dynamics, is simply initialized with the empirical functional connectome. The framework identifies neurobiologically meaningful attractor states and provides a model for how these restrict brain dynamics. The proposed model establishes a conceptual link between connectivity and activity, provides a mechanistic account for the emergence of brain states, gradients and temporal autocorrelation structure and offers a simple, robust, and highly interpretable computational alternative to conventional descriptive approaches to investigating brain function. The generative nature of our proposed model opens up a wealth of opportunities for future research, including predicting the effect, and understanding the mechanistic bases, of various interventions; thereby paving the way for designing novel treatment approaches.
Acknowledgements¶
The work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation; projects ‘TRR289 - Treatment Expectation’, ID 422744262 and ‘SFB1280 - Extinction Learning’, ID 316803389) and by IBS-R015-D1 (Institute for Basic Science; C.W.-W.).
Analysis source code¶
https://
Project website¶
https://
Data availability¶
Studies 1, 2 and 4 are available at openneuro.org (ds002608, ds002608, ds000140). Data for study 3 are available upon request. Data for studies 5-6 are available at the github page of the project: https://
- Buzsaki, G. (2006). Rhythms of the Brain. Oxford university press.
- Bassett, D. S., & Sporns, O. (2017). Network neuroscience. Nature Neuroscience, 20(3), 353–364.
- Liu, X., & Duyn, J. H. (2013). Time-varying functional network information extracted from brief instances of spontaneous brain activity. Proceedings of the National Academy of Sciences, 110(11), 4392–4397.
- Zalesky, A., Fornito, A., Cocchi, L., Gollo, L. L., & Breakspear, M. (2014). Time-resolved resting-state brain networks. Proceedings of the National Academy of Sciences, 111(28), 10341–10346.
- Margulies, D. S., Ghosh, S. S., Goulas, A., Falkiewicz, M., Huntenburg, J. M., Langs, G., Bezgin, G., Eickhoff, S. B., Castellanos, F. X., Petrides, M., & others. (2016). Situating the default-mode network along a principal gradient of macroscale cortical organization. Proceedings of the National Academy of Sciences, 113(44), 12574–12579.