Download as pdf or txt
Download as pdf or txt
You are on page 1of 15

Spatial Embedding Imposes Constraints on the Network Architectures

of Neural Systems
Jennifer Stiso1,2 and Danielle S. Bassett1,3,4,5,6
1
Department of Bioengineering, School of Engineering and Applied Sciences,
University of Pennsylvania, Philadelphia, PA, 19104
5
Department of Physics & Astronomy, College of Arts and Sciences,
University of Pennsylvania, Philadelphia, PA, 19104
3
Department of Electrical and Systems Engineering, School of Engineering and Applied Sciences,
University of Pennsylvania, Philadelphia, PA, 19104
4
Department of Neurology, Perelman School of Medicine,
University of Pennsylvania, Philadelphia, PA, 19104
2
Neuroscience Graduate Group, Perelman School of Medicine,
University of Pennsylvania, Philadelphia, PA, 19104 and
6
arXiv:1807.04691v1 [q-bio.NC] 12 Jul 2018

To whom correspondence should be addressed: dsb@seas.upenn.edu

A fundamental understanding of the network architecture of the brain is necessary for the further
development of theories explicating circuit function. Recent progress has capitalized on quantitative
tools from network science to parsimoniously describe and predict neural activity and connectiv-
ity across multiple spatial and temporal scales. Perhaps as a historical artifact of its origins in
mathematics or perhaps as a derivative of its initial application to abstract informational systems,
network science provides many methods and summary statistics that address the networks topologi-
cal characteristics with little or no thought to its physical instantiation. Yet, for embedded systems,
physical laws can directly constrain processes of network growth, development, and function, and
an appreciation of those physical laws is therefore critical to an understanding of the system. Re-
cent evidence demonstrates that the constraints imposed by the physical shape and volume of the
brain, and by the mechanical forces at play in its development, have marked effects on the observed
network topology and function. Here, we review the rules imposed by space on the development
of neural networks and show that these rules give rise to a specific set of complex topologies. We
present evidence that these fundamental wiring rules affect the repertoire of neural dynamics that
can emerge from the system, and thereby inform our understanding of network dysfunction in dis-
ease. We also discuss several computational tools, mathematical models, and algorithms that have
proven useful in delineating the effects of spatial embedding on a given networked system and are
important considerations for addressing future problems in network neuroscience. Finally, we out-
line several open questions regarding the network architectures that support circuit function, the
answers to which will require a thorough and honest appraisal of the role of physical space in brain
network anatomy and physiology.

NETWORK TOPOLOGY VERSUS GEOMETRY by studies of regional activation (Box 1).


IN NEURAL SYSTEMS
In particular, several fundamental questions in neuro-
science are quintessentially network questions concern-
In contemporary neuroscience, increasing volumes of ing the physical relationships between functional units.
data are being brought to bear on the question of how How does the physical structure of a circuit affect its
heterogeneous and distributed interactions between neu- function? How does coordinated activity at small spa-
ral units might give rise to complex behaviors. Such tial scales give rise to emergent phenomena at large spa-
interactions form characteristic patterns across multiple tial scales? How might alterations in neurodevelopmental
spatial scales, spanning from the relatively small scales of processes lead to circuit malfunction in psychiatric disor-
molecules and cells, to the relatively large scales of areas ders? How might pathology spread through cortical and
and lobes [1]. A natural language in which to describe subcortical tissue giving rise to the well-known clinical
such interactions is network science, which elegantly rep- presentations of neurological disease? These questions
resents interconnected systems as sets of nodes linked by collectively highlight the fact that the brain — and its
edges [2]. Intuitively, nodes often represent proteins, neu- multiple networks of interacting units — is physically em-
rons, subcortical nuclei, or large cortical areas, and edges bedded into a fixed three-dimensional enclosure. Natural
often represent either (i) structural links in the form of consequences of this embedding include diverse physical
chemical bonds, synapses, or white-matter tracts, or (ii) drivers of early connection formation and physical con-
functional links in the form of statistical relations be- straints on the resultant adult network architecture. An
tween nodal activity time series. Generally, the resultant understanding of the system’s constitution and basal dy-
network architecture can be fruitfully studied using tools namics therefore require not only approaches to quantify
from graph theory to obtain mechanistic insights perti- and predict network topology, but also tools, theories,
nent to cognition [3, 4], above and beyond those provided and methods to quantify and predict network geometry
2

and its role in both enabling and constraining system is costly, utilizing 20% of the body’s energy, despite com-
function. prising only 2% of its volume [12, 13]. Even the devel-
In this review, we provide evidence to support the no- opment of axons alone, comprising only a small portion
tion that a consideration of the brain’s physical embed- of cortical tissue, extorts a large material cost [7]. The
ding will prove critical for a holistic understanding of neu- existence of these pervasive costs motivated early work
ral circuit function. We focus our comments on the utility to postulate that wiring minimization is a fundamental
of informing this consideration with emerging computa- driver of connection formation. Consistent with this hy-
tional tools developed for the characterization of spatial pothesis, the connectomes of multiple species are pre-
networks. Indeed, while network science was originally dominantly comprised of wires extending over markedly
developed in the context of systems devoid of clear spatial short distances [13–19], and this observation holds across
characteristics [5], the field has steadily developed tools different methods of data collection [16, 18, 19].
and intuitions for spatially embedded network systems However, mounting evidence suggests that pressures
[6]. In the light of these recent technical developments, for wiring minimization may compete against pressures
we begin by recounting observations from empirical stud- for efficient communication [20–22]. Early evidence sup-
ies addressing the question of how brain networks are porting the role of efficient communication came from the
embedded into physical space. Next, we discuss the rele- observation that one can fix the network architecture of
vance of this spatial embedding for an understanding of inter-areal projections in the macaque cortex and then
network function and dysfunction. We complement these rearrange the location of areas in space to obtain a con-
empirical discussions with a more technical exposition on figuration with significantly (32%) lower wiring cost than
the relevant tools, methods, and statistical approaches to that present in the real system [21]. A similar method
be considered when analyzing brain networks. Lastly, we can be used to obtain a configuration of the C. elegans
close by outlining a few important future directions in neuronal connectome with 48% lower wiring cost than
methodological development and neuroscientific investi- that present in the real system [21]. Interestingly, the
gation that would benefit from an honest appraisal of the connections whose length is decreased most also tend to
role of space in neural network architecture and dynam- be those that shorten the characteristic path length –
ics. one of many ways to quantify how efficiently a network
can communicate [21, 23]. Consistent observations have
been made in human white matter tractography [20], the
PHYSICAL CONSTRAINTS ON NETWORK mouse inter-areal connectome [10], and dendritic arbors
TOPOLOGY AND GEOMETRY [7, 14]. Notably, computational models that instantiate
both constraints on wiring and efficient communication
The processes and influences that guide the formation produce topologies more similar to the true topologies
of structural connections in neural systems are disparate than models that instantiate a constraint on wiring min-
and varied [7, 8]. Evidence from genetics suggests that imization alone [11, 24]. Moreover, models that allow for
neurons with similar functions as operationalized by sim- changes in this tradeoff over developmental time peri-
ilar gene expression tend to have more similar connection ods better fit observed connectome growth patterns than
profiles than neurons with less similar functions [7, 9, 10]. prior models [25].
Of course, it is important to note that some spatial sim- It is precisely this balance between wiring minimiza-
ilarity of expression profiles is expected due to the in- tion and communication efficiency that is thought to pro-
fluence of small scale spatial gradients in growth factors duce the complex network topologies observed in neural
over periods of development [7]. However, evidence from systems, along with markedly precise spatial embedding
the Allen Brain Atlas suggests that interareal connec- [24, 26]. To better understand this spatially embedded
tivity profiles in rodent brains are even more correlated topology, it is useful to consider methods that can si-
with gene co-expression than expected simply based on multaneously (rather than independently) assess topol-
such spatial relationships [9]. This heightened correlation ogy and geometry. One such method that has proven
could be partially explained by observations in mathe- particularly useful in the study of neural systems from
matical modeling studies that neurons with similar in- mice to humans is Rentian scaling, which assesses the
puts (and therefore potentially performing similar func- efficiency of a network’s spatial embedding [20, 27–30].
tions) tend to have more similar connection profiles than Originally developed in the context of computer circuits,
neurons with dissimilar inputs [11]. Rentian scaling describes a power-law scaling relation-
Yet, while genetic coding and functional utility each ship between the number of nodes in a volume and the
play important roles, a key challenge lies in summariz- number of connections crossing the boundary of the vol-
ing the various constraints on connection formation in ume [7, 20]. The existence of such a power law relation-
a simple and intuitive theory that can guide future pre- ship with an exponent known as Rent’s exponent is con-
dictions. One particularly acclaimed candidate mecha- sistent with an efficient spatial embedding of a complex
nism for such a theory is that of physical constraints on topology [31, 32]. In turn that efficient spatial embedding
the development, maintenance, and use of connections. is thought to support a broad repertoire of functional
Metabolism related to neural architecture and function dynamics. For example, tracts that bridge disparate ar-
3

eas of cortex to increase communication efficiency despite networks [45]. It will be interesting in future to gain
greater wiring cost, also critically add to the functional a deeper understanding of the relations between cycles
diversity of the brain in a manner that is distinct from and modules, and their emergence through the spatially
that predicted by path length alone [33]. constrained processes of development.
At the global scale, a key conserved topological feature
is small-worldness, or the confluence of unexpectedly high
REFLECTIONS OF PHYSICAL CONSTRAINTS clustering and short path length (Box 1) [46]. Such an
IN LOCAL, MESOSCALE, AND GLOBAL architecture is thought to be particularly conducive to a
NETWORK TOPOLOGY balance between local information processing within the
clusters, and global information transmission across the
Across species, the brain consistently exhibits a set topologically long distance connections [47]. Similar to
of topological features at local, meso-, and global scales the existence of hubs, modules, and cavities, small-world
that can be relatively simply explained by spatial wiring architecture in a network can naturally arise from spa-
rules [34–36]. At the local scale, multiple modalities have tial constraints on wiring [48]. Intuitively, clusters tend
been used to demonstrate that a key conserved topologi- to form in spatially nearby regions in order to minimize
cal feature is the existence of hubs, or nodes of unexpect- wiring cost, while long distance connections facilitating
edly high degree [37, 38]. Such hubs emerge naturally efficient communication tend to form only occasionally
in computational models in which the location of nodes due to their elevated wiring cost [49]. In concert with
are fixed in space, and edges between nodes are rewired these empirical observations, computational models that
to miminize average wiring length and to maximize topo- account for wiring economy produce networks with small-
logical efficiency by minimizing the average shortest path world architecture reminiscent of that observed in real
length (Box 1). Interestingly, the number and degree of neural systems [50]. Collectively, these studies demon-
hubs varies systematically with the relative importance of strate the influence of parsimonious wiring rules on com-
the two constraints [14, 24] (Fig. 1). When wiring min- plex network topology. Future work could be directed to
imization is not enforced, networks become star graphs better understand the aspects of connectome topology
with a single giant hub [14, 24]; when both constraints that remain unexplained and thus may arise from more
are balanced, networks contain several hubs of varying subtle rules [8].
degrees, consistent with the topology observed in brain
networks [24]. It is notable that such constraints can
be implemented within the natural processes of devel- RELEVANCE OF NETWORK GEOMETRY FOR
opment; for example, in adult C. elegans, hub neurons DYNAMICS AND COGNITION
have been tracked back to the earliest born neurons in
the embryo, which accumulate a large number of connec- Pressures for wiring minimization and communica-
tions along the normative growth trajectory [26, 39]. tion efficiency can exist alongside developmental pro-
At the mesoscale, a key conserved topological feature is cesses that produce non-isotropically structured organs.
modularity, or the existence of internally dense and exter- Such processes include migration, elongation, segrega-
nally sparse communities of nodes [34, 40]. The strength tion, folding, and closure that accompany neurulation
of modularity in a network is commonly quantified using resulting in the bilaterally symmetric nervous system
a modularity quality index (Box 1). In computational composed of proencephalon, mesencephalon, rhomben-
models, this index obtained under pressures of wiring cephalon, and spinal cord as well as patterning across
minimization and communication efficiency (quantified multiple overlapping signaling gradients. It is intuitively
with path length) was more similar to that empirically possible that such processes could also explain the ob-
measured in the connectomes of the macaque and C. el- served differences in the network topologies of different
egans than to that obtained under either constraint sep- sectors of the brain [51, 52], which can impinge on the
arately [8, 24]. Again it is notable that such constraints functions that those sectors are optimized to perform
can be implemented within the natural processes of de- (Box 3). Indeed, prior work has noted the co-existence
velopment; for example, in C. elegans, communities form of complex structural topologies and spatial gradients of
when many neurons are born in a similar temporal win- specific function [53], although it has been difficult to
dow, and therefore typically share a common progenitor achieve a mechanistic understanding of exactly how the
type, spatial location, and genetic profile [26, 41]. Spaces two relate to one another. One particularly promising
between modules can form cavities or cycles, or intu- recent line of investigation has proposed the existence of
itively holes in the network, that can be identified with a set of primary spatial gradients that explain variance
emerging tools from applied algebraic topology (Box 2) in large-scale connectivity [54, 55]. In both humans and
[42]. The locations, prevalence, and weight structure of macaques, the primary axis of variance is bounded on
these cycles differs markedly between geometric and ran- one end by the transmodal default mode system, and on
dom networks [43, 44], with patterns of functional con- the other end by the unimodal sensory systems [55]. No-
nectivity among neurons exhibiting characteristics simi- tably, this gradient is tightly linked to the geometry of
lar to those observed in spatially constrained geometric the network, with the regions located at one end having
4

β=0 β = .8 β=1
B

Relative Temporal Cost (%)


Relative Spatial Cost (%)

Trade-off Parameter (β)

FIG. 1. Effect of wiring minimization and communication efficiency on network topology. Networks were generated
by modulating the balance between a constraint on wiring (in the figure referred to as a spatial cost) and a constraint on
information routing efficiency (in the figure referred to as a temporal cost). The parameter β, which ranges between 0 and 1,
tunes this balance by weighting spatial cost against temporal cost. When β = 0 only the spatial cost is considered, while when
β = 1 only the temporal cost is considered. (A) Examples of networks at different values of β when only the spatial constraint
exists (left), when only the temporal constraint exists (right), and when the two constraints are balanced (middle). Root nodes
are shown in green and all other nodes are shown in yellow. (B) Spatial costs (blue) and temporal costs (red) vary as a function
of β. This figure was adapted with permission from [14].

maximal spatial distances from the regions located at the sistent architecture of correlations between regional time
other end [55]. Additionally, the regions located at the series [59], the manner in which the across-region activity
peaks of the transmodal gradient have substantial overlap pattern at one time point is related to the across-region
with structural hubs in human connectomes [4, 56]. As activity pattern at other time points is not well under-
discussed earlier, these hub could serve to optimally bal- stood. Recent work addressing this gap has posited the
ance wiring cost and communication efficiency across the existence of so-called lag threads, or spatial similarities
connectome, in addition to explaining patterns of func- between whole-brain activity patterns at non-zero time
tional connectivity. Put simply, such evidence supports lags [60]. Unexplained by vasculature, the orthogonal
the notion that the cortex is fundamentally organized threads are thought to reflect slow, subthreshold changes
along a dimension of function from concrete to abstract, in the membrane potential of large neuronal populations
and that dimension manifests clearly in the network’s [61], a notion that is supported by subtle changes in
spatial embedding. lag thread structure across sleep and during cognitively
demanding tasks [60, 62]. Notably, regions of the de-
The diverse spatial location of cortical and subcortical fault mode (which coincide with peaks of the transmodal
areas has important implications for the patterns of neu- gradient discussed above) participate in lag thread mo-
ral dynamics that one would expect to observe. Consider, tifs, where changes in the activity in one region lead to
for example, the patterns of intrinsic activity noted con- changes in the activity of another region more frequently
sistently across species, individuals, and imaging modal- than expected in an appropriate statistical null model
ities [55, 57–59]. While prior work has reported a con-
5

[58, 59]. The existence of these dependencies is consistent implicated in epilepsy, schizophrenia is also a condition
with time-invariant, possibly structural features of brain marked by severe network disturbances that have broad
organization producing highly reproducible patterns of ramifications for cognitive function [22, 85, 86]. Some
activity. of these network alterations appear to selectively affect
connections of certain physical lengths, reflecting an al-
teration in the network’s spatial embedding [87]. Specif-
RELEVANCE OF NETWORK GEOMETRY FOR ically, evidence suggests a reduced hierarchical struc-
DISEASE ture and increased connection distance in the anatom-
ical connectivity of multimodal cortex in patients with
schizophrenia compared to healthy controls, indicative of
The spatial architecture of brain networks not only
less efficient spatial wiring [85]. Moreover, in functional
impacts our understanding of dynamics and cognition,
brain networks, patients display longer high-weight con-
but also our understanding of neurological disease and
nections, decreased clustering, and increased topological
psychiatric disorders. Mounting evidence suggests that
efficiency in comparison to healthy controls [87]. The
many diseases and disorders of mental health can be
lack of strong, short distance functional connections is
thought of fruitfully as network disorders, where the
in line with evidence from animal studies suggesting an
anatomy and physiology of cross-regional communication
over-pruning of synapses in childhood onset schizophre-
can go awry [63, 64]. Intuitively, spatial anisotropies of
nia [87]. Here, the intuitions gained from a consideration
developmental processes, spatial specificity of pathology,
of the network’s spatial embedding offer important di-
and spatial inhomogeneities of drug targets that either
rections for future work in linking non-invasive imaging
lead to or accompany these disorders can also explain al-
phenotypes with invasive biomarkers of neural dysfunc-
terations in the spatial characteristics of brain networks
tion in disease.
[65]. In this section we briefly discuss this correspondence
in epilepsy, a particularly common neurological disease,
and in schizophrenia, a particularly devastating psychi-
atric disorder. STATISTICS, NULL MODELS, AND
Despite a diverse pathophysiology and a renitent uni- GENERATIVE MODELS
fying biological mechanism, epilepsy is characterized by
altered network dynamics in the form of seizures that In the previous sections, we outlined developmental
display spatially consistent patterns. For example, an rules for efficient wiring and we discussed the reflections
ictal period often begins with a marked spatial decor- of these rules in spatial patterns of healthy and diseased
relation between distributed brain regions followed by a brain dynamics. Collectively, the studies that we
period in which abnormally synchronized activity prop- have reviewed motivate the broader use and further
agates in consistent spatial patterns [66, 67]. In addi- development of sophisticated and easily-implementable
tion to broad patterns of spatial decorrelation, individ- tools for the analysis of a network’s spatial embedding
ual siezures also show stereotyped patterns of both spi- [88]. Here we outline the current state of the field in
ral waves and travelling waves of activity [66, 68–73]. In developing effective network statistics, network null
silico studies have demonstrated that a simple adaptive models, and generative network models that account for
model of synaptically coupled and spatially embedded spatial embedding.
excitatory neurons can reproduce many basic features of
these waveforms, including their speed and the size of Network Statistics. A simple way to examine net-
the wavefront [70]. Yet, it is worth noting that travel- work architecture in the context of spatial embedding
ling waves are not unique to epilepsy, but also occur in is to incorporate the Euclidean distance of connections
healthy human and non-human primates where they are into local, meso-scale, and global statistics [87, 89]. Ar-
thought to play a role in transporting task-relevant infor- guably the simplest local statistics that remain spatially
mation [71, 74–79]. However, marked differences in wave sensitive are moments of the distribution of edge lengths
propagation in healthy and epileptic cortical tissue sug- in the network, including the mean, variance, skewness,
gests that the precise spatial progression is important, and kurtosis. One can also compute graph metrics that
potentially supported by distinct underlying microstruc- have been extended to consider space, such as the physi-
tures [80]. Finally, even interictal dynamics are altered cal network efficiency and the physical edge betweenness
in epilepsy, as manifest by marked decreases in average [90]. Intuitively, both begin by defining the length of the
functional connectivity across the brain combined with shortest physical (as opposed to topological) path along
local increases in functional connectivity and efficiency network edges between any two nodes. The physical net-
in default mode areas [81–83]. These connectivity pat- work efficiency then takes the inverse of the harmonic
terns have some utility in predicting seizure spread, but mean of this length, while the physical edge betweenness
the guiding principles leading to these changes and how provides the fraction of shortest physical paths between
they relate to fine scale patterns of activity remains un- all node pairs that traverse a given edge [91]. One could
clear [84]. also define a physical clustering coefficient in a similar
While its pathophysiology is quite distinct from that manner. Finally, one can assess the system for Rentian
6

FIG. 2. Spatial distribution of intrinsic neural activity. Principal gradients of functional connectivity calculated in the
structural connections of both humans and macaques. The first two principal gradients explained approximately 40% of the
observed variance. (A) (Left) A scatter plot of the first two principal gradients, with transmodal regions shown in red, visual
regions shown in blue, and sensorimotor regions shown in green. (Right) The same colors are used to show the distribution of
points visualized on a cortical surface. The pattern suggests the existence of a macroscale gradient of connectivity that reflects
the systematic integration of information across different sensory modalities. (B, Left) The minimum geodesic distance (mm)
between each point on the cortical surface and the positive peaks of the first principal gradient. The peaks are shown as white
circles. (B, Right) A scatter plot depicting the relationship between distance and location on the trans- to uni-modal gradient.
Put differently, transmodal regions with high values in the principal gradient are maximally distant from unimodal regions with
low values in the principal gradient. This figure was adapted with permission from [55].

scaling as described earlier, providing information on how physical space of the network’s embedding. First,
efficiently the complex network topology has been embed- one can directly incorporate the wiring minimization
ded into the physical space [20, 27–30]. In the context constraint observed in brain networks by defining a null
of neural systems, these spatially informed graph statis- model with a probability of connection between two
tics can be used to account for the physical nature of nodes that decays exponentially as a function of distance
information processing, propagation, and transmission. [99]. Using this model, one can detect different, and
Complementing local and global graph statistics is more spatially distributed modules than those obtained
an assessment of a network’s community structure, a when one uses the configuration model [99]. Second, one
mesoscale property frequently assessed by considering can employ gravity models [100], which account for the
the existence and strength of network modules [92]. number of connections expected given a certain distance
From that community structure, one can determine the (typically a power law or inverse of distance), weighted
spatial embedding of communities, for example by as- by the relative importance of each location (typically
sessing their laterality in bilaterally symmetric systems a quantification of the population or size of a given
such as the brain [93–95]. One of the most com- location) [100, 101]. Third, one can employ radiation
mon ways to assess community structure is to maxi- models designed to capture flow of information between
mize a modularity quality function, which identifies as- regions, by weighting distance functions by the flux
sortative modules with dense within-module connectivity or flow of each location [101]. Of course, there exists
and sparse between-module connectivity [96] (although no single correct null model for community detection
see [97] for methods to identify non-assortative commu- that will suit every question in neuroscience. However,
nities). Mechanistically, this algorithm compares the we propose that many studies could test tighter, more
strength of observed connections between two nodes in a targeted hypotheses about community structure in brain
community to that expected under a given a null model. networks by using a null model that accounts for the
The most commonly used null model in this context is brain’s spatial nature.
the Newman-Girvan or configuration model, which pre-
serves the strength distribution of the network [96]. How- Network Null Models. When considering a network
ever, this null model operationalizes a purely topological representation of a neural system, one often computes
constraint – the strength distribution – and does not ac- a statistical quantitity of interest and then compares
knowledge any spatial constraints that may exist in the that quantity to that expected in a random network null
system. For this reason, many investigators across sci- model. If the observed quantity is significantly greater
entific domains have begun developing alternative null than or less than that expected, one concludes that the
models that account for physical laws [98] or physical network under study shows meaningful architecture of
contraints [99–101] on their system of interest. potential relevance to the biology. Perhaps the most com-
In the context of brain networks, it is worth con- mon random network null model is that which randomly
sidering three distinct null models for modularity permutes the locations of edges in the network while pre-
maximization that incorporate information about the serving the number of nodes, number of edges, and edge
7

weight distribution. However, one may also be inter- ical network are compared to the statistics of each of
ested to determine whether observed statistics are dif- the generative models with the goal of inferring which
ferent from what one would expect simply from the spa- wiring rule was most likely to have produced the ob-
tial embedding or wiring rules of the network [102–104]. served architecture [11, 28, 111]. Evidence from such
To address these questions, one can rewire the observed studies suggests that spatially embedded models tend to
network by conditionally swapping two links if the swap more accurately reproduce network measures of large-
preserves the mean wiring length of the network [103]. scale neural systems than models that do not account
By pairing this model with a reduced null model in which for space [28]. One particularly influential study consid-
connections are only swapped if they reduce connnection ered 13 generative models that all incorporated a wiring
length, one can assess the role of long distance connec- probability that increased with distance [111]. Consis-
tions in the network, which will be preserved in the spa- tent with other work, the authors found that the model
tial null but not preserved in the reduced null [103]. In that only included the wiring minimization constraint
addition to preserving the mean wiring length, one might was unable to recreate long distance connections of in-
also wish to preserve the full edge length distribution dividual connectomes in humans [8, 11, 111]. Successive
by, for example, (1) fitting a function to the relation- generative models were then added that attempted to
ship between the mean and variance of edge weights and recreate certain aspects of topology in addition to these
their distances, (2) removing the effect of that relation- geometric constraints [111]. The models that performed
ship from the data, (3) randomly rewiring the network, the best were those that preserved homophilic attrac-
and (4) adding the effect back into the rewired network tion such that connections preferentially formed between
[102]. nodes that had similar connection profiles [111]. Contin-
To complement insights obtained from edge swapping ued advancement of generative network models, and in-
algorithms, one can also construct null model networks clusion of additional biological features such as bilateral
by stipulating a wiring rule a priori while fixing the symmetry, serves as an exciting approach to test mech-
locations of nodes within the embedded system. In anistic predictions about how network topology forms in
this vein, studies have fruitfully used null models based spatially embedded neural systems.
on minimum spanning tree and greedy triangulation
methods [105–107]. A minimum spanning tree is a graph
that connects all of the nodes in a network such that FUTURE DIRECTIONS
the sum of the total edge weights is minimal. To extend
this notion to spatial networks, one can preserve the The spatial embedding of the brain is an impor-
true geographic locations of all nodes in the empirical tant driver of its connectivity, which in turn directly
network and compute the minimum spanning tree on constrains neural function and by extension behavior.
the matrix of Euclidean distances between all node pairs Emerging tools from network science can be used to as-
[91]. Representing the opposite extreme is the greedy sess this spatial architecture, thereby allowing investiga-
triangulation model, which is particularly relevant for tors to test more specific hypotheses about brain network
the study of empirical networks that are planar (lying structure and dynamics. While we envision that the use
along a surface) as opposed to non-planar (lying within of these tools will significantly expand our understand-
a volume). In the context of neural systems, planar ing, it is also important to acknowledge their limitations.
or planar-like networks are observed in vasculature, In particular, the majority of currently available network
and in thinned models of cortex that either consider a tools make the simplifying assumption that all of the re-
single lamina or a coarse-grained model collapsing across lations of interests are strictly dyadic in nature, and ex-
laminae [108, 109]. To construct a greedy triangula- ist between inherently separable components [112]. In
tion null model, one can preserve the true geographic truth, however, features that arise from spatial embed-
locations of all nodes in the empirical network and ding can also manifest as continuous or overlapping maps
iteratively connect pairs of nodes in ascending order and gradients [53], motivating the use of tools from ap-
of their distance while ensuring that no edges cross. plied algebraic topology that can account for non-dyadic
After constructing such minimally and maximally wired interactions (Box 2). As the field moves forward, we
null models, one can calculate relative measures of envision existing and yet-to-be-developed tools for char-
wiring length, physical efficiency, physical betweenness acterizing the spatial embedding of brain networks will
centrality, and community structure by normalizing the prove critical for our understanding of network processes
empirical values by those expected in the two extremes underlying cognition, and alterations to those processes
[91]. accompanying disease.

Generative Network Models. Generative network


models can be used to test hypotheses about the rules
guiding network growth, development, and evolution
[110]. Often, an ensemble of generative models are con-
structed, and summary graph statistics from the empir-
8

FIG. 3. Community structure obtained with spatially embedded and non-embedded null models. (A) A schematic
of a spatially-informed null model. The model expects fewer long distance connections than short distance connections. (B)
A schematic of the anticipated difference between the spatial null model and the Newman-Girvan (NG) null model; spatial
communities will have longer distance connections and not capture clustering of spatially nearby regions. (C) Differences in
the association matrices between the two models. Positive (negative) numbers indicate when two nodes were more likely to
be co-assigned to the same module under the spatial (NG) model. (D) The difference in the participation coefficient between
the spatial and NG models. The participation coefficient quantifies how diverse a node’s connections are across modules. (E)
The difference in spatial spread of modules in both models; the spatially embedded model tends to produce modules that cover
larger distances. This figure was adapted with permission from [99].
9

BOX 1: SIMPLE NETWORK STATISTICS formalisms. Here, we describe an alternative approach to


studying structure in networks that relies on tools devel-
In a network representation of the brain, units rang- oped in the field of applied algebraic topology, specifically
ing from neurons or neuronal ensembles to nuclei and persistent homology [115–117]. Persistent homology can
areas are represented as network nodes and unit-to-unit be used to study intrinsically mesoscale structures called
interactions ranging from physical connections to statis- cycles and cliques [118]. Cliques are all-to-all connected
tical similarities in activity time series are represented as subsets of nodes in a network. The presence of many,
network edges. The architecture of the network can be large cliques indicates many highly connected units are
quantitatively characterized using statistics from graph present in the network [119]. Cycles are looped patterns
theory [113]. Here, we mathematically define some of the of cliques which may enclose a cavity, or topological void,
topological statistics mentioned elsewhere in this paper. within the network. Cliques and cavities by definition
reside within a binary graph, however one can expand a
• Degree and Strength. The degree of a node is the weighted network into a sequence of binary graphs via
number of connections it has. In a binary graph iterative thresholding [45, 120]. Then using persistent
encoded in the adjacency matrix A, where two re- homology one can track the birth, persistence, and death
gions i and j are connected if Aij = 1, and not of cavities along this sequence which gives a wholistic
connected
P if Aij = 0, then the degree ki is defined insight into the global network (Fig. Box 2, panel A).
as ki = i,j∈N Aij , where N is the set of all nodes. In random graphs, the number of births and deaths
In a weighted graph, where Aij is the strength of across thresholds follows a characteristic pattern [44]. At
the connection between nodes P i and j, then the high thresholds and low edge density, a few low dimen-
strength si is defined as si = i,j∈N Aij . sional cavities exist, while at low thresholds and high
• Path Length and Network Efficiency. The term edge density, more high-dimensional cavities exist (Fig.
path length frequently refers to the average length Box 2, panel B) [44, 121, 122]. Interestingly, geometric
of the shortest path in a network. The short- graphs – which can be used to instantiate spatial con-
est path between any two nodes is given by the straints on the topology – show a markedly different dis-
path requiring the fewest hops. The network ef- tribution. There are many low dimensional cavities, and
ficiency is given by the inverse of the harmonic fewer cavities with increasing dimension[43, 44, 121, 123]
mean of the shortest path length. To be pre- (Fig. Box 2, panel C). This general pattern has been
cise, we can write the path length of node i as recapitulated in functional networks constructed from
firing of hippocampal neurons, indicating a geometric
P
1 i∈N,j6=i di,j
P
Li = n i∈N n−1 , where di,j is the short-
rather than random nature to neuronal co-firing [45].
est path length between two nodes and n is the
Furthermore, the persistent homology of human connec-
number of nodes.
tomes [42] and rat microcircuits [124] is distinct from that
• Clustering Coefficient. The clustering coefficient expected in a minimally wired null model. In humans,
can be used to quantify the fraction of a node’s the presence of widespread subcortical connections leads
neighbors that are also neighbors with each other. to more cavities being born at high densities [42], while
Specifically, the clustering coefficient of node i is rat microcircuits display more high dimensional cavities
given by Ci = n1 i∈N ki (k2tii−1) , where ti is the
P
in general [124]. Further investigation into how wiring
number of triangles around node i [46]. The clus- rules shape the topology of neural systems may shed light
tering coefficient of the network is the average clus- on how the brain’s spatial embedding shapes connectivity
tering coefficient of all of its nodes. across scales and species.

• Modularity. While several modularity quality


P func-
tions exist, the most common is Q = ij [Aij − BOX 3: CONTROL THEORY
γPij ]δ(ci cj ), where Q is the modularity quality in-
dex, Pij is the expected number of connections be-
Network control theory provides a potentially power-
tween node i and node j under a specified null
ful approach for modeling neural dynamics [125]. Hail-
model, δ() is the Kroenecker delta, and ci indicates
ing from physics and engineering, network control theory
the community assignment of node i. The tuning
characterizes a complex system as composed of nodes
parameter γ ranges from (0, ∞) and can be used to
interconnected by edges, and then specifies a model of
tune the average community size.
network dynamics to determine how external input af-
fects the nodes’ time-varying activity [126]. Most studies
of network control in neural systems stipulate a linear,
BOX 2: APPLIED ALGEBRAIC TOPOLOGY time-invariant model of dynamics: ẋ(t) = Ax(t) + Bu(t),
where x is some measure of brain state, A is a struc-
While graph theory is a powerful and accessible frame- tural connectivity matrix, u is the input into the system
work for analyzing complex networks, complementary in- (exogenous stimulation, or endogenous input from other
formation can be gained by using different mathematical brain regions), and B selects the control set, or regions
10

Local Topology Path, distances Module detection,


(degree, clustering) centrality

FIG. 4. Schematic of network measures. (A) An illustration of network space (topology) and physical space (geometry).
The network is embedded into a physical space, indicated by the x- and y-axes. The topological and physical distances between
the nodes are not necessarily related. (B) The network representation enables the calculation of local, mesoscale, and global
features to describe the pattern of connections in topological space (as shown here) as well as the pattern of connections in
physical space (as we describe in the main text). This figure was adapted with permission from [1] and from [114].

to provide input to [127, 128]. Assuming this model of ses [133]. Moreover, both average and modal control-
dynamics, one can calculate the control energy required lability increase across development and are correlated
to reach specific brain states, which can be used a state with cognitive performance generally [134] and executive
dependent measure of the efficiency of control [129, 130]. function specifically [136]. The manner in which network
Control theory can also posit control metrics that quan- control tracks individual differences reflects the fact that
tify how efficiently a node would drive the brain to vari- the capacity for a network to enact control is dependent
ous states. Two commonly used metrics are average con- upon its topology [137, 138]. Further efforts are needed
trollability and modal controllability [131]. When every to distill exactly how spatial embedding and wiring con-
node is included in the control set, average controllabil- straints impinge on that control capacity, and how it is
ity is proportional to the average energy required to drive altered in psychiatric disorders [132] and neurological dis-
the node to any state [132]. Conversely, modal controlla- ease [139].
bility is high in nodes where a small input will result in
large perturbations to all eigenmodes of the system, and
is interpreted to be high in nodes that can easily drive BOX 4: OUTSTANDING QUESTIONS
the brain to hard-to-reach states [133–135].
If these properties are important for helping the brain • How do spatially constrained developmental pro-
transition between states, one would expect them not cesses constrain the formation of cycles in brain
to be randomly distributed across the cortex, but to networks?
be clustered into spatially constrained, functionally rele-
vant systems. More specifically, one might expect func- • What are the aspects of connectome topology that
tional systems that drive the brain to many accessible remain unexplained by wiring minimization or com-
states, such as the default mode system, to have high munication efficiency and thus may arise from more
average controllability, while regions that drive the brain subtle rules?
to hard-to-reach, cognitively demanding states (executive
control areas) to have high modal controllability. Data • How do the structural topologies that arise from
from healthy human adults supports these two hypothe- physical growth rules support functional gradients?
11

• To what extent can we link macro-scale structural


A topology with small scale developmental rules?
• Can a deeper understanding of connectome devel-
opment be used to help identify new biomarkers for
network diseases?
• What additional rules can be incorporated into gen-
B erative models of the brain to recapitulate its topol-
ogy?
• How can frontiers in network science help charac-
terize non-dyadic relationships in the brain?

GLOSSARY

• adjacency matrix: The adjacency matrix of a


graph is an N × N matrix, where N is the number
of nodes. Each element Aij of the matrix gives the
strength of the edge between nodes i and j.
C • cycle: In applied algebraic topology, a cycle is
an empty space (or lack of edges) in a graph sur-
rounded by all-to-all connected subgraphs of the
same dimension. The dimension here refers to the
number of nodes included in each all-to-all con-
nected subgraph.
• edge: From the perspective of graph theory, an
edge is a connection between nodes. From the per-
spective of neuroscience, an edge is a statistical de-
pendency (functional) or estimated physical con-
nection (structural) between nodes.
• geometry: The geometry of a network reflects fea-
tures of a graph in the context of physical space.
• hub: A central node in the network, typically hav-
ing many connections.
FIG. 5. Applied algebraic topology. (A) An illustration
of thresholding a weighted network across different densities • node: From the perspective of graph theory, a
(ρ). At ρ1 , a cavity of dimension 1 is born (shown in yellow), node is the unit where edges connect in the graph.
which then dies at ρ3 . (B) The characteristic pattern of births
From the perspective of neuroscience, a node is a
and deaths (called Betti curves) for cycles of dimension 1
(yellow), dimension 2 (red), and dimension 3 (blue) from a
brain region, neuron, or protein whose interactions
random network. The cliques of each dimension are shown one wishes to understand.
near each corresponding Betti curve for reference. (C) The
• topology: The quantification of features of a
same pattern, but for geometric networks. Different lines of
the same color indicate different dimensions of embedding. graph in the context of space defined by the graph
This figure was adapted with permission from [45]. itself, without respect to any physical embedding.

HIGHLIGHTS

• What is the precise relationship between invari- • The physical embedding of neural systems imposes
ant features of brain activity and the underlying constraints on the possible patterns of connections
anatomical structure? that form, and in turn on the possible repertoire
of functional motifs. Utilizing new tools from net-
• How does the development of the connectome de- work science that account for these constraints can
termine spatial progression of activity through the help researchers understand fundamentals of brain
cortex in health and disease? dynamics in health and disease.
12

• Prominent, competing rules guiding the forma- acknowledge support from the John D. and Catherine T.
tion of brain networks include the minimization of MacArthur Foundation, the Alfred P. Sloan Foundation,
wiring cost, and the maximization of communica- the ISI Foundation, the Paul Allen Foundation, the Army
tion efficiency. These competing mechanisms lead Research Laboratory (W911NF-10-2-0022), the Army
to networks with high local clustering with sparse Research Office (Bassett-W911NF-14-1-0679, Grafton-
long distance connections. W911NF-16-1-0474, DCIST-W911NF-17-2-0181), the
Office of Naval Research, the National Institute of
• Recent work suggests that intrinsic functional con- Mental Health (2-R01-DC-009209-11, R01-MH112847,
nectivity varies along dimensions that are tightly R01-MH107235, R21-M MH-106799), the National
linked to the spatial embedding of the brain and Institute of Child Health and Human Development
the topological properties that arise in the pres- (1R01HD086888-01), National Institute of Neurological
ence of spatial constraints. Similarly, these topo- Disorders and Stroke (R01 NS099348), and the National
logical properties show widespread changes in the Science Foundation (BCS-1441502, BCS-1430087, NSF
context of the network diseases such as epilepsy and PHY-1554488 and BCS-1631550). The content is solely
schizophrenia. the responsibility of the authors and does not necessarily
• A rich, and continually growing repertoire of statis- represent the official views of any of the funding agencies.
tics, null models, and generative models exist to aid
researchers in testing focused hypotheses about the
role of physical embedding in observed neural phe-
nomena.

ACKNOWLEDGMENTS

We would like to thank Ann Sizemore for her help with


Box 2: Applied Algebraic Topology. D.S.B. and J.S.

[1] D. S. Bassett and O. Sporns, Nature Neuroscience 20, [15] C. Cherniak, The Journal of Neuroscience 14, 2418
353 (2017), arXiv:0106096v1 [arXiv:cond-mat]. (1994).
[2] D. S. Bassett, P. Zurn, and J. I. Gold, Nature Reviews [16] C. Cherniak, Z. Mokhtarzada, and R. Rodriguez-
Neuroscience In Press (2018). Esteban, Evolution of Nervous Systems 1, 269 (2010).
[3] J. D. Medaglia, M. E. Lynall, and D. S. Bassett, J Cogn [17] A. Raj and Y.-H. Chen, PLoS ONE 6 (2011), 10.1371/.
Neurosci 27, 1471 (2015). [18] M. Ria Ercsey-Ravasz, N. T. Markov, C. Lamy, D. C.
[4] S. E. Petersen and O. Sporns, Neuron 88, 207 (2015). Van Essen, K. Knoblauch, Z. N. Toroczkai, and
[5] C. Ducruet and L. Beauguitte, Networks and Spatial H. Kennedy, Neuron 80, 184 (2013).
Economics 14, 297 (2014). [19] H. F. Song, H. Kennedy, and X.-J. Wang, Proceedings
[6] M. Barthélemy, Physics Reports 499, 1 (2011). of the National Academy of Sciences 111, 16580 (2014).
[7] E. Bullmore and O. Sporns, Nature Reviews Neuro- [20] D. S. Bassett, D. L. Greenfield, A. Meyer-Lindenberg,
science 13, 336 (2012), arXiv:NIHMS150003. D. R. Weinberger, S. W. Moore, and E. T. Bull-
[8] Y. Chen, S. Wang, C. C. Hilgetag, and C. Zhou, PLoS more, PLoS Comput Biol 6 (2010), 10.1371/jour-
computational biology 13, e1005776 (2017). nal.pcbi.1000748.
[9] L. French and P. Pavlidis, PLoS Comput Biol 7 (2011), [21] M. Kaiser and C. C. Hilgetag, PLoS Computational Bi-
10.1371/journal.pcbi.1001049. ology 2, 0805 (2006), arXiv:0607034 [q-bio].
[10] M. Rubinov, R. J. F. Ypma, C. Watson, and E. T. Bull- [22] A. Zalesky, A. Fornito, G. F. Egan, C. Pantelis, and
more, Proceedings of the National Academy of Sciences E. T. Bullmore, Human Brain Mapping 33, 2535 (2012),
112, 10032 (2015). arXiv:arXiv:1011.1669v3.
[11] P. E. Vertes, A. F. Alexander-Bloch, N. Gogtay, J. N. [23] A. Avena-Koenigsberger, B. Misic, and
Giedd, J. L. Rapoport, and E. T. Bullmore, Proceed- O. Sporns, Nature Publishing Group 19 (2018),
ings of the National Academy of Sciences 109, 5868 10.1038/nrn.2017.149.
(2012). [24] Y. Chen, S. Wang, C. C. Hilgetag, and C. Zhou, PLoS
[12] S. B. Laughlin and T. J. Sejnowski, Science 108, 22 Comput Biol 9 (2013), 10.1371/journal.pcbi.1002937.
(2002). [25] V. Nicosia, P. E. Vertes, W. R. Schafer, V. Latora, and
[13] J. E. Niven and S. B. Laughlin, Journal of Experimental E. T. Bullmore, Proceedings of the National Academy
Biology (2008), 10.1242/jeb.017574. of Sciences 110, 7880 (2013), arXiv:1304.7364.
[14] J. M. L. Budd and Z. F. Kisvárday, Frontiers in Neu- [26] M. Kaiser, Trends in Cognitive Sciences 21, 703 (2017).
roanatomy 6 (2012), 10.3389/fnana.2012.00042. [27] M. M. Sperry, Q. K. Telesford, F. Klimm, and D. S.
Bassett, Journal of Complex Networks 5, 199 (2017).
13

[28] F. Klimm, D. S. Bassett, J. M. Carlson, P. J. Mucha, [53] S. Jbabdi, S. N. Sotiropoulos, and T. E. Behrens,
and C. C. Hilgetag, PLoS Comput Biol 10 (2014), Current Opinion in Neurobiology 23, 207 (2013),
10.1371/journal.pcbi.1003491. arXiv:NIHMS150003.
[29] J. A. Pineda-Pardo, K. Martinez, A. B. Solana, J. A. [54] J. M. Huntenburg, P. L. Bazin, and D. S. Margulies,
Hernandez-Tamames, R. Colom, and F. del Pozo, Brain Trends in Cognitive Sciences 22, 21 (2018).
Topogr 28, 187 (2015). [55] D. S. Margulies, S. S. Ghosh, A. Goulas, M. Falkiewicz,
[30] A. J. Sadovsky and J. N. MacLean, J Neurosci 34, 7769 J. M. Huntenburg, G. Langs, G. Bezgin, S. B. Eick-
(2014). hoff, F. X. Castellanos, M. Petrides, E. Jefferies, and
[31] P. Christie and D. Stroobandt, IEEE Trans. on VLSI J. Smallwood, Proceedings of the National Academy of
Systems 8, 639 (2000). Sciences 113, 12574 (2016), arXiv:arXiv:1408.1149.
[32] F. Alcalde Cuesta, P. Gonzalez Sequeiros, and [56] M. P. van den Heuvel and O. Sporns, Trends in Cogni-
A. Lozano Rojo, Sci Rep 7, 5378 (2017). tive Sciences 17, 683 (2013).
[33] R. F. Betzel and D. S. Bassett, Pro- [57] H. Lu, Q. Zou, H. Gu, M. E. Raichle, E. A. Stein, and
ceedings of the National Academy of Sci- Y. Yang, Proceedings of the National Academy of Sci-
ences (2018), 10.1073/pnas.1720186115, ences 109, 3979 (2012).
http://www.pnas.org/content/early/2018/05/07/1720186115.full.pdf.
[58] A. Mitra, A. Z. Snyder, T. Blazey, and M. E. Raichle,
[34] J. A. Henderson and P. A. Robinson, Physical Review Proc Natl Acad Sci 17112, 2235 (2015).
Letters 107 (2011), 10.1103/PhysRevLett.107.018102. [59] M. E. Raichle, Annual Review of Neuroscience 38, 433
[35] H. J. Park and K. Friston, Science 342 (2013), (2015), arXiv:0402594v3 [arXiv:cond-mat].
10.1126/science.1238411. [60] A. Mitra, A. Z. Snyder, C. D. Hacker, and M. E.
[36] O. Sporns, D. R. Chialvo, M. Kaiser, and C. C. Hilge- Raichle, Journal of Neurophysiology 111, 2374 (2014).
tag, Trends in Cognitive Sciences 8, 418 (2004). [61] A. Mitra, A. Kraft, P. Wright, B. Acland, A. Z. Sny-
[37] M. P. van den Heuvel, R. S. Kahn, J. Goni, and der, Z. Rosenthal, L. Czerniewski, A. Bauer, L. Snyder,
O. Sporns, Proceedings of the National Academy of Sci- J. Culver, J.-M. Lee, and M. E. Raichle, Neuron (2018),
ences 109, 11372 (2012). https://doi.org/10.1016/j.neuron.2018.03.015.
[38] J. Seidlitz, F. Váša, M. Shinn, R. Romero-Garcia, [62] A. Mitra, A. Z. Snyder, E. Tagliazucchi, H. Laufs, and
K. J. Whitaker, P. E. Vértes, K. Wagstyl, P. Kirk- M. E. Raichle, eLife 4, 1 (2015).
patrick Reardon, L. Clasen, S. Liu, A. Messinger, D. A. [63] U. Braun, A. Schaefer, R. F. Betzel, H. Tost, A. Meyer-
Leopold, P. Fonagy, R. J. Dolan, P. B. Jones, I. M. Lindenberg, and D. S. Bassett, Neuron 97, 14 (2018).
Goodyer, A. Raznahan, and E. T. Bullmore, Neuron [64] C. J. Stam, Nat Rev Neurosci 15, 683 (2014).
97, 231 (2018). [65] D. S. Bassett, C. H. Xia, and T. D. Satterthwaite, Biol
[39] S. Varier and M. Kaiser, PLoS Comput Biol 7 (2011), Psychiatry Cogn Neurosci Neuroimaging S2451-9022,
10.1371/journal.pcbi.1001044. 30079 (2018).
[40] C. C. Hilgetag and A. Goulas, Brain Struct Funct 221, [66] V. K. Jirsa, W. C. Stacey, P. P. Quilichini, A. I.
2361 (2016). Ivanov, and C. Bernard, Brain 137, 2210 (2014),
[41] R. Perin, T. K. Berger, and H. Markram, Proceedings arXiv:arXiv:1011.1669v3.
of the National Academy of Sciences 108, 5419 (2011), [67] F. Wendling, F. Bartolomei, J. J. Bellanger, J. Bourien,
arXiv:arXiv:1408.1149. and P. Chauvel, Brain 126, 1449 (2003).
[42] A. E. Sizemore, C. Giusti, A. Kahn, J. M. Vettel, R. F. [68] A. C. Chamberlain, J. Viventi, J. A. Blanco, D. H. Kim,
Betzel, and D. S. Bassett, Journal of Computational J. A. Rogers, and B. Litt, Proceedings of the Annual
Neuroscience 44, 1 (2017), arXiv:1608.03520. International Conference of the IEEE Engineering in
[43] M. Kahle, “Random Geometric Complexes,” (2018). Medicine and Biology Society, EMBS , 761 (2011).
[44] M. Kahle, Discrete Mathematics 309, 1658 (2009), [69] J. Viventi, D. H. Kim, L. Vigeland, E. S. Frechette, J. A.
arXiv:0605536 [math]. Blanco, Y. S. Kim, A. E. Avrin, V. R. Tiruvadi, S. W.
[45] C. Giusti, E. Pastalkova, C. Curto, and V. It- Hwang, A. C. Vanleer, D. F. Wulsin, K. Davis, C. E.
skov, Proc Natl Acad Sci U S A 112, 13455 (2015), Gelber, L. Palmer, J. Van Der Spiegel, J. Wu, J. Xiao,
arXiv:1502.06172. Y. Huang, D. Contreras, J. A. Rogers, and B. Litt,
[46] D. J. Watts and S. H. Strogatz, Nature 393, 440 (1998), Nature Neuroscience 14, 1599 (2011).
arXiv:0803.0939v1. [70] L. R. González-Ramı́rez, O. J. Ahmed, S. S. Cash, C. E.
[47] C. T. Shih, O. Sporns, S. L. Yuan, T. S. Su, Y. J. Lin, Wayne, and M. A. Kramer, PLoS Computational Biol-
C. C. Chuang, T. Y. Wang, C. C. Lo, R. J. Greenspan, ogy 11 (2015), 10.1371/journal.pcbi.1004065.
and A. S. Chiang, Curr Biol 25, 1249 (2015). [71] K. A. Richardson, S. J. Schiff, and B. J. Gluck-
[48] M. Kaiser and C. C. Hilgetag, Phys Rev E Stat Nonlin man, Physical Review Letters 94 (2005), 10.1103/Phys-
Soft Matter Phys 69, 036103 (2004). RevLett.94.028103.
[49] D. S. Bassett and E. T. Bullmore, Neuroscientist 23, [72] M. Ursino and G. E. La Cara, Journal of Theoretical
499 (2017), arXiv:1608.05665. Biology 242, 171 (2006).
[50] A. Avena-Koenigsberger, J. Goni, R. Sole, and [73] L. E. Martinet, G. Fiddyment, J. R. Madsen, E. N.
O. Sporns, Journal of The Royal Society Interface 12, Eskandar, W. Truccolo, U. T. Eden, S. S. Cash,
20140881 (2014). and M. A. Kramer, Nature Communications 8 (2017),
[51] L. H. Scholtens, R. Schmidt, M. A. de Reus, and M. P. 10.1038/ncomms14896.
van den Heuvel, J Neurosci 34, 12192 (2014). [74] J. M. Beggs and N. Timme, Frontiers in Physiology 3
[52] M. P. van den Heuvel, L. H. Scholtens, L. Feldman Bar- JUN, 1 (2012).
rett, C. C. Hilgetag, and M. A. de Reus, J Neurosci 35, [75] M. Massimini, Journal of Neuroscience 24, 6862 (2004).
13943 (2015).
14

[76] D. Rubino, K. A. Robbins, and N. G. Hatsopoulos, [100] P. Expert, T. Evans, V. D. Blondel, and R. Lam-
Nature Neuroscience 9, 1549 (2006). biotte, Proceedings of the National Academy of Sciences
[77] K. Takahashi, S. Kim, T. P. Coleman, K. A. Brown, (2010), 10.1073/pnas.1018962108, arXiv:1012.3409.
A. J. Suminski, M. D. Best, and N. G. Hatsopoulos, Na- [101] M. Sarzynska, E. A. Leicht, G. Chowell, M. A. Porter,
ture Communications 6 (2015), 10.1038/ncomms8169. D. Bassett, and M. Sarzynska, Journal of Complex Net-
[78] K. Takahashi, M. Saleh, R. D. Penn, and N. G. Hat- works (2015), 10.1093/comnet/cnv027.
sopoulos, Frontiers Human Neuroscience 5, 40 (2011). [102] J. A. Roberts, A. Perry, A. R. Lord, G. Roberts, P. B.
[79] H. Zhang, A. J. Watrous, A. Patel, and J. Jacobs, Neu- Mitchell, R. E. Smith, F. Calamante, and M. Breaks-
ron 98, 1269 (2018). pear, NeuroImage 124, 379 (2016).
[80] A. Benucci, R. A. Frazor, and M. Carandini, Neuron [103] D. Samu, A. K. Seth, and T. Nowotny, PLoS Computa-
55, 103 (2007). tional Biology 10 (2014), 10.1371/journal.pcbi.1003557.
[81] L. Douw, M. N. DeSalvo, N. Tanaka, A. J. Cole, H. Liu, [104] M. Wiedermann, J. F. Donges, J. Kurths, and R. V.
C. Reinsberger, and S. M. Stufflebeam, Annals of Clin- Donner, Physical Review E 93 (2016), 10.1103/Phys-
ical and Translational Neurology 2, 338 (2015). RevE.93.042308, arXiv:1509.09293.
[82] L. Bonilha, T. Nesland, G. U. Martz, J. E. Joseph, M. V. [105] X. Cui, J. Xiang, H. Guo, G. Yin, H. Zhang, F. Lan,
Spampinato, J. C. Edwards, and A. Tabesh, J Neurol and J. Chen, Front Comput Neurosci 12, 31 (2018).
Neurosurg Psychiatry 83, 903 (2012). [106] T. W. P. Janssen, A. Hillebrand, A. Gouw, K. Gelade,
[83] M. N. DeSalvo, L. Douw, N. Tanaka, C. Reinsberger, R. Van Mourik, A. Maras, and J. Oosterlaan, Clin Neu-
and S. M. Stufflebeam, Radiology (2014), 10.1148/ra- rophysiol 128, 2258 (2017).
diol.13131044. [107] D. J. Smit, E. J. de Geus, M. Boersma, D. I. Boomsma,
[84] V. K. Jirsa, T. Proix, D. Perdikis, M. M. Woodman, and C. J. Stam, Brain Connect 6, 312 (2016).
H. Wang, C. Bernard, C. Bı̈nar, P. Chauvel, F. Bar- [108] P. Blinder, P. S. Tsai, J. P. Kaufhold, P. M. Knutsen,
tolomei, F. Bartolomei, M. Guye, J. Gonzalez-Martinez, H. Suhl, and D. Kleinfeld, Nat Neurosci 16, 889 (2013).
and P. Chauvel, NeuroImage 145, 377 (2017). [109] F. Schmid, P. S. Tsai, D. Kleinfeld, P. Jenny, and
[85] D. S. Bassett, E. Bullmore, B. A. Verchinski, V. S. Mat- B. Weber, PLoS Comput Biol 13, e1005392 (2017).
tay, D. R. Weinberger, and A. Meyer-Lindenberg, Jour- [110] R. F. Betzel and D. S. Bassett, J R Soc Interface 14,
nal of Neuroscience 28, 9239 (2008). 20170623 (2017).
[86] A. Griffa, P. S. Baumann, J. P. Thiran, and P. Hag- [111] R. F. Betzel, A. Avena-Koenigsberger, J. Goñi, Y. He,
mann, NeuroImage 80, 515 (2013). M. A. de Reus, A. Griffa, P. E. Vértes, B. Mišic, J. P.
[87] A. F. Alexander-Bloch, P. E. Vértes, R. Stidd, Thiran, P. Hagmann, M. van den Heuvel, X. N. Zuo,
F. Lalonde, L. Clasen, J. Rapoport, J. Giedd, E. T. Bull- E. T. Bullmore, and O. Sporns, NeuroImage 124, 1054
more, and N. Gogtay, Cerebral Cortex 23, 127 (2013). (2016), arXiv:1506.06795.
[88] M. Kaiser, NeuroImage 57, 892 (2011). [112] C. T. Butts, Science 325, 414 (2009),
[89] J. M. Duarte-Carvajalino, N. Jahanshad, C. Lenglet, arXiv:arXiv:1010.0725v1.
K. L. McMahon, G. I. De Zubicaray, N. G. Martin, M. J. [113] M. Rubinov and O. Sporns, NeuroImage 52, 1059
Wright, P. M. Thompson, and G. Sapiro, NeuroImage (2010).
59, 3784 (2012), arXiv:NIHMS150003. [114] J. O. Garcia, A. Ashourvan, S. F. Muldoon, J. M. Vettel,
[90] J. Buhl, J. Gautrais, R. V. Sole, P. Kuntz, S. Valverde, and D. S. Bassett, Proceedings of the IEEE 106, 846
J. L. Deneubourg, and G. Theraulaz, The European (2018).
Physical Journal B 42, 123 (2004). [115] C. Giusti, R. Ghrist, and D. S. Bassett, J Comput
[91] L. Papadopoulos, P. Blinder, H. Ronellenfitsch, Neurosci 41, 1 (2016).
F. Klimm, E. Katifori, D. Kleinfeld, and D. S. Bas- [116] A. Zomorodian and G. Carlsson, Discrete and Compu-
sett, arxiv 1612, 08058 (2018). tational Geometry 33, 249 (2005).
[92] S. Fortunato and D. Hric, Physics Reports 659, 1 [117] G. Carlsson, Bulletin of the American Mathematical So-
(2016). ciety 46, 255 (2009), arXiv:arXiv:1312.6184v5.
[93] K. W. Doron, D. S. Bassett, and M. S. Gazzaniga, [118] A. E. Sizemore, J. Phillips-Cremins, R. Ghrist, and
Proceedings of the National Academy of Sciences 109, D. S. Bassett, arXiv 1806, 05167 (2018).
18661 (2012). [119] P. Dotko, K. Hess, R. Levi, M. Nolte, M. Reimann,
[94] L. R. Chai, M. G. Mattar, I. A. Blank, E. Fedorenko, M. Scolamiero, K. Turner, E. Muller, and H. Markram,
and D. S. Bassett, Cereb Cortex 26, 4148 (2016). Frontiers in Computational Neuroscience 11, 48 (2016),
[95] X. He, D. S. Bassett, G. Chaitanya, M. R. Sperling, arXiv:1601.01580.
L. Kozlowski, and J. I. Tracy, Brain , 141 (2018). [120] G. Petri, M. Scolamiero, I. Donato, F. Vaccarino, and
[96] M. E. J. Newman and M. Girvan, Physical Review R. Lambiotte, PLoS ONE 8 (2013), 10.1371/.
E (2003), 10.1103/PhysRevE.69.026113, arXiv:0308217 [121] A. Sizemore, C. Giusti, and D. S. Bassett, Journal of
[cond-mat]. Complex Networks 5, 245 (2017), arXiv:1512.06457.
[97] R. F. Betzel, J. D. Medaglia, and D. S. Bassett, Nat [122] D. Horak, S. Maleti, and M. Rajkovi, Stat. Mech
Commun 9, 346 (2018). (2009), 10.1088/1742-5468/2009/03/P03034.
[98] L. Papadopoulos, J. G. Puckett, K. E. Daniels, and [123] O. Bobrowski and M. Kahle, Journal of Applied and
D. S. Bassett, Phys Rev E 94, 032908 (2016). Computational Topology 1, 10.1007/s41468-017-0010-0.
[99] R. F. Betzel, J. D. Medaglia, L. Papadopoulos, [124] P. Dotko, K. Hess, R. Levi, M. Nolte, M. Reimann,
G. Baum, R. Gur, R. Gur, D. Roalf, T. D. Satterth- M. Scolamiero, K. Turner, E. Muller, and H. Markram,
waite, and D. S. Bassett, Network Neuroscience (2016), Frontiers in Computational Neuroscience 1, 1 (2016),
arXiv:1608.01161. 1601.01580.
15

[125] S. J. Schiff, Neural Control Enginerring. The emerg- [133] S. Gu, F. Pasqualetti, M. Cieslak, Q. K. Telesford, A. B.
ing intersection between control theory and neuroscience Yu, A. E. Kahn, J. D. Medaglia, J. M. Vettel, M. B.
(MIT Press, 2011). Miller, S. T. Grafton, and D. S. Bassett, Nature Com-
[126] Y. Y. Liu, J. J. Slotine, and A. L. munications 6, 1 (2015), arXiv:1406.5197.
Barabási, Nature 473, 167 (2011), [134] E. Tang, C. Giusti, G. L. Baum, S. Gu, E. Pollock, A. E.
arXiv:/www.nature.com/nature/journal/v473/n7346/abs/10.1038- Kahn, D. R. Roalf, T. M. Moore, K. Ruparel, R. C. Gur,
nature10011-unlocked.html#supplementary- R. E. Gur, T. D. Satterthwaite, and D. S. Bassett,
information [http:]. Nature Communications 8 (2017), 10.1038/s41467-017-
[127] A. E. Bryson, IEEE Control Systems 16, 26 (1996). 01254-4, arXiv:1607.01010.
[128] G. Yan, P. E. Vertes, E. K. Towlson, Y. L. Chew, D. S. [135] E. Wu-Yan, R. F. Betzel, E. Tang, S. Gu, F. Pasqualetti,
Walker, W. R. Schafer, and A. L. Barabasi, Nature and D. S. Bassett, Journal of Nonlinear Science (2017),
550, 519 (2017). arXiv:1706.05117.
[129] S. Gu, R. F. Betzel, M. G. Mattar, M. Cieslak, P. R. [136] E. J. Cornblath, E. Tang, G. L. Baum, T. M. Moore,
Delio, S. T. Grafton, F. Pasqualetti, and D. S. Bassett, D. R. Roalf, R. C. Gur, R. E. Gur, F. Pasqualetti, T. D.
Neuroimage 148, 305 (2017). Satterthwaite, and D. S. Bassett, arXiv 1801, 04623
[130] R. F. Betzel, S. Gu, J. D. Medaglia, F. Pasqualetti, and (2018).
D. S. Bassett, Sci Rep 6, 30770 (2016). [137] T. Menara, V. Katewa, D. S. Bassett, and
[131] F. Pasqualetti, S. Zampieri, and F. Bullo, IEEE Trans- F. Pasqualetti, arXiv 1706, 05120 (2018).
actions on Control of Network Systems 1, 40 (2014). [138] J. Z. Kim, J. M. Soffer, A. E. Kahn, J. M. Vettel,
[132] J. Jeganathan, A. Perry, D. S. Bassett, G. Roberts, P. B. F. Pasqualetti, and D. S. Bassett, Nat Phys 14, 91
Mitchell, and M. Breakspear, NeuroImage Clinical In (2018).
Press (2018). [139] P. N. Taylor, J. Thomas, N. Sinha, J. Dauwels,
M. Kaiser, T. Thesen, and J. Ruths, Front Neurosci
9, 202 (2015).

You might also like