
Trends in Cognitive Sciences | OPEN ACCESS

Toward computational neuroconstructivism: a framework for developmental systems neuroscience

Duncan E. Astle,1,2,* Mark H. Johnson,3,4 and Danyal Akarca2

Brain development is underpinned by complex interactions between neural assemblies, driving structural and functional change. This neuroconstructivism (the notion that neural functions are shaped by these interactions) is core to some developmental theories. However, due to their complexity, understanding underlying developmental mechanisms is challenging. Elsewhere in neurobiology, a computational revolution has shown that mathematical models of hidden biological mechanisms can bridge observations with theory building. Can we build a similar computational framework yielding mechanistic insights for brain development? Here, we outline the conceptual and technical challenges of addressing this theory gap, and demonstrate that there is great potential in specifying brain development as mathematically defined processes operating within physical constraints. We provide examples, alongside the broader ingredients needed, as the field explores computational explanations of system-wide development.

Highlights

Neuroconstructivism proposes that, as cognitive processes develop, the functional roles of neuronal populations are shaped by their complex interactions with one another across development. However, methodological constraints have limited the study of this developmental complexity.

Recent methodological advances in computational modelling offer a new frontier for answering developmental systems neuroscience questions, without losing complexity. We term this frontier 'computational neuroconstructivism'. The central challenge for this new frontier is the integration of two hitherto disparate literatures: the first is computational models of cognitive development, the second is computational models that implement biophysical constraints.

We review the broad literature of computational models as applied to cognitive development and developmental neuroscience. Specifically, we highlight recent advances in generative network modelling that strongly resonate with this computational neuroconstructivist framework. We also highlight ways in which this modelling currently falls short of a full neuroconstructivist account and suggest ways that it could further bridge the gap between models of neural and cognitive development.

We highlight the key hallmarks of a good developmental systems neuroscience model and outline future areas of promise for implementing these within computational frameworks.

Bridging cognitive development and systems neuroscience

To understand the functional role of neural circuits we must understand the developmental history of those circuits. This is the guiding principle underpinning developmental systems neuroscience (see Glossary). Put simply, we must study trajectories of developmental change [1,2] rather than end-state outcomes alone, because the functional role of any neural assembly is shaped by its interactions with other assemblies, across development. This phenomenon is sometimes termed 'enbrainment' [3,4]. Over the past 30 years, it has been argued that this perspective should be central to how we understand cognitive development [3–9]. However, producing robust theoretical models that both incorporate this developmental perspective and encompass biological developmental mechanisms represents one of the main conceptual and technical challenges in the field. Broadly speaking, there are two contrasting approaches to addressing the gap between cognitive development and biological mechanisms. The first is to create models that are neurobiologically inspired and that can learn to perform cognitive tasks. The second, more recent, approach is to create models that seek to explain the developing neurobiology itself. Here, we briefly review the former, then introduce the latter approach. Ultimately, these two approaches attempt to bridge this theoretical gap from opposite starting points, but both offer key ingredients for building theory that integrates cognitive development and systems neuroscience.
Development is probabilistic not deterministic

One reason that theoretical progress has been challenging is because development is probabilistic, not deterministic (see deterministic epigenesis). A deterministic view of brain development is simply that genes code for neuroanatomy, neurochemistry, and neurophysiology, which in turn determine function, and this consequently enables particular cognitive processes to come 'online' at particular ages. According to this view, cognitive processes become adult-like when their underpinning neurobiology reaches maturity. This simple logic implicitly underpins much of cognitive neuroscience and the study of developmental disorders [10,11]; however, this 'deterministic epigenesis' cannot adequately explain a wide range of phenomena [12–17]. Instead, functional activity emerges before brain structures are fully mature, supporting gradually emerging cognition and behaviour. This functional activity, and the nascent computational processes it supports, themselves shape the ongoing development of the neural circuitry in a highly interdependent fashion [18,19]. In other words, developmental change [1,2] at any level of analysis is not in fact deterministic, but is instead sensitive to other ongoing developmental processes, both across and within levels. This alternative view, termed 'probabilistic epigenesis' [19,20], is a foundational principle of a constructivist approach to developmental cognitive and systems neurosciences. However, historically, it has been hard to instantiate this principle into any formal theory because it has been challenging to parameterise all of the potentially relevant interactions, or even to identify a framework with sufficient flexibility that it can be used to cycle through potential interactions to establish which are most important.

1 Department of Psychiatry, University of Cambridge, Cambridge, CB2 2QQ, UK
2 MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, UK
3 Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
4 Centre for Brain and Cognitive Development, Birkbeck, University of London, London, WC1E 7JL, UK
*Correspondence: Duncan.Astle@mrc-cbu.cam.ac.uk (D.E. Astle)

Trends in Cognitive Sciences, Month 2023, Vol. xx, No. xx https://doi.org/10.1016/j.tics.2023.04.009
© 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

This neuroconstructivist perspective has challenged the wider fields of cognitive psychology
and neuroscience because it implies that current cognition or behaviour are themselves an out-
come of dynamic interactions that occurred within the developmental past of that organism [8].
That is, the brain progressively becomes constructed via its self-organisation, as the individual in-
teracts with the world around them, with individual components of its construction gradually
specialising over developmental time. Consequently, the functional role of a brain area is shaped,
at least in part, by its interactions with, and inputs from, other areas. This process of functional
specification over development has been termed ‘interactive specialisation’ [5,6]. Interactive
specialisation is a domain-general framework that extends the concept of probabilistic epigenesis
and posits several phenomena that we should expect given this principle. First, connectivity be-
tween regions will be crucial for their functional development and, over time, the functional role of
a brain area cannot be considered independently from its connections. Second, networks
should show some degree of activity-dependent self-organisation. Third, rather than cognitive
processes simply becoming ‘operational’ when their corresponding neural circuits reach matu-
rity, there should be a dynamic flexibility in the functional role of neural processes over develop-
ment. As computational processes change, this may itself drive reorganisation of neural
networks. The field of developmental systems neuroscience has produced many findings that
can be viewed through this lens [21–27]. In summary, the principle of probabilistic epigenesis,
as implemented in frameworks, such as interactive specialisation, is an essential foundational in-
gredient for any plausible developmental theory [6,28,29].

Developmentally inspired computational models


Although frameworks, such as interactive specialisation, provide a narrative account for the prin-
ciples of developmental systems neuroscience, we are still without a formal computational model
that can be implemented, tested, and falsified. This critical step has proved difficult. From a practical perspective, the data needed to test any specific predictions will necessarily take years to collect. If predictions are not supported by the data, there could be any number of reasons: the predicted effects could have occurred earlier in development, may not yet have occurred, or the neuroimaging method may not be sensitive to the type of change predicted.

Elsewhere, in the domains of cognition, psychiatry, or neuroscience, a computational revolution is underway [30–35]. The mathematical formalisation of theory is having a transformative impact on
our ability to integrate different phenomena mechanistically at many different levels of analysis. For
example, deep convolutional neural networks can model, to some extent, visual cortex in both
humans and monkeys [36]. Dynamic causal models, which aim to infer the causal architecture of
distributed dynamical systems, have been used to isolate cellular channel dysfunction by modelling
how altered synaptic currents may lead to changes in observed electroencephalography recordings

[37]. Recurrent neural network models of prefrontal cortex have been integrated with models of phasic dopaminergic release to form a meta-reinforcement learning system that accounts for otherwise disparate observations [38]. These examples all showcase the value of computational modelling for unpacking complex interdependent neurobiological processes. This modelling perspective is important because it is always relatively easy to interpret neuroscience findings in the context of a hypothesis, simply because many hypotheses are sufficiently flexible that they can encompass many observations. By contrast, a formalised mathematical model can simulate data that would occur if the hypothesis (i.e., the model) were true. This is a subtle but meaningful difference [39]. With this formalism, the hypothesis is hard to vary, allowing it to be falsified by examining the resultant cross-validated estimate of its accuracy.

Glossary
Biophysically grounded: when a mathematical model incorporates constraints that exist within a physical biological system.
Computational neuroconstructivism: framework aiming to computationally formalise neuroconstructivist principles by parametrising interactions between neuronal populations over time.
Connectome: set of neural components and the connections between them. Connectomes are typically represented as a matrix or, equivalently, as a graph of nodes and their interconnecting edges. The edges can reflect functional or structural connectivity.
Deterministic epigenesis: genetic differences give rise to functional and/or structural brain differences and, in turn, differences in cognition and behaviour.
Developmental change: systematic, long-lasting, and nonhaphazard changes or improvements in children's abilities, such as thinking, perceiving, or reasoning. This change is due to underlying developmental mechanisms.
Developmental mechanism: active, dynamic, multistep, and time-dependent processes, which may occur in multiple steps or across multiple levels of analysis, resulting in a measurable change in cognition or behaviour.
Developmental systems neuroscience: subdiscipline of neuroscience, systems biology, and developmental neuroscience that studies the structure and function of neural circuits and systems as they develop over time; often very closely related to developmental cognitive neuroscience, a branch of cognitive neuroscience concerned with the intersection between cognitive processes and systems-level brain development.
Generative model: computational model that generates new data structures.
Generative network model (GNM): special case of a generative model that simulates the formation of networks, typically in an iterative fashion.
Homophily: preference for similarity to oneself. The nature of this similarity can take a range of forms. In social networks, homophily refers to the principle that contact between similar people occurs at a greater rate than between dissimilar people. In neural networks, it may refer to neurons with similar connectivity profiles.

Computational models for understanding developmental processes are not new. Indeed, there is a long history of attempts to capture developmental changes in performance with connectionist models [40–52]. These models were usually task specific and, although connectionist models are designed to mimic neuronal processes in a general sense, task learning is not constrained by specific biological processes. That is, although some models incorporated the insertion of new connections (e.g., [9,51]), mimicking synaptogenesis, they did not incorporate or attempt to fit actual neurobiological data from a participant. The rapid expansion of neuroimaging applications in paediatric populations meant that developmental systems neuroscience focused heavily on observed neurobiology, and computational theory per se became less of a priority. However, the dramatic growth in neuroimaging data in paediatric populations is no substitute for mechanistic accounts that can be formally tested.

One prominent computational model used by developmentalists has been the cascade–correlation (CC) neural network [53,54]. In these architectures, simple input–output networks are trained to solve a supervised problem. Learning proceeds initially by the adjusting of weights from input to output units. Eventually, performance stagnates due to the inherently limited computational capacity of the network. At this point, a new unit, which has been independently trained to maximise the magnitude of its correlation with the error of the network (trained from a pool of possible candidate units), becomes integrated within the network as a new hidden layer. This has been argued to resemble neurogenesis and synaptogenesis [55]. What results is a boost in the task performance of the network, because this new integrated neuron can solve a particular computation that the network previously could not [56]. This process occurs iteratively (in a cascade), building the organisation of the network over time to produce a final architecture constructed to solve the increasingly complex subproblems constituting the overall problem at hand. In this sense, these models draw upon neuroconstructivist principles, with cognitive routines being actively constructed over time, inspired by neurobiological principles.

Given that the CC network learns by progressively isolating errors and recruiting new units accordingly, unlike most canonical artificial intelligence (AI) architectures, there is no need to guess what size or topology the network needs in advance. Instead, CCs self-organise to solve problems effectively, in a way that limits redundancy. Early CCs could learn basic tasks more robustly [53], required fewer units [57], and needed less learning time [58] relative to benchmarks available at the time. The observation that independently trained components can be incorporated to boost performance motivated an extension of the CC architecture. This next generation of models was termed 'knowledge-based cascade correlations' (KBCCs) [59]. KBCCs can recruit whole subnetworks that have been previously trained to solve distinct problems [59]. As such, KBCCs use existing knowledge to aid learning, recruiting composed solutions without changing the recruited components [7,60]. While this has a clear resemblance to observations within human cognition [7,61], and clearly echoes neuroconstructivism, this is different from


more recent non-CC neural networks, in which compositionality is seen as problematic [62]. Canonical neural networks will fail to use prior knowledge to learn because they do not have any: they often start learning tabula rasa (i.e., commonly with randomly initialised weights and/or topology at the onset of learning a task) [50].

The CC and KBCC architectures provide excellent examples of how models designed to acquire specific cognitive skills can be neurobiologically principled. This can be contrasted with computational models of the biological processes themselves. Here, the goal is not to capture how a system might acquire a cognitive ability, but instead to provide some insight into the neurobiology itself. Within developmental neuroscience, many models are emerging that aim to provide insight into specific biological processes, such as synaptogenesis [63,64], plasticity [65,66], neuronal morphological change [67], consolidation [68], pruning [69,70], and axonal guidance [71]. These computational models provide neurobiological insights beyond what could be determined from the observed data alone [72], such as the role of adaptive synaptogenesis under metabolic constraints [63], the trade-off between cortical developmental pace and performance [64], and long-term learning-related changes in dendrites [65]. Computational models of biological processes, such as those listed above, broadly function by grounding otherwise mathematically defined processes within a biological context where there are finite resources and/or biologically reasoned axioms. As such, they retain the advantage of being quantitative [73,74], but can be tailored to specific biological problems rather than remaining mathematically abstract. In Figure 1, we provide a visualisation of computational models spanning cognitive development to neurobiological development. The continuum captures models that learn to perform tasks inspired by neurobiological principles, through to models that capture neurobiological mechanisms themselves but which, conversely, tend to be further removed from cognitive development.

Glossary (continued)
Interactive specialisation: hypothesis that the functional role of any brain region reflects its functional interactions with other brain regions over development.
Network: way of capturing any complex system that can be represented by subunits (sometimes called nodes or neurons) and the functional or structural relationships between those subunits (sometimes called edges).
Neural assembly: group of neurons that can be recruited together due to synaptic connections between them, usually as a consequence of a learning process.
Neuroconstructivism: theory of cognitive development stating that genomics, the environment, and ontogeny together interact, as individuals interact with their environment, to progressively construct the brain and cognitive processes. Neuroconstructivism emphasises that developmental context is crucial for gradually forming a more specialised adult brain.
Probabilistic epigenesis: hypothesis that developmental mechanisms are best understood as complex and dynamic interactions between genetics, brain, body, and environment over time.

A challenge within developmental systems neuroscience has been to identify a computational ap-
proach that incorporates, and indeed is grounded in, our growing knowledge of brain develop-
ment. Put simply, we need computational models that are biophysically grounded, but that
have the flexibility to incorporate multiple different types of neural data.

A biophysically grounded model of development

A biophysically grounded computational model for developmental systems neuroscience will need to be broader than the specific examples outlined previously. It will need to consider the complex constellation of connectivity, which mediates information transfer across the brain. The challenge is to extend biophysically grounded computational thinking beyond the well-defined scope of specific simple circuits or organisms to address broader neurodevelopmental questions, drawing on whole-brain data from systems neuroscience (e.g., diffusion imaging and genomics).

Figure 1. Computational models in development: from cognitive to neurobiological development. On the left are computational models, such as cascade–correlation (CC), which are fit to cognitive data to capture trajectories of change in cognition. These typically have neurobiological principles intrinsic to how they work. On the right are computational models which are fit to neurobiological data to capture specific phenomena, such as the time course of synaptogenesis followed by pruning. In the middle are models that may capture principles relevant to both cognitive and neurobiological development. Based on [7,47,53,59,63,69,71,75,76,83,108].

A new class of models, termed ‘generative network models’ (GNMs), may provide a good ex-
ample of how we can build a computational framework that incorporates some of these
neuroconstructivist principles (Box 1) [75–85]. GNMs (and their genomic-based variations
[84,86]) have large potential for studying human brain development. Most prominently, GNMs
have been considered in terms of an extension to the economic trade-off principle, which states
that the configuration of the brain can be accounted for by the economic balancing of wiring cost
minimisation and topological efficiency maximisation [87]. In the currently most widely used
implementation, single connections are added iteratively on a competitive basis, with the winner
emerging probabilistically according to the specific economic trade-off instantiated within the
model.

The parsimony of the GNM is one of its main advantages, because, given the model parameters
(which influence the trade-off of costs and values within the model), the network grows unsupervised,

Box 1. Constructing generative network models


Unlike other computational models (e.g., cascade correlation [53], recurrent [142], convolutional [143], or long short-term memory [144] networks) or network evolutionary algorithms (e.g., Barabási–Albert [145], Erdős–Rényi [146], Watts–Strogatz [116], or stochastic block models [147]), GNMs embed regions (e.g., parcellated grey-matter regions) within a
3D space mirroring the neuroanatomy of the brain (Figure IA). A GNM can be fit to any network, provided the data are
of sufficient quality. This is important because a GNM is fitting individual-level neurobiological data. Within the literature
these include human whole-brain functional neuroimaging [75], diffusion neuroimaging [76,78,84,86], primate cortical
connectomes [80], ex vivo mouse neuroimaging [85], and microstructural in vitro networks [100]. To date, studies in
humans have simulated network formation measured within cortex via neuroimaging. Other biophysical model constraints
can be incorporated, such as the empirical lengths of fibres [77] or genomic co-expression between regions [84,86]
(Figure IB). We return to this later.

Elsewhere in computational modelling, the term ‘generative’ has a range of subtly different meanings, including neural net-
works containing top-down connections to generate sensory data [148] or agents having internal generative models to ex-
plain data in terms of their causes [149]. In this context, ‘generative’ means to simulate the network connectivity formation.
The first instantiation of a GNM was designed to probe the development of cortical networks by minimising wiring lengths
alone (i.e., only minimising modelled costs), starting simulations in medioposterior locations, enabling anterolateral growth
[83]. A topological value term was later added to model human macroscopic functional networks [75]. Recent developments
have extended how these models can be tuned and compared [77,100]. Due to the parameterisation of the model, it is
possible to isolate relevant interactions over time. These models are falsifiable in that each simulation can be compared with
the observed network, using a variety of fit indices, which provide an estimate of how accurate the particular model is.

How do GNMs work? Instead of updating connections according to labelled data, as in supervised learning, connections form according to two simple components. The first component is modelled as the 'cost' of a connection forming. This biologically grounds the algorithm and is often represented as distances between brain regions. This is balanced against the second component, captured as the 'value' driving the preferential formation of a connection, which can be manipulated mathematically. The probability of connection formation depends on the product of these two components, and this updates iteratively over time as connections are formed (see [84] for an additive variation). Importantly, there is a distinction between the value of a possible connection and the probability of forming that connection, because the cost must also be considered.
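The cost–value product rule described above can be made concrete with a minimal growth loop. This is an illustrative sketch, not the implementation from the cited studies: the function names are ours, the value term shown is one common homophily measure (a normalised neighbourhood overlap, often called the matching index), and published GNMs typically wrap the cost and value terms in parameterised powerlaw or exponential kernels.

```python
import itertools
import random

def matching_index(adj, u, v):
    """Normalised neighbourhood overlap of u and v (a 'homophily' value term):
    shared neighbours divided by all neighbours, excluding u and v themselves."""
    nu = {k for k, w in enumerate(adj[u]) if w and k != v}
    nv = {k for k, w in enumerate(adj[v]) if w and k != u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def grow_network(dist, n_edges, eta=-2.0, gamma=0.3, seed=0):
    """Iteratively add edges with P(u, v) proportional to
    cost(u, v)**eta * value(u, v)**gamma, recomputing the value term
    (here, the matching index) after every added connection.

    dist: symmetric matrix of positive inter-regional distances (the 'cost').
    A negative eta penalises long connections; a positive gamma favours
    homophilic pairs. Returns a binary adjacency matrix.
    """
    rng = random.Random(seed)
    n = len(dist)
    adj = [[0] * n for _ in range(n)]
    for _ in range(n_edges):
        pairs = [(u, v) for u, v in itertools.combinations(range(n), 2)
                 if not adj[u][v]]
        # A small epsilon keeps zero-overlap pairs reachable.
        weights = [dist[u][v] ** eta * (matching_index(adj, u, v) + 1e-5) ** gamma
                   for u, v in pairs]
        u, v = rng.choices(pairs, weights=weights, k=1)[0]
        adj[u][v] = adj[v][u] = 1
    return adj

# Toy example: four regions on a line; distances are coordinate gaps.
coords = [0.0, 1.0, 2.0, 4.0]
dist = [[abs(a - b) for b in coords] for a in coords]
net = grow_network(dist, n_edges=3)
```

Because connections are sampled probabilistically, repeated runs with different seeds yield different networks from the same wiring rule, which is exactly the probabilistic unfolding emphasised in the main text.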

This algorithm biologically grounds the cost of network formation by pitting it against some other mathematically specified
property (or theoretically, properties) of the regions of the network. Crucially, this unfolds at the level of individual partici-
pants; for each participant, network formation is simulated by manipulating the parameters. The resulting network is then
mathematically compared with measured brain networks from that individual to assess their fit (Figure IC).
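The per-participant fitting procedure just described reduces to a parameter search: simulate a network under each candidate parameter pair, score the simulation against the observed network, and keep the best fit. A minimal grid-search sketch, in which `simulate` and `dissimilarity` stand in for a growth routine and a fit index (the function names and expected callables are our own illustrative assumptions):

```python
import itertools

def fit_gnm(observed, simulate, dissimilarity, etas, gammas):
    """Grid-search the two wiring parameters: for each (eta, gamma) pair,
    simulate a network and score it against the observed network with the
    supplied dissimilarity measure; return the best-fitting parameters."""
    best = None
    for eta, gamma in itertools.product(etas, gammas):
        score = dissimilarity(observed, simulate(eta, gamma))
        if best is None or score < best[0]:
            best = (score, eta, gamma)
    return {"energy": best[0], "eta": best[1], "gamma": best[2]}

# Toy stand-ins: in practice 'simulate' would be a GNM growth routine and
# 'dissimilarity' an energy function over network statistics.
best = fit_gnm(observed=6,
               simulate=lambda eta, gamma: eta * gamma,
               dissimilarity=lambda obs, sim: abs(obs - sim),
               etas=[1, 2, 3], gammas=[1, 2, 3])
print(best)  # → {'energy': 0, 'eta': 2, 'gamma': 3}
```

In published work the search is usually refined (e.g., Voronoi tessellation of the parameter space rather than a fixed grid), but the logic is the same: one best-fitting (η, γ) pair per participant.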


Figure I. Constructing generative network models (GNMs). (A) A simple way of incorporating biological constraints into generative network modelling is to place an observed network (left) into the 3D space in which it is embedded (right). The resulting Euclidean distance matrix thereby grounds the system with some representation of the physical costs of forming connections. Incorporating biophysical constraints (such as distance) into the model allows researchers to use mathematical explanations to bridge the study of complex biological systems. (B) Any number of biophysical constraints can be incorporated into a generative model, such as fibre lengths [77] (top) or genomic co-expression [84,86] (bottom). (C) The fitting procedure works by first specifying a generative model. Connections are then added iteratively to a network according to this model. The resulting simulation is compared with a single network, which can correspond to single-subject data. The objective of the procedure is to find a generative model that constructs networks with minimal statistical dissimilarity to the observation. Abbreviations: A, anterior; I, inferior; L, lateral; M, medial; P, posterior; S, superior.

without requiring external information to guide it. This reduces the complexity (termed 'compressibility' [79]) of development into just a small number of components, improving interpretability, which can in turn help link the biological and computational accounts. However, this does not necessarily mean that the model itself is simple. The costs and values matrices can contain detailed information about biophysical constraints (e.g., gene transcription data [84,86]) or topological features (e.g., community structure [76]). It is the parameterisation of their trade-off that is simple.

To provide a concrete example, recent work compared the ability of 13 different generative rules to
simulate the emergence of whole-brain organisation in a large sample of children [76]. Each of
these 13 different rules incorporated a different nongeometric topological ‘values’ term, such as the
neighbourhood overlap or the average node degree (the total number of connections a node has) be-
tween each pair of nodes. By mathematically specifying the type of information available to the algo-
rithm, the authors were able to systematically test which types of information were needed for realistic
networks to be simulated (Figure 2A). An energy equation [77] then provides a quantitative test of how
well each simulation is able to recapitulate the global properties of children’s whole-brain structural
networks. There was a clear winner: rules based upon the matching connectivity profile of the
nodes, termed ‘homophily’, which we revisit below in more depth, provided the closest approxima-
tion of real brain networks. When this principle is traded off against a distance penalty, highly plausible
networks emerge. Interestingly, while these rules perform better than the alternatives, they conversely

6 Trends in Cognitive Sciences, Month 2023, Vol. xx, No. xx


Trends in Cognitive Sciences
OPEN ACCESS

Trends in Cognitive Sciences

Figure 2. Evaluating competing developmental hypothesises with generative models. Given that hypotheses can
be formalised mathematically (in this case, wiring hypotheses), they can be evaluated via a common statistical benchmark. In
this example, competing hypotheses are evaluated according to an energy equation [77], which is defined as the maximum of
a set of Kolmogorov–Smirnov distances (a measure of dissimilarity) across a range of regional network statistics. Therefore,
the lower the energy value the better the fit. (A) Each boxplot represents a candidate hypothesis for how wiring may occur.
The y-axis represents the energy that is attained under this hypothesis. In this instance, each data point is the energy of sim-
ulated network fit to a child’s macroscopic structural connectome from the CALM sample (adapted, with permission, from
[76]). Three hypothesises are explicitly highlighted to illustrate how each model can be considered as a distinct hypothesis.
(B) The parameter space shows the fit of a homophily generative rule that performed best, given two scalars that
parameterise the wiring equation; η, which relates to costs, and γ, which relates to topological homophily. This narrow energy
window is thought to reflect the tight regulation of wiring economy balance that must be met for there to be a structural net-
work that facilitates function. The right figure reflects a 3D representation of the left figure.

have the least ‘room’ for manoeuvre in terms of their parameter space. That is, outside of a very nar-
row set of parameters, the simulated networks are highly divergent from those observed. In other
words, within the highly regulated nature of neurodevelopment, individual differences occur within
very narrow bounds (Figure 2B). This may appear counterintuitive, because traditionally it is common
to aim to recapitulate observed phenomena across a wide range of parameter combinations. How-
ever, in this context, we are simulating complex networks that have a highly prototypical organisation.
Large deviations in network organisation are not generally seen in nature. However, just because a
mechanism is highly regulated in mathematical terms does not mean that it is highly determined in de-
velopmental terms. We may not observe large differences in the mathematics of network formation,
but differences we do observe could be associated with large differences in phenotype, and still reflect
an ongoing probabilistic process. GNMs provide a way of mathematically specifying the rules of
network growth, and then testing whether, were those rules to unfold probabilistically over time,
they could capture observed complex neurobiological phenomena. This allows for the integration
of computational modelling with system-level neuroimaging.
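A minimal sketch of such a probabilistic growth process might look as follows. This is an illustrative reconstruction rather than the published model [77]: it wires edges one at a time under a trade-off between a distance cost, parameterised by η (eta), and a homophily value, parameterised by γ (gamma), with homophily taken as neighbourhood overlap:

```python
import numpy as np

def matching_index(A):
    """Homophily term K: normalised neighbourhood overlap between node
    pairs (one common definition; published variants differ in detail)."""
    deg = A.sum(0)
    common = A @ A                                   # shared neighbours
    union = deg[:, None] + deg[None, :] - common
    K = np.divide(common, union, out=np.zeros_like(common), where=union > 0)
    np.fill_diagonal(K, 0.0)
    return K

def grow(D, m, eta, gamma, rng, eps=1e-5):
    """Wire m edges one at a time with P_ij ~ D_ij**eta * (K_ij + eps)**gamma:
    eta penalises long (costly) connections, gamma rewards homophily."""
    n = D.shape[0]
    A = np.zeros((n, n))
    iu = np.triu_indices(n, 1)                       # candidate edges (upper triangle)
    for _ in range(m):
        K = matching_index(A)
        P = (D[iu] ** eta) * ((K[iu] + eps) ** gamma) * (1 - A[iu])
        e = rng.choice(P.size, p=P / P.sum())        # sample the next connection
        A[iu[0][e], iu[1][e]] = A[iu[1][e], iu[0][e]] = 1.0
    return A
```

Because homophily is recomputed after every edge, early (stochastic) choices reshape the probabilities governing later ones, which is precisely why two runs with identical parameters can diverge.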

Trends in Cognitive Sciences, Month 2023, Vol. xx, No. xx 7


Trends in Cognitive Sciences
OPEN ACCESS

Homophily: a guiding principle of the topological organisation of the brain?


That the homophily principle achieves the best model fits has been a consistent result across
the few examples in the literature, spanning both structural (e.g., diffusion tensor imaging; DTI)
and functional (e.g., fMRI) connectomes [75–78,84]. However, what is homophily, and how
does it reflect underlying biological principles? Homophily is simply defined as a preference for
similarity [88]. In the context of networks, similarity is typically computed in terms of overlapping
network topology, such as common neighbours or overlapping connection patterns, such that
regions that overlap in their connectivity are more likely to wire together (Figure 3A). It is of note
that, just as there is variation in precisely how the generative network model is defined, homophily
can also be defined in subtly different ways. For example, it can be defined at the region level


Figure 3. Homophily-based developmental wiring rules in neuronal networks. (A) Schematic of hypothetically
sparse connectivity development according to overlapping connectivity structure (i.e., homophily). Pink connections in the
early network (left) reflect overlaps in connectivity between two orange nodes. This drives subsequent wiring later on
(right). (B) Schematic of a functional microstructural graph reflecting functional connections between neurons inferred via
the high-density microelectrode array (MEA) at a spatial resolution of 17.5 μm. Functional connections are inferred by the
spike-time tiling coefficient (STTC) [99], which is a measure of pairwise correlated activity between detected spike trains of
individual neuronal activity. (C) Homophily has been shown to be the best performing generative model over longitudinal
development of these functional networks (shown here, 10, 12, and 14 days) in terms of the energy, which measures global
topological dissimilarity to the observed data [100]. (D) The topological fingerprint matrix relates to local node-wise statistic
correlations [100]. The 14-day homophily topological fingerprint matrices (as highlighted with the red box) are shown com-
pared with observed networks.
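The STTC itself is simple to compute. The sketch below follows the published definition [99], STTC = ½[(P_A − T_B)/(1 − P_A·T_B) + (P_B − T_A)/(1 − P_B·T_A)], where P_A is the proportion of spikes in train A falling within ±Δt of a spike in B, and T_A is the proportion of the recording tiled by ±Δt windows around A's spikes; function names and implementation details are ours:

```python
import numpy as np

def sttc(spikes_a, spikes_b, dt, duration):
    """Spike-time tiling coefficient between two sorted, non-empty
    spike-time arrays, for a coincidence window dt and recording length."""
    def tiled_fraction(spikes):
        # Fraction of the recording within +/-dt of any spike (merging overlaps).
        starts = np.clip(spikes - dt, 0, duration)
        stops = np.clip(spikes + dt, 0, duration)
        total, cur_start, cur_stop = 0.0, starts[0], stops[0]
        for s, e in zip(starts[1:], stops[1:]):
            if s > cur_stop:
                total += cur_stop - cur_start
                cur_start, cur_stop = s, e
            else:
                cur_stop = max(cur_stop, e)
        return (total + cur_stop - cur_start) / duration

    def prop_within(a, b):
        # Proportion of spikes in a within +/-dt of the nearest spike in b.
        idx = np.searchsorted(b, a)
        prev = b[np.clip(idx - 1, 0, b.size - 1)]
        nxt = b[np.clip(idx, 0, b.size - 1)]
        return (np.minimum(np.abs(a - prev), np.abs(nxt - a)) <= dt).mean()

    ta, tb = tiled_fraction(spikes_a), tiled_fraction(spikes_b)
    pa, pb = prop_within(spikes_a, spikes_b), prop_within(spikes_b, spikes_a)
    return 0.5 * ((pa - tb) / (1 - pa * tb) + (pb - ta) / (1 - pb * ta))
```

Identical trains yield an STTC of 1, while unrelated trains hover near 0, and the tiling terms correct for what would be expected from firing rate alone.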


(e.g., similar cytoarchitecture [89], morphology [82], or gene expression [86]) rather than at the
topological level.
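As an illustration of the region-level variant, similarity can be computed directly from regional profiles rather than from the graph. Here we sketch it as a Pearson correlation over hypothetical regional feature vectors (e.g., gene expression or morphology); the cited studies define similarity in modality-specific ways:

```python
import numpy as np

def regional_homophily(X):
    """Region-level homophily: pairwise similarity between regional feature
    profiles (rows of X), computed here as a Pearson correlation.
    Purely illustrative."""
    K = np.corrcoef(X)
    np.fill_diagonal(K, 0.0)   # self-similarity is not a wiring signal
    return K
```

The resulting matrix K can slot into a generative wiring rule in exactly the same place as a topological homophily term.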

Why does the homophily principle explain macroscopic network organisation so well? One inter-
pretation is that homophily is, conceptually, a macroscopic manifestation of Hebbian learning
[75,76,90–93] but for the connectivity of the network rather than the weights of the connections
(‘those that fire together, wire together’) and may reflect microstructural connectivity played out
on a macroscopic scale. Where two hitherto unconnected nodes share multiple neighbours
this will necessarily mean that their ongoing patterns of activity are more similar; as a result,
they are more likely to connect in the future. In short, the more similar the neighbourhood, the
more correlated the patterns of activity, and the more likely the future connection. If true, this
would imply an explanatory isomorphism across levels of analysis in which wiring principles are
conserved, indicative of a fractal self-similar patterning of brain organisation [94,95]. Importantly,
because homophily generative models model the formation of connections, rather than the
changing of weights within pre-existing connections, they do not deal with some of Hebbian
learning’s limitations (e.g., exploding weights or requiring input independence [96]). This notion
of self-similar wiring, across levels, can be tested empirically: if the homophily principle is con-
served across scales, we would expect generative models of network connectivity across dif-
fering levels of analysis to be consistent. The flexibility of GNMs means that they can be applied to
any number of networks, including microstructural functional networks, inferred via the spike tim-
ings of neuronal signals acquired via microelectrode arrays (MEAs). MEAs are neural interfaces
enabling the study of neuronal signalling by detecting extracellular electrical activity of neurons.
Recent developments in high-density MEAs [97] (which have a spatial resolution in the order of
micrometres) even allow for the activity of single neurons to be resolved through a process called
spike sorting [98]. Functional connectivity networks are typically computed as overlapping spike
times of neurons within a given timeframe [99]. The neurons that constitute these inferred net-
works can be derived from rodent primary neurons or human induced pluripotent stem cells
(iPSCs), which spontaneously fire and self-organise over days in vitro on the MEA plate [100]
(Figure 3B). Strikingly, both the global and local arrangement of statistical properties in the net-
work (termed its ‘topological fingerprint’) can be uniquely explained by the homophily principle
[100] (Figure 3C,D).
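The topological fingerprint idea can be sketched as follows: compute several node-wise statistics, correlate them pairwise, and compare the resulting matrices between observed and simulated networks. The statistics below are a simplified stand-in for the fuller set used in [100]:

```python
import numpy as np

def topological_fingerprint(A):
    """Fingerprint of a binary undirected network A: correlations between
    node-wise statistics (here degree, clustering, mean neighbour degree)."""
    deg = A.sum(0)
    tri = np.diag(A @ A @ A)                       # 2x triangles through each node
    with np.errstate(divide='ignore', invalid='ignore'):
        clust = np.where(deg > 1, tri / (deg * (deg - 1)), 0.0)
        nbr_deg = np.where(deg > 0, (A @ deg) / deg, 0.0)
    return np.corrcoef(np.vstack([deg, clust, nbr_deg]))

def fingerprint_distance(A_obs, A_sim):
    # Dissimilarity between observed and simulated fingerprint matrices.
    return np.linalg.norm(topological_fingerprint(A_obs) - topological_fingerprint(A_sim))
```

A simulated network then matches the observed one not just in each statistic's distribution, but in how the statistics co-vary across nodes.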

The wiring parameters of the homophily generative model also change over development. That is,
the parameter space moves from ‘spatial’ networks, in which connection probability is deter-
mined by the spatial proximity of the neurons, toward the incorporation of homophily information.
It has been noted that homophily could be confounded by autocorrelated time-series [79]. How-
ever, its ability within GNMs to simulate functional [75,100] and structural [76–78,84] organisation
reduces that possibility.

Despite these results indicating a conservation of homophily as an organising principle, they do
not necessarily explain why this type of organisation may have evolved; that is, why not conserve
a different generative mechanism across scales? One explanation is that homophily is simply an
epiphenomenon of any system in which there are only locally knowable interactions that can
occur between distributed nodes (e.g., neurons, neural populations, or cerebral cortices). This
is because homophily requires locally dependent information, which may be cheaper to compute
relative to more globally independent information [101]. To put this concretely, consider the per-
spective of a single node. Naturally, it is constrained by what knowledge is made available to it
(e.g., via chemical or electrical signalling) [102]. It is this information that guides its future actions.
This means that, if two nodes are distant in terms of space (giving little opportunity for signalling,
outside of endocrinology) and topology (due to little shared connectivity), it is unlikely they will act


as if they have knowledge of the other. In the case of wiring, this simply means that wiring must be
made in the context of what each node knows about the other, and itself. Homophily, and Hebb’s rule
more broadly, meet this criterion because they are explicitly defined with respect to themselves.
Under this view, the fact that network organisation across scales can be explained via this principle
is almost an inevitability. This resonates with canonical observations that topographical maps exhibit
a very strong preponderance for local connectivity, with highly shared connectivity, alongside longer-
range interareal connections [103], thought to introduce diversity among the neighbours of different
brain areas [104]. Indeed, homophily is also one of the most pervasive and robust tendencies of social
networks, which are under analogous constraints to the brain [88,105].

Computational models that behave


We have outlined a model designed to capture self-organisation of networks within a
biophysically grounded system. In this sense, such models capture embrainment at different scales, either
at the microstructural [100,106,107] or whole-brain level [75,76,85]. However, these models are
not currently neuroconstructivist. Although they incorporate biophysical constraints, we are only
capturing the presumed neural organisation necessary for neuroconstructivism, rather than the
process itself. This is because these generative models are not themselves learning to behave.
Certainly, we can correlate model parameters with cognitive performance (Figure 4A;
e.g., [76,78]), but this is not the same as a model that can itself learn to perform tasks. In this
sense, we have lost something relative to previous connectionist models.

In essence, to date, we have two contrasting approaches that come at the challenge of bridging
cognitive development and systems neuroscience from opposite ends of that bridge.


Figure 4. Bridging network models under change and cognition. (A) The generative network model (GNM) is an
example of an unsupervised learning model, in that task information is not used to guide the changes in its topology. To
link to cognition, it is possible to associate the wiring parameters of the network with cognitive scores. (B) Models such as
the cascade–correlation neural network (CC, left) and spatially embedded recurrent neural network (seRNN, right) are
supervised in that they use feedback to learn to solve a problem. CCs add new neurons to the network, which improve
performance by learning what the errors of the network are as its performance plateaus. This leads to step changes in the
performance of the task over time. By contrast, seRNNs are embedded in a 3D space with communication limitations that
are biologically inspired. One example finding from such constraints is that neurons become specialised to task-relevant
information in a way that is spatially configured. Based on [53,75,83,108].


Connectionist models (e.g., CC and KBCC architectures) are loosely based on neurobiological
principles as they learn to perform (Figure 4B, left). By contrast, generative models are specifically
trained to simulate neurobiological processes by implementing biophysical constraints, but in the
absence of learning any behavioural goal. To fully embrace a neuroconstructivist perspective, we
must capture both the developmental mechanisms that guide emerging neurobiological organi-
sation, and, crucially, the acquisition of cognitive and behavioural skills.

One recent promising example of a model that merges biophysical principles with task-solving is
the spatially embedded recurrent neural network (seRNN; Figure 4B, right) [108]. Similar to CCs,
seRNNs learn to solve tasks (e.g., sequential inference [108]). However, simultaneously seRNNs
also adhere to basic biologically inspired constraints that are analogous to the GNM. First,
seRNNs exist in a 3D space, where connection weights are penalised as a function of their
lengths [109,110]. Second, resembling principles of neural signalling, seRNN neurons are limited
in their communication via their direct and indirect connections (rather than having global knowl-
edge of all other weights in the network) [111–113]. All of these constraints are placed within a
single optimisation problem that is learned, rather than treating structure and function as separate
incommensurable entities that are associated with each other but not directly linked. These sim-
ple constraints mean that seRNNs develop numerous canonical connectomics findings over the
course of learning. They produce modular structures [114,115] and a small-world network
[116,117], each of which have been postulated to be critical computational properties of biolog-
ical networks [118–120]. They can also be simulated by the very same homophily wiring rules that
approximate empirical findings described previously [75–77].
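To make the flavour of these constraints concrete, here is a sketch of a combined regulariser in the spirit of seRNNs: each absolute weight is multiplied by its Euclidean length in 3D space and by a degree-normalised communicability term, so that long, heavily relied-upon connections are the most expensive. This is an illustrative reconstruction, not the published loss [108]:

```python
import numpy as np

def se_penalty(W, coords, n_terms=10):
    """Spatially embedded regulariser: sum over |W| * distance * communicability.
    coords holds each neuron's 3D position; details here are illustrative."""
    D = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    M = np.abs(W)
    # Degree-normalised communicability, via a truncated series for the
    # matrix exponential: C = sum_k Mn**k / k!, k >= 1.
    s = M.sum(1, keepdims=True) + 1e-12
    Mn = M / np.sqrt(s) / np.sqrt(s.T)
    C, term = np.zeros_like(Mn), np.eye(W.shape[0])
    for k in range(1, n_terms + 1):
        term = term @ Mn / k
        C += term
    return (M * D * C).sum()
```

In a training loop, one would minimise something like `task_loss + lam * se_penalty(W, coords)`, so that structure (wiring cost, communication) and function (task performance) are optimised jointly within a single objective, rather than being associated post hoc.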

As we have stated, one of the most compelling features of seRNNs is their ability to bridge the
emergence of both their neural structure (i.e., embrainment) simultaneously with their capacity
to solve the task (i.e., constructivism). For example, seRNNs can be used to probe the spatial
extent to which information is represented in the network, much as we would with human neuroim-
aging [121]. As they are trained, neurons within the seRNNs become increasingly specialised
to distinct aspects of task-related information (e.g., the goal or choice information in a one-step
inference task) and this functional specialisation becomes spatially patterned. seRNNs even
code task-related information using mixed selectivity [122,123]. The parameter range for which
these findings occur is also within a very narrow critical range, mirroring the homophilic generative
model energy landscape. This final example, seRNNs, provides just one way in which it may be
possible to design neural systems that capture biophysical constraints within a task-solving sys-
tem. In time, it may be possible to inform these constraints by individual-level systems neurobiol-
ogy to fully bridge the gap between developmental systems neuroscience and cognitive
development.

What makes a ‘good’ developmental systems model?


The biophysically grounded computational model we have described thus far provides just one
example of the computational formalisation of probabilistic neuroconstructivist principles. How-
ever, beyond the generic benefits of the computational formalisation of theory, we think that it
has some essential ingredients of a good computational developmental theory. We now illustrate
these with examples from the same generative framework already outlined, but we think they
should be hallmarks of any computational framework for developmental systems neuroscience.

Incorporating individual-level neurobiology and performance


Computational models, such as GNMs, CCs, and seRNNs, will be more or less useful to answer
neuroconstructivist questions regarding cognitive development, depending on the researcher’s
question. For example, GNMs are useful for understanding how the topology of the brain


(i.e., the existence or not of particular connections) provides the architecture for higher-order cog-
nition and may be shaped by various biological or environmental influences [84–86]. CCs may be
useful for showing how, incrementally, computational components are combined as cognitive de-
velopment proceeds [7]. seRNNs are well placed to examine how connections are strengthened
or weakened (i.e., weights changing) over time and/or over learning, directly integrating biophys-
ical constraints within a behaving system [108]. When using these various models, it is imperative
that researchers are specific in what empirical data they recapitulate (e.g., cognitive or neural) and
how the algorithms underpinning their model operate to achieve it. A good developmental model
that aims to capture neuroconstructivist principles will likely need to borrow numerous principles
highlighted by these models to integrate developmental systems neuroscience with cognitive
development.

Simple principles can produce complex real-world phenomena


One of the most compelling features of a good computational model is when complex phenom-
ena emerge from simple principles that are, on the face of it, unrelated. Indeed, part of the appeal
of the early connectionist models was that complex cognitive phenomena could emerge from
simple interactions between artificial neurons [41,43,53].

To continue the example of generative network modelling, the present model simply has two pa-
rameters, yielding a simple cost-value trade-off when forming connections probabilistically over
time. Despite this relatively simple trade-off, resulting networks have complex organisational char-
acteristics that mirror real biological networks. As a result, we can consider the complex topolog-
ical organisation in terms of these much simpler local economic negotiations. The ability of this
two-parameter model to recapitulate complex global phenomena, such as centrality distribu-
tions, is compelling. However, more compelling is its ability to recapitulate complex local phenom-
ena not at all related to how those networks were simulated. For instance, in microstructural
networks, these models recapitulate the local arrangement of these statistics within a particular
type of topological organisation (which we have described above). That is, the intricate modular
and hierarchical structure of the network could simply be an epiphenomenon of its generative
mechanism [100]. Indeed, for this reason, as we outline in Figure 5, there is some benefit to
parsimony in and of itself. If models produce complex features because, in some sense, those
features are already specified within the computational model, this is less compelling.


Figure 5. Trading off parsimony and complexity when incorporating biophysical constraints. As one
incorporates greater biophysical constraints into the generative model, the explanatory scope narrows and findings
become increasingly prescribed and detailed. We provide example generative model specifications that may make
models simpler (e.g., Euclidean space, because only the physical geometry is specified) or more complex (e.g., the
precise spatiotemporal ontogeny of wiring).


Importantly, the current instantiation of the GNM provides a good account of the emergence of
network organisation, but does not account for several observations important for
developmentalists. One big limitation is that current GNMs cannot accommodate changes in
connection weights. Likewise, they say nothing directly about the response selectivity of particular regions,
or how neocortical synapses may be pruned with time. In both cases, newer formalisations of
generative models will be required that incorporate weighted information and some level of
task-based information-processing into how they work. This would highlight to what extent re-
gional structures tune and competitively interact with each other over development, and deal
with differing developmental time-course dynamics depending on the neural data modelled
(e.g., grey-matter pruning [124,125], or incremental white-matter increases into adulthood
[125,126]).

Integration across levels of biology


A second important characteristic for a good computational model of developmental systems neu-
roscience is the extent to which it allows us to bridge levels of biology. It is compelling when models
optimised to explain phenomena at one level also explain emergent phenomena in different biolog-
ical data. A benefit of the generative modelling is that it can be interrogated in detail, for example, by
extracting the parameters reflected at the nodal level and their changes over time. This means that
we can characterise statistical properties of the nodes within individual connectomes as they form,
producing node-wise parameterised costs and homophily values. These parameters can be used
to leverage alternative independent datasets that capture biological processes occurring at different
levels of explanation. For example, they can be integrated with regional variation in gene expression
to identify convergent genes with expression profiles that significantly covary with either parameter
[76]. Likewise, the wiring parameters, the time courses of network formation, or emergent proper-
ties from the model itself can each be associated at an individual level with other phenotypic data,
such as cognitive or behavioural assessments [76,78].

As we have previously alluded to, another way to integrate across levels of biology is to directly
incorporate these levels into a single model (e.g., [84,86]). For example, improvements have
been shown in model fits by incorporating genomic measures of homophily into the generative
model in the form of correlated gene expression [84]. This observation opens the door to the pos-
sibility that the same topological principle of self-similarity extends across levels of analysis. A fur-
ther avenue, beyond genomics, would be to investigate how well such generative models explain
the development of representational spaces with largely characteristic structures [127]
(e.g., temporal lobe connectivity and semantic representational networks). How may such ques-
tions be addressed? A current limitation is that this bridge between biological levels is not mech-
anistic, but a future possibility is that a generative framework could allow for hierarchically nested
computational processes, such that multiple levels of biology could be formalised and integrated
into a multilayered generative network model [128,129].

Bridging across species


Evolution shapes the brains of humans and other animals [130]. Thus, just as we can bridge differ-
ent levels of analysis within a species, we can also bridge across different species. Comparative
connectomics aims to examine the products of evolution in the context of brain networks. Specif-
ically, it aims to quantitatively delineate the similarities and differences between the brain networks
of humans and other species with reference to the phylogenetic tree, allowing us to uncover com-
mon features across all mammals versus features that are uniquely human. A good developmental
systems model should be applicable across the phylogenetic tree, enabling insights into both uni-
versal principles across species and specific principles within species, establishing the ways brains
differ in terms of their developmental trajectory, as a function of a common ancestry [131].


Additionally, non-human animal research has the natural advantage of allowing us to embrace ex-
perimental methods to probe causal mechanisms, which is simply not possible in humans. Com-
bined with computational models, nonhuman animal research may be a particularly powerful tool
for elucidating causal influences of early-life adversity on brain structure adaptation. For example,
recent work [85] shows how early adversity may directly lead to heightened stochasticity, a core
developmental principle [132,133], in brain network formation, making such networks more ro-
bust to perturbation to facilitate effective responses to future threats. This highlights how a
good developmental model should coherently integrate developmental principles across species.
A promising example of a model that may be able to do this is the aforementioned seRNN. Given
that functional requirements and structural constraints are both specified within a single model, it
allows researchers to flexibly change which tasks agents must solve simultaneously, alongside
other structural constraints to its network. This allows for a broad range of possible organisms
to be modelled [108]. By probing the organisational functional gradients of the model, seRNNs may
be particularly useful within an evo-devo approach [134] in which the developmental processes of dif-
ferent organisms provide evolutionary insight [135]. For example, to understand rostrocaudal
gradients of neuronal production [135] or adaptive synaptogenesis [136], researchers will need
to concurrently model the behaviour, development, and 3D topology; to do so, behaving artificial
agents could be particularly useful.

Elucidating developmental processes and time courses


As we have described, constructivist principles entail the central idea that, to understand the
functional role of neural circuits, we must understand the developmental history of those circuits
because this is what determines functional emergence. Therefore, a good developmental sys-
tems model should tackle, at its core, the temporal and stochastic nature of interactions between
each part of the system, over time. This is an intrinsically challenging aspect of developmental

Box 2. Stochasticity in development


One particular source of stochasticity is the iterative nature of the GNMs. For example, as connections are added so the
community structure within the network changes (new modules form and so on). If the chosen ‘values’ term of the gener-
ative model is sensitive to this community structure, then a relatively small number of connections can steer the trajectory of
a network. Likewise, the tuning of the parameters can change the stochasticity of the generative process, and this is critical
to specifying generative models because it directly shapes the variability of developmental outcomes. On the one hand, if a
model has no stochasticity, then development can be considered purely in terms of determined interactions between re-
gions where there are identical outcomes upon multiple runs. On the other hand, if the model is wholly stochastic, then
development is equivalent to completely random wiring. Therefore, the goal of a good developmental model must be to
balance determined versus stochastic effects within a ‘constrained developmental manifold’ (Figure I), which reflects the
expected dimensions along which typical neurodiversity can be expected to exist and be observed [76]. The constrained
element of this can be studied via the incorporation of known neurodevelopmental details into a model at biologically
meaningful time courses [79], such as in work with the nematode Caenorhabditis elegans connectome [150]. The devel-
opmental manifold can be studied via simulations of neurodiversity, such as research that demarcates the constrained
space of wiring parameters that children can exhibit [76]. A fruitful area of future study would be to explicitly detach any
‘determined’ versus ‘stochastic’ effects within the generative model and study their interaction and/or temporal dynamics.
This generative network modelling framework allows the researcher to explore how altering the generative properties of a
network, within a portion of the parameter space, will drive the variability of the outcome.

A generative model allows us to establish the mechanisms by which this may occur within synthetic networks, revealing
insight into the real thing. Segregation and integration happen as an emergent property of the continual economic renego-
tiation of ‘costs’ and ‘values’ in simulations. In the future, this developmental systems model could be ideal for testing more
specific hypotheses about temporal shifts in network statistics and developmental discontinuities in brain development
(e.g., so-called ‘sensitive periods’), as well as how these timings are influenced by the social and economic environment
of children [151]. To achieve this, a developmental model may be required to incorporate within-network longitudinal mea-
surements directly into its formalisation. With an appropriate data set, this could be achieved by fitting generative models
directly to the change across developing observations, rather than to each observation individually. In total, only with a good
developmental systems model, which elucidates developmental processes and time courses, can we hope to make prog-
ress bridging mechanism with observation.
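The repeated-runs logic sketched in Figure I can be made explicit in code: fix the parameters, re-run the stochastic generator under different random seeds, and summarise the spread of an outcome statistic across runs. The growth rule below is a deliberately simplified toy (a distance cost against a degree-based 'value' term), not the full GNM:

```python
import numpy as np

def simulate_runs(D, m, eta, gamma, n_runs, seed=0):
    """Re-run a stochastic growth rule with identical parameters, recording
    one outcome statistic per run to expose the spread of outcomes (the
    'constrained developmental manifold')."""
    n = D.shape[0]
    iu = np.triu_indices(n, 1)
    outcomes = []
    for r in range(n_runs):
        rng = np.random.default_rng(seed + r)   # all else equal; only the seed differs
        A = np.zeros((n, n))
        for _ in range(m):
            deg = A.sum(0)
            # simple cost-value trade-off: distance cost vs degree 'value'
            P = (D[iu] ** eta) * ((deg[iu[0]] + deg[iu[1]] + 1e-5) ** gamma) * (1 - A[iu])
            e = rng.choice(P.size, p=P / P.sum())
            A[iu[0][e], iu[1][e]] = A[iu[1][e], iu[0][e]] = 1.0
        outcomes.append(A.sum(0).max())          # per-run hub size (max degree)
    return np.array(outcomes)
```

The dispersion of the returned outcomes, at a fixed point in parameter space, is a direct estimate of how much of the phenotypic variation could be attributable to wiring stochasticity alone.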



Figure I. Modelling phenotypic variation in developmental outcomes in silico. Within a generative model,
initialised developmental trajectories are probabilistic, such that, on two independent runs, with all else equal,
phenotypes will likely differ. The constrained developmental manifold denotes the range of possible outcomes that can
occur, the scope of which may be different depending on the parameters themselves. By running simulations multiple
times, it is possible to understand the extent to which the probabilistic nature of wiring may contribute toward the outcome.

systems neuroscience, because the necessary data take years to collect. A good computational
model of developmental processes needs to go beyond explaining differences in end state, and
provide insight into the ongoing processes that produced that end state. For example, one of the
classical observations in developmental systems neuroscience is the integration and segregation
of networks over time. Variability in the time courses of these processes is significantly associated
with multiple neurodevelopmental conditions [22,137]. Despite being relatively parsimonious,
models such as the GNM have the capability to explore the space of complex phenotypes that
can occur over simulated time. This allows for the exciting possibility for researchers to explore
how stochasticity, intrinsic to highly complex developmental processes, influences outcomes
[138,139] (Box 2).

Considers the environment


So far, we have focussed heavily on modelling the emergence and self-organisation of brain or-
ganisation and whether or how this unfolds as cognitive processes emerge. Something that
has largely been ignored, both here and elsewhere in the literature, is the role of the proximal en-
vironment within neuroconstructivism [8,18–20]. We know of only one study that explicitly tested
how early life environment can shape generative properties within a computational model [85].
Unpredictable stress alters generative network parameters creating more stochastic networks,
which may be an adaptive developmental response to the environment. However, even this is
treating the environment purely as something that happens to an individual. In reality, just as


neural systems are constructed through active use, so the environment is constructed, at least in
part, by the individual. This is especially the case for the social environment, which is a
particularly important influence on neurodevelopment [140,141]. Ultimately, computational
neuroconstructivist accounts will need to consider the environmental niche in which an individual
is developing, and their role in constructing that environment. It is hard to consider how current
models might do this, but one possibility is that we will need multilevel models that consider intra-
individual neuroconstructivism as a nested process within a wider, interindividual, constructive
process.

Concluding remarks
Neurocognitive development is not determined, but proceeds probabilistically. As functional
processes emerge, they themselves start to shape the emergence of complex networks. We outline
a means of capturing this process with a biophysically grounded computational framework. This
approach has the potential to allow us to integrate across scales, timeframes, and species, building
an empirical consensus on the mechanisms that shape the development of complex neurobiological
systems (see Outstanding questions). In time, this could be integrated with task-solving
architectures to capture neuroconstructivist processes.

Outstanding questions
What are the family of generative network models that can simulate the formation of various
aspects of brain organisation? What a priori information is needed by these models to deliver
accurate simulations?

Will different models best simulate different aspects of brain organisation? One hypothesis is that
models driven by genomic co-expression may best simulate unimodal cortex, but those
incorporating economical negotiations may best fit trans-modal cortex.

How can these models capture the development of neural structures outside cortex? Different
brain regions with different evolutionary histories, and differentiated structures, may have
different generative properties. This distinction has hitherto not been made within generative
network modelling.

Will different modelling constraints be needed to accurately simulate developmental processes at
different phases of development? One hypothesis is that genetically constrained simulations may
best capture early developmental processes, whereas those shaped by experience-dependent
interactions may best capture later developmental processes.

To what extent are homophilic principles mirrored across complex systems in their development?
Are such principles useful in bridging

Acknowledgements
D.A. acknowledges support from the Medical Research Council Doctoral Training Programme and Cambridge Trust Vice
Chancellor’s Award Scholarship. D.E.A. is supported by Medical Research Council Programme Grant MC-A0606-5PQ41,
and by the Gnodde Goldman Sachs endowed Professorship in Neuroinformatics awarded to the University of Cambridge.
Both D.E.A. and D.A. are supported by The James S. McDonnell Foundation Opportunity Award. M.H.J. acknowledges
support from Medical Research Council Programme Grant MR/T003057/1.

Declaration of interests
None declared by authors.

Resources
i. www.researchgate.net/publication/229002509_Knowledge-based_Learning_with_KBCC
ii. http://socrates.acadiau.ca/courses/comp/dsilver/Share/2005Conf/NIPS2005_ITWS/Website/Papers/ITWS02-KBCCRivestShultz.pdf

References
1. Benton, D.T. (2022) The elusive ‘Developmental Mechanism’: what they are and how to study and test them. Dev. Rev. 65, 101034
2. Benton, D.T. (2022) ‘But what is the mechanism?’: demystifying
11. Gao, W. et al. (2019) A review on neuroimaging studies of genetic and environmental influences on early brain development. Neuroimage 185, 802–812
the ever elusive ‘developmental mechanism’. Infant Child Dev. 12. Astle, D.E. and Fletcher-Watson, S. (2020) Beyond the core-
explanations across levels of analysis?
Published online June 20, 2022. https://doi.org/10.1002/icd.2355 deficit hypothesis in developmental disorders. Curr. Dir.
3. Johnson, M.H. and De Haan, M. (2015) Developmental Cogni- Psychol. Sci. 29, 431–437 For example, can homophily
tive Neuroscience: An Introduction (4th edn), Wiley 13. Cicchetti, D. and Rogosch, F.A. (1996) Equifinality and multifinality in principles explain cognitive networks,
4. Mareschal, D. et al. (2007) Neuroconstructivism - I: How the developmental psychopathology. Dev. Psychopathol. 8, 597–600 such as semantic representation
Brain Constructs Cognition, Oxford University Press 14. Resnik, D.B. and Vorhaus, D.B. (2006) Genetic modification networks?
5. Johnson, M.H. (2000) Functional brain development in infants: and genetic determinism. Philos. Ethics Humanit. Med. 1, 9
elements of an interactive specialization framework. Child Dev. 15. Waggoner, M.R. and Uller, T. (2015) Epigenetic Determinism in
Current instantiations of GNMs
71, 75–81 Science and Society. New Genet. Soc. 34, 177–195
6. Johnson, M.H. (2011) Interactive specialization: a domain- 16. Waddington, C.H. (1957) The Strategy of the Genes: A Discus- construct networks in an
general framework for human functional brain development? sion of Some Aspects of Theoretical Biology. With an Appendix unsupervised fashion, in which
Dev. Cogn. Neurosci. 1, 7–21 by H. Kacser, Allen & Unwin networks self-organise in absence of
7. Shultz, T.R. (2003) Computational Developmental Psychology, 17. Kendler, K.S. and Greenspan, R.J. (2006) The nature of genetic any required functioning. How can
MIT Press influences on behavior: lessons from ‘simpler’ organisms. AJP
functioning, and embodiment within a
8. Karmiloff-Smith, A. (1998) Development itself is the key to un- 163, 1683–1694
derstanding developmental disorders. Trends Cogn. Sci. 2, 18. Gottlieb, G. (1998) Normally occurring environmental and be-
world, be incorporated into develop-
389–398 havioral influences on gene activity: from central dogma to mental models?
9. Westermann, G. et al. (2006) Modeling developmental cognitive probabilistic epigenesis. Psychol. Rev. 105, 792–802
neuroscience. Trends Cogn. Sci. 10, 227–232 19. Gottlieb, G. (2007) Probabilistic epigenesis. Dev. Sci. 10, 1–11 Neuroconstructivism posits that
10. Rylaarsdam, L. and Guemez-Gamboa, A. (2019) Genetic 20. Gottlieb, G. (2005) Probabilistic epigenesis of development. In multilevel interactions between
causes and modifiers of autism spectrum disorder. Front. Handbook of Developmental Psychology (Valsiner, J. and
genomics, environment, and ontology
Cell. Neurosci. 13, 385 Connelly, K.J., eds), pp. 3–17, Sage

16 Trends in Cognitive Sciences, Month 2023, Vol. xx, No. xx


Trends in Cognitive Sciences
OPEN ACCESS

21. Fair, D.A. et al. (2007) Development of distinct control networks 47. Oliver, A. et al. (2000) Deviations in the emergence of represen- together construct higher-order brain
through segregation and integration. Proc. Natl. Acad. Sci. U. tations: a neuroconstructivist framework for analysing develop- organisation. Are multilayer generative
S. A. 104, 13507–13512 mental disorders. Dev. Sci. 3, 1–23
models required to model these inter-
22. Jones, J.S. et al. (2022) Segregation and integration of the func- 48. Mareschal, D. and Thomas, M.S.C. (2007) Computational
tional connectome in neurodevelopmentally ‘at risk’ children. modeling in developmental psychology. IEEE Trans. Evol.
actions?
Dev. Sci. 25, e13209 Comput. 11, 137–150
23. Wig, G.S. (2017) Segregated systems of human brain net- 49. Quinlan, P.T., ed (2003) Connectionist Models of Development: GNMs are probabilistic in how they
works. Trends Cogn. Sci. 21, 981–996 Developmental Processes in Real and Artificial Neural Net- construct network organisation.
24. Sporns, O. (2013) Network attributes for segregation and inte- works, Psychology Press What, if any, are the interactions
gration in the human brain. Curr. Opin. Neurobiol. 23, 162–171 50. Munakata, Y. and McClelland, J.L. (2003) Connectionist models
between parameter changes and the
25. Durston, S. et al. (2006) A shift from diffuse to focal cortical ac- of development. Dev. Sci. 6, 413–429
tivity with development. Dev. Sci. 9, 1–8 51. Li, P. et al. (2004) Early lexical development in a self-organizing stochasticity introduced into the
26. Siugzdaite, R. et al. (2020) Transdiagnostic brain mapping in neural network. Neural Netw. 17, 1345–1362 network formation?
developmental disorders. Curr. Biol. 30, 1245–1257 52. Westermann, G. and Ruh, N. (2012) A neuroconstructivist
27. Kievit, R.A. et al. (2014) Distinct aspects of frontal lobe structure model of past tense development and processing. Psychol. Can generative frameworks be
mediate age-related differences in fluid intelligence and multi- Rev. 119, 649–667 combined with other computational
tasking. Nat. Commun. 5, 5658 53. Fahlman, S. and Lebiere, C. (1990) The cascade-correlation
models to create a hierarchically
28. Battista, C. et al. (2018) Mechanisms of interactive specializa- learning architecture. Adv. Neural Inf. Proces. Syst. 2, 524–532
tion and emergence of functional brain circuits supporting cog- 54. Fahlman, S.E. (1990) The recurrent cascade-correlation archi- nested representation of different
nitive development in children. NPJ Sci. Learn. 3, 1–11 tecture. Adv. Neural Inf. Proces. Syst. 3, 190–196 biological processes?
29. Joseph, J.E. et al. (2011) Progressive and regressive develop- 55. Titone, D. and Shultz, T.R. (2005) Simulating frontotemporal
mental changes in neural substrates for face processing: testing pathways involved in lexical ambiguity resolution. In Proceed-
specific predictions of the Interactive Specialization account. ings of the Twenty-seventh Annual Conference of the Cognitive
Dev. Sci. 14, 227–241 Science Society, pp. 2178–2183, Erlbaum
30. Kriegeskorte, N. and Douglas, P.K. (2018) Cognitive computa- 56. Mayr, A. et al. (2014) The evolution of boosting algorithms.
tional neuroscience. Nat. Neurosci. 21, 1148–1160 From machine learning to statistical modelling. Methods Inf.
31. Ritter, P. et al. (2013) The virtual brain integrates computational Med. 53, 419–427
modeling and multimodal neuroimaging. Brain Connect. 3, 57. Baluja, S. and Fahlman, S.E. (1994) Reducing network depth in
121–145 the cascade-correlation learning architecture. Semantic Scholar
32. Hassabis, D. et al. (2017) Neuroscience-inspired artificial intelli- Published online October 17, 1994. http://dx.doi.org/10.
gence. Neuron 95, 245–258 21236/ada289352
33. Murray, J.D. et al. (2018) Biophysical modeling of large-scale 58. Shultz, T.R. and Rivest, F. (2003) Knowledge-based cascade-
brain dynamics and applications for computational psychiatry. correlation: varying the size and shape of relevant prior knowl-
Biol. Psychiatry Cogn. Neurosci. Neuroimaging 3, 777–787 edge. In New Developments in Psychometrics (Yanai, H. et
34. Huys, Q.J.M. et al. (2016) Computational psychiatry as a bridge al., eds), pp. 631–638, Springer
from neuroscience to clinical applications. Nat. Neurosci. 19, 59. Shultz, T.R. and Rivest, F. (2001) Knowledge-based cascade-
404–413 correlation: using knowledge to speed learning. Connect. Sci.
35. Adams, R.A. et al. (2016) Computational psychiatry: towards a 13, 43–72
mathematically informed understanding of mental illness. 60. Egri, L. and Shultz, T.R. (2006) A Compositional Neural-network
J. Neurol. Neurosurg. Psychiatry 87, 53–63 Solution to Prime-number Testing. In Proceedings of the Annual
36. Khaligh-Razavi, S.-M. and Kriegeskorte, N. (2014) Deep super- Meeting of the Cognitive Science Society (28)
vised, but not unsupervised, models may explain IT cortical rep- 61. Shultz, T.R. (2017) Constructive artificial neural-network models
resentation. PLoS Comput. Biol. 10, e1003915 for cognitive development. In New Perspectives on Human De-
37. Symmonds, M. et al. (2018) Ion channels in EEG: isolating velopment (Budwig, N. et al., eds), pp. 15–26, Cambridge Uni-
channel dysfunction in NMDA receptor antibody encephalitis. versity Press
Brain 141, 1691–1702 62. Hupkes, D. et al. (2020) Compositionality decomposed: how do
38. Wang, J.X. et al. (2018) Prefrontal cortex as a meta- neural networks generalise? arXiv Published online August 22,
reinforcement learning system. Nat. Neurosci. 21, 860–868 2019. http://dx.doi.org/10.48550/arXiv.1908.08351
39. Popper, K.R. (1959) The Logic of Scientific Discovery, Basic 63. Ju, H. et al. (2017) Limited synapse overproduction can speed
Books development but sometimes with long-term energy and dis-
40. Davis, H. and Anderson, M. (1999) Individual differences and crimination penalties. PLoS Comput. Biol. 13, e1005750
development - one dimension or two? In The Development of 64. Baxter, R.A. and Levy, W.B. (2020) Constructing multilayered
Intelligence (Anderson, M., ed.), pp. 161–191, Psychology neural networks with sparse, data-driven connectivity using
Press biologically-inspired, complementary, homeostatic mecha-
41. Thomas, M. and Karmiloff-Smith, A. (2003) Connectionist nisms. Neural Netw. 122, 68–93
models of development, developmental disorders, and individ- 65. Yang, G. et al. (2009) Stably maintained dendritic spines are as-
ual differences. In Models of Intelligence: International Perspec- sociated with lifelong memories. Nature 462, 920–924
tives (Sternberg, R.J. et al., eds), pp. 133–150, American 66. Chklovskii, D.B. et al. (2004) Cortical rewiring and information
Psychological Association storage. Nature 431, 782–788
42. Mareschal, D. et al. (1999) A computational and neuropsycho- 67. Graham, B.P. and van Ooyen, A. (2006) Mathematical model-
logical account of object‐oriented behaviours in infancy. Dev. ling and numerical simulation of the morphological development
Sci. 2, 306–317 of neurons. BMC Neurosci. 7, S9
43. Mareschal, D. and Shultz, T.R. (1996) Generative connectionist 68. Fusi, S. et al. (2005) Cascade models of synaptically stored
networks and constructivist cognitive development. Cogn. Dev. memories. Neuron 45, 599–611
11, 571–603 69. Chechik, G. et al. (1998) Synaptic pruning in development: a
44. Harm, M.W. and Seidenberg, M.S. (1999) Phonology, reading computational account. Neural Comput. 10, 1759–1777
acquisition, and dyslexia: insights from connectionist models. 70. Navlakha, S. et al. (2015) Decreasing-rate pruning optimizes the
Psychol. Rev. 106, 491–528 construction of efficient and robust distributed networks. PLoS
45. Thomas, M. and Karmiloff-Smith, A. (2002) Are developmental Comput. Biol. 11, e1004347
disorders like cases of adult brain damage? Implications from 71. Roccasalvo, I.M. et al. (2015) A hybrid computational model to pre-
connectionist modelling. Behav. Brain Sci. 25, 727–750 discus- dict chemotactic guidance of growth cones. Sci. Rep. 5, 11340
sion 750–787 72. van Ooyen, A. (2003) Modeling Neural Development, The MIT
46. Rumelhart, D.E. et al. (1986) Parallel Distributed Processing: Ex- Press
plorations in the Microstructure of Cognition: Foundations, 1, A 73. Blohm, G. et al. (2020) A how-to-model guide for neuroscience.
Bradford Book eNeuro 7 ENEURO.0352-19.2019

74. De Schutter, E. (2001) Computational neuroscience: more math is needed to understand the human brain. In Mathematics Unlimited — 2001 and Beyond (Engquist, B. and Schmid, W., eds), pp. 381–391, Springer
75. Vértes, P.E. et al. (2012) Simple models of human brain functional networks. Proc. Natl. Acad. Sci. U. S. A. 109, 5868–5873
76. Akarca, D. et al. (2021) A generative network model of neurodevelopmental diversity in structural brain organization. Nat. Commun. 12, 4216
77. Betzel, R.F. et al. (2016) Generative models of the human connectome. Neuroimage 124, 1054–1064
78. Zhang, X. et al. (2021) Generative network models of altered structural brain connectivity in schizophrenia. Neuroimage 225, 117510
79. Betzel, R.F. and Bassett, D.S. (2017) Generative models for network neuroscience: prospects and promise. J. R. Soc. Interface 14, 20170623
80. Chen, Y. et al. (2017) Features of spatial and functional segregation and integration of the primate connectome revealed by trade-off between wiring cost and efficiency. PLoS Comput. Biol. 13, e1005776
81. Liu, X. et al. (2020) A generative network model of the human brain normal aging process. Symmetry 12, 91
82. Morgan, S.E. et al. (2018) Low-dimensional morphospace of topological motifs in human fMRI brain networks. Netw. Neurosci. 2, 285–302
83. Kaiser, M. and Hilgetag, C.C. (2004) Modelling the development of cortical systems networks. Neurocomputing 58–60, 297–302
84. Oldham, S. et al. (2022) Modeling spatial, developmental, physiological, and topological constraints on human brain connectivity. Sci. Adv. 8, eabm6127
85. Carozza, S. et al. (2022) Early adversity changes the economic conditions of structural brain network organisation. bioRxiv Published online June 11, 2022. https://doi.org/10.1101/2022.06.08.495303
86. Arnatkevičiūtė, A. et al. (2021) Genetic influences on hub connectivity of the human connectome. Nat. Commun. 12, 4237
87. Bullmore, E. and Sporns, O. (2012) The economy of brain network organization. Nat. Rev. Neurosci. 13, 336–349
88. McPherson, M. et al. (2001) Birds of a feather: homophily in social networks. Annu. Rev. Sociol. 27, 415–444
89. Goulas, A. et al. (2016) Cytoarchitectonic similarity is a wiring principle of the human connectome. bioRxiv Published online August 6, 2016. https://doi.org/10.1101/068254
90. Hebb, D.O. (1949) The Organization of Behavior; A Neuropsychological Theory, Wiley
91. Goulas, A. et al. (2015) The strength of weak connections in the macaque cortico-cortical network. Brain Struct. Funct. 220, 2939–2951
92. Goulas, A. et al. (2019) Spatiotemporal ontogeny of brain wiring. Sci. Adv. 5, eaav9694
93. Vértes, P.E. et al. (2014) Generative models of rich clubs in Hebbian neuronal networks and large-scale human brain networks. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 369, 20130531
94. Werner, G. (2010) Fractals in the nervous system: conceptual implications for theoretical neuroscience. Front. Physiol. 1, 15
95. Zheng, M. et al. (2020) Geometric renormalization unravels self-similarity of the multiscale human connectome. Proc. Natl. Acad. Sci. U. S. A. 117, 20244–20253
96. Melchior, J. and Wiskott, L. (2019) Hebbian-Descent. arXiv Published online May 25, 2019. http://dx.doi.org/10.48550/arXiv.1905.10585
97. Müller, J. et al. (2015) High-resolution CMOS MEA platform to study neurons at subcellular, cellular, and network levels. Lab Chip 15, 2767–2780
98. Pachitariu, M. et al. (2016) Fast and accurate spike sorting of high-channel count probes with KiloSort. Adv. Neural Inf. Proces. Syst. 29, 4455–4463
99. Cutts, C.S. and Eglen, S.J. (2014) Detecting pairwise correlations in spike trains: an objective comparison of methods and application to the study of retinal waves. J. Neurosci. 34, 14288–14303
100. Akarca, D. et al. (2022) Homophilic wiring principles underpin neuronal network topology in vitro. bioRxiv Published online March 10, 2022. https://doi.org/10.1101/2022.03.09.483605
101. Avena-Koenigsberger, A. et al. (2019) A spectrum of routing strategies for brain networks. PLoS Comput. Biol. 15, e1006833
102. Hassan, B.A. and Hiesinger, P.R. (2015) Beyond molecular codes: simple rules to wire complex brains. Cell 163, 285–291
103. Sereno, M.I. et al. (2022) Topological maps and brain computations from low to high. Front. Syst. Neurosci. 16, 787737
104. Betzel, R.F. and Bassett, D.S. (2018) Specificity and robustness of long-distance connections in weighted, interareal connectomes. Proc. Natl. Acad. Sci. U. S. A. 115, E4880–E4889
105. Lazarsfeld, P. and Merton, R. (1964) Friendship as social process: a substantive and methodological analysis. In Freedom and Control in Modern Society (Berger, M. et al., eds), pp. 18–66, Van Nostrand
106. Perin, R. et al. (2011) A synaptic organizing principle for cortical neuronal groups. Proc. Natl. Acad. Sci. U. S. A. 108, 5419–5424
107. Song, S. et al. (2005) Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 3, e68
108. Achterberg, J. et al. (2022) Spatially-embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. bioRxiv Published online November 18, 2022. https://doi.org/10.1101/2022.11.17.516914
109. Lewis, J.D. et al. (2009) The relation between connection length and degree of connectivity in young adults: a DTI analysis. Cereb. Cortex 19, 554–562
110. Kaiser, M. and Hilgetag, C.C. (2006) Nonoptimal component placement, but short processing paths, due to long-distance projections in neural systems. PLoS Comput. Biol. 2, e95
111. Seguin, C. et al. (2022) Network communication models narrow the gap between the modular organization of structural and functional brain networks. NeuroImage 257, 119323
112. Seguin, C. et al. (2018) Navigation of brain networks. Proc. Natl. Acad. Sci. U. S. A. 115, 6297–6302
113. Andreotti, J. et al. (2014) Validation of network communicability metrics for the analysis of brain structural networks. PLoS ONE 9, e115503
114. Meunier, D. et al. (2010) Modular and hierarchically modular organization of brain networks. Front. Neurosci. 4, 200
115. Betzel, R.F. et al. (2017) The modular organization of human anatomical brain networks: accounting for the cost of wiring. Netw. Neurosci. 1, 42–68
116. Watts, D.J. and Strogatz, S.H. (1998) Collective dynamics of 'small-world' networks. Nature 393, 440–442
117. Bassett, D.S. and Bullmore, E.T. (2017) Small-world brain networks revisited. Neuroscientist 23, 499–516
118. Park, H.-J. and Friston, K. (2013) Structural and functional brain networks: from connections to cognition. Science 342, 1238411
119. Zamora-López, G. et al. (2010) Cortical hubs form a module for multisensory integration on top of the hierarchy of cortical networks. Front. Neuroinform. 4, 1
120. Sporns, O. and Zwi, J.D. (2004) The small world of the cerebral cortex. Neuroinformatics 2, 145–162
121. Tong, F. and Pratte, M.S. (2012) Decoding patterns of human brain activity. Annu. Rev. Psychol. 63, 483–509
122. Rigotti, M. et al. (2013) The importance of mixed selectivity in complex cognitive tasks. Nature 497, 585–590
123. Fusi, S. et al. (2016) Why neurons mix: high dimensionality for higher cognition. Curr. Opin. Neurobiol. 37, 66–74
124. Quartz, S.R. and Sejnowski, T.J. (1997) The neural basis of cognitive development: a constructivist manifesto. Behav. Brain Sci. 20, 537–556; discussion 556–596
125. Lebel, C. and Deoni, S. (2018) The development of brain white matter microstructure. Neuroimage 182, 207–218
126. Bonetto, G. et al. (2021) Myelin: a gatekeeper of activity-dependent circuit plasticity? Science 374, eaba6905
127. Lambon Ralph, M.A. (2014) Neurocognitive insights on conceptual knowledge and its breakdown. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 369, 20120392
128. Bazzi, M. et al. (2020) A framework for the construction of generative models for mesoscale structure in multilayer networks. Phys. Rev. Research 2, 023100
129. Betzel, R.F. and Bassett, D.S. (2017) Multi-scale brain networks. NeuroImage 160, 73–83

130. Cisek, P. et al. (2022) Neuroscience needs evolution. Philos. Trans. R. Soc. 377, 20200518
131. Finlay, B.L. et al. (2001) Developmental structure in brain evolution. Behav. Brain Sci. 24, 263–278
132. Vogt, G. (2015) Stochastic developmental variation, an epigenetic source of phenotypic diversity with far-reaching biological consequences. J. Biosci. 40, 159–204
133. Hiesinger, P.R. and Hassan, B.A. (2018) The evolution of variability and robustness in neural development. Trends Neurosci. 41, 577–586
134. Jablonka, E. and Lamb, M.J. (2005) Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life, MIT Press
135. Finlay, B.L. and Uchiyama, R. (2015) Developmental mechanisms channeling cortical evolution. Trends Neurosci. 38, 69–76
136. Thomas, B.T. et al. (2015) Adaptive synaptogenesis constructs neural codes that benefit discrimination. PLoS Comput. Biol. 11, e1004299
137. Lord, L.-D. et al. (2017) Understanding principles of integration and segregation using whole-brain computational connectomics: implications for neuropsychiatric disorders. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 375, 20160283
138. Kilfoil, M.L. et al. (2009) Stochastic variation: from single cells to superorganisms. HFSP J. 3, 379–385
139. Zernicka-Goetz, M. and Huang, S. (2010) Stochasticity versus determinism in development: a false dichotomy? Nat. Rev. Genet. 11, 743–744
140. Andrews, J.L. et al. (2021) Navigating the social environment in adolescence: the role of social brain development. Biol. Psychiatry 89, 109–118
141. McEwen, B.S. (2012) Brain on stress: how the social environment gets under the skin. Proc. Natl. Acad. Sci. U. S. A. 109, 17180–17185
142. Rumelhart, D.E. et al. (1986) Learning representations by back-propagating errors. Nature 323, 533–536
143. LeCun, Y. et al. (1998) Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324
144. Hochreiter, S. and Schmidhuber, J. (1997) Long short-term memory. Neural Comput. 9, 1735–1780
145. Albert, R. and Barabási, A.-L. (2002) Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47–97
146. Erdős, P. and Rényi, A. (1959) On random graphs. I. Publ. Math. 6, 290–297
147. Holland, P.W. et al. (1983) Stochastic blockmodels: first steps. Soc. Networks 5, 109–137
148. Hinton, G.E. (2007) Learning multiple layers of representation. Trends Cogn. Sci. 11, 428–434
149. Friston, K. et al. (2006) A free energy principle for the brain. J. Physiol. Paris 100, 70–87
150. Nicosia, V. et al. (2013) Phase transition in the economically modeled growth of a cellular nervous system. Proc. Natl. Acad. Sci. U. S. A. 110, 7880–7885
151. Tooley, U.A. et al. (2021) Environmental influences on the pace of brain development. Nat. Rev. Neurosci. 22, 372–384

Trends in Cognitive Sciences, Month 2023, Vol. xx, No. xx 19
