

Springer Nature 2021 LATEX template

Neural manifold analysis of brain circuit dynamics in health and disease

arXiv:2203.11874v1 [q-bio.NC] 22 Mar 2022

Rufus Mitchell-Heggs 1,2*, Seigfred Prado 1,3†, Giuseppe P. Gava 1†, Mary Ann Go 1 and Simon R. Schultz 1*

1 Department of Bioengineering and Centre for Neurotechnology, Imperial College London, London, SW7 2AZ, United Kingdom.
2 Centre for Discovery Brain Sciences, The University of Edinburgh, Edinburgh, EH8 9XD, United Kingdom.
3 Department of Electronics Engineering, University of Santo Tomas, Manila, Philippines.

*Corresponding author(s). E-mail(s): rmitch11@ed.ac.uk; s.schultz@imperial.ac.uk
† These authors contributed equally to this work.

Abstract
Recent developments in experimental neuroscience make it possible to simultaneously record the activity of thousands of neurons. However, the development of analysis approaches for such large-scale neural recordings has been slower than that of approaches applicable to single-cell experiments. One approach
that has gained recent popularity is neural manifold learning. This approach takes advantage of the
fact that often, even though neural datasets may be very high dimensional, the dynamics of neural
activity tends to traverse a much lower-dimensional space. The topological structures formed by these
low-dimensional neural subspaces are referred to as “neural manifolds”, and may potentially provide
insight linking neural circuit dynamics with cognitive function and behavioural performance. In this
paper we review a number of linear and non-linear approaches to neural manifold learning, including
principal component analysis (PCA), multi-dimensional scaling (MDS), Isomap, locally linear embed-
ding (LLE), Laplacian eigenmaps (LEM), t-SNE, and uniform manifold approximation and projection
(UMAP). We set these methods within a common mathematical framework, and compare their advan-
tages and disadvantages with respect to their use for neural data analysis. We apply them to a number
of datasets from published literature, comparing the manifolds that result from their application to
hippocampal place cells, motor cortical neurons during a reaching task, and prefrontal cortical neurons
during a multi-behaviour task. We find that in many circumstances linear algorithms produce similar results to non-linear methods, although where behavioural complexity is greater, nonlinear methods tend to find lower-dimensional manifolds, at the possible expense of interpretability. We demonstrate that these methods are applicable to the study of neurological disorders
through simulation of a mouse model of Alzheimer’s Disease, and speculate that neural manifold anal-
ysis may help us to understand the circuit-level consequences of molecular and cellular neuropathology.

Keywords: Neural Manifolds, Manifold Learning, Neural Population Analysis, Dimensionality Reduction,
Neurological Disorders


1 Introduction

While the investigation of single neurons has undoubtedly told us much about brain function, it is uncertain whether individual neuron properties alone are sufficient for understanding the neurobiological basis of behaviour (Pang et al, 2016). In some cases, trial-averaging of single-neuron responses may lead to confusing or misleading interpretation of true biological mechanisms (Sanger and Kalaska, 2014; Cunningham and Yu, 2014). Additionally, single-neuron activities studied in higher-level brain areas involved in cognitive tasks (Machens et al, 2010; Laurent, 2002; Churchland et al, 2010) are highly heterogeneous both across neurons and across experimental conditions, even for nominally identical trials. And finally, it may well be that task-relevant information is represented in patterns of activity across multiple neurons, above and beyond what is observable at the single neuron level. Unfortunately, characterising such patterns, in the worst case, may require measurement of an exponential number of parameters (the "curse of dimensionality").

However, it appears that in many circumstances, patterns of neural population activity may be described in terms of far fewer population-level features than either the number of possible patterns, or even the number of neurons observed (Churchland et al, 2012; Mante et al, 2013; Chaudhuri et al, 2019; Gallego et al, 2020; Nieh et al, 2021; Stringer et al, 2019). This underpins a paradigm shift in the study of neural systems from single-neuron analysis to a more comprehensive analysis that integrates single-neuron models with neural population analysis.

If, as appears to be the case, the spatiotemporal dynamics of brain activity is low dimensional, or at least much lower dimensional than the pattern space, then it stands to reason that such activity can be characterised without falling afoul of the curse of dimensionality, and on reasonable experimental timescales. Indeed, in recent years, numerous techniques have been developed to do just that. For instance, classification algorithms have been applied to neuronal ensembles to predict aspects of behaviour (Rigotti et al, 2013; Fusi et al, 2016; Rust, 2014; Raposo et al, 2014). One problem with this is that the common practice is to identify "neuronal ensembles" by grouping together neurons with sufficiently highly correlated activity during the same behaviours or in response to the same stimuli. This ignores information that is transmitted collectively and might lead to (i) falsely concluding that a group of neurons does not encode a behavioural variable (when in fact they encode it collectively), (ii) incorrectly estimating the amount of information that is being encoded, and/or (iii) missing important mechanisms that contribute to encoding (Frost et al, 2021).

Alternatively, artificial neural networks have been increasingly employed, either by (i) using a goal-driven neural network and using the embedding to compare and predict the population activity in different brain regions (Russo et al, 2018; Mante et al, 2013; Jazayeri and Ostojic, 2021), or (ii) modelling the activity of individual neurons as emanating from a feedforward or recurrent neural network architecture (Elsayed et al, 2016; Rajan et al, 2016). Whilst these methods can present powerful ways of inferring neural states, some issues have been raised about their biological interpretability (Pandarinath et al, 2018).

Addressing these shortcomings, a range of techniques that come under the umbrella of neural manifold learning (NML) has been developed. The neural manifold is, in brief, a low-dimensional surface within the higher-dimensional space of neural activity which explains the majority of the variance of the neural dynamics (Figure 1). Neural manifold learning algorithms are algorithms for efficiently extracting a description of such a surface from a sample of multivariate neural activity. Here we review how neural manifold learning can be employed to extract low-dimensional descriptions of the internal structure of ensemble activity patterns, and used to obtain insight into how interconnected populations of neurons represent and process information. Such techniques have been applied to neural population activity in a variety of different animals, brain regions and during distinct network states. In this paper we compare a variety of neural manifold learning algorithms on several datasets from multi-electrode array electrophysiology and multiphoton calcium imaging experiments, to assess their relative merits in gaining insight into the recorded neural dynamics and the computations they may underpin.

We envisage that neural manifold learning may not only facilitate accurate quantitative characterizations of neural dynamics in healthy states, but also in complex nervous system disorders. Anatomical and functional brain networks may be constructed and analysed from neuroimaging data to describe and predict the clinical syndromes that result from neuropathology. NML can offer theoretical insight into progressive neurodegeneration, neuropsychological dysfunction, and potential anatomical targets for different therapeutic interventions. For example, investigating neural populations in the medial prefrontal cortex that are active during social situations while encoding social decisions may enable hypothesis testing for disorders such as Autistic Spectrum Disorder (ASD) or Schizophrenia (Irimia et al, 2018; Kingsbury et al, 2019).

In this review, we introduce NML as a methodology for neural population analysis and showcase its application to the analysis of different types of neural data with differing behavioural complexity, both in healthy and disease model states. For selected algorithms, we visualise how they represent neural activity in lower dimensional embeddings and evaluate them on their ability to discriminate between neural states and reconstruct neural data. We aim to offer the reader the prospect of selecting with confidence the type of NML method that works best for a particular type of neural data, as well as an appreciation of how NML can be leveraged as a powerful tool in deciphering more precisely the basis of different cognitive impairments and brain disorders.

2 Neural Manifold Learning

Neural manifold learning (NML) describes a subset of machine learning algorithms that take a high dimensional neural activity matrix X, comprised of the activity of N neurons at T time points, and embed it into a lower dimensional matrix Y while preserving some aspects of the information content of the original matrix X, e.g. mapping nearby points in the neural activity space X to nearby points in Y (see Figure 1) (Cunningham and Yu, 2014; Churchland et al, 2012; Meshulam et al, 2017; Mante et al, 2013; Harvey et al, 2012; Wu et al, 2017). When projected into this lower dimensional space, the set of neural activity patterns observed is typically constrained within a topological structure, or manifold, Y, which might have a globally curved geometry but a locally linear one. This has led to the popular descriptive term "the neural manifold". For instance, if the reduced dimensionality embedding matrix Y is three-dimensional (as often depicted on a 2D page for convenience of illustration), the neural manifold Y might describe a closed surface within that 3D space. This approach has found widespread recent use across neuroscientific studies (Figure 2), including for understanding neural mechanisms during speech (Bouchard et al, 2013), decision-making in prefrontal cortex (Mante et al, 2013; Harvey et al, 2012; Briggman et al, 2005; Stokes et al, 2013), and movement preparation and execution in the motor cortices (Churchland et al, 2012; Kaufman et al, 2014; Yu et al, 2009; Feulner and Clopath, 2021; Gallego et al, 2017).

Fig. 1 Schematic showing a typical example of how a manifold learning algorithm may reduce the dimensionality of a
high dimensional neural population time series to produce a more interpretable low dimensional representation. A high
dimensional neural population activity matrix, X, with N neurons and T time points, is projected into a lower dimensional
manifold space and the trajectory visualised in the space formed by the first 3 dimensions, c1, c2 and c3.
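The pipeline sketched in Figure 1 can be illustrated in a few lines of Python. The example below is purely illustrative and uses synthetic data (the latent trajectory, neuron count, loading matrix and noise level are arbitrary choices, not taken from any dataset analysed in this paper): a low-dimensional latent trajectory drives N neurons, and a manifold learning step recovers a k-dimensional embedding Y from the N × T activity matrix X.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Latent trajectory: a 2D circular path sampled at T time points,
# standing in for e.g. position on a circular track.
T = 500
theta = np.linspace(0, 4 * np.pi, T)
latents = np.stack([np.cos(theta), np.sin(theta)])          # (2, T)

# Lift into an N-dimensional "neural" space with a random loading
# matrix plus additive noise: X is the N x T population activity matrix.
N = 100
loadings = rng.normal(size=(N, 2))
X = loadings @ latents + 0.1 * rng.normal(size=(N, T))      # (N, T)

# Manifold learning step: embed X into a k-dimensional matrix Y.
# (PCA here; any of the NML methods reviewed below fits this slot.)
k = 3
Y = PCA(n_components=k).fit_transform(X.T).T                # (k, T)

print(X.shape, Y.shape)  # (100, 500) (3, 500)
```

Rows of Y are the embedding dimensions c1, c2, c3 of Figure 1; plotting the columns of Y against each other visualises the trajectory in manifold space.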

Manifold learning algorithms

There are several types of manifold learning algorithms that can generally be divided into linear and nonlinear approaches. Although they have similar goals, they may differ in the way they transform the data and in the type of statistical structures or properties they capture and preserve. It is important to select the type of method most suitable for the type of neural data being analysed, as this may have significant impact on the resulting scientific interpretation.

In this section, we describe some of the more common manifold learning methods in use in the neuroscience literature. To make a fair and informative comparison of each algorithm, we implemented them and applied them each to a number of different neural datasets, as described in Section 3. In order to facilitate comparison of the assumptions made by the different algorithms, we have attempted to adopt (where possible) a common mathematical terminology throughout; this is summarised in Table 1.

Notation   Description
N          Number of neurons
T          Number of time samples
t          Sample timestamp
X          Population activity matrix
xt         Network activity state at timestamp t
k          Number of dimensions in the embedding
Y          Manifold embedding matrix
Y          Manifold: a topological structure within Y
D          Dissimilarity matrix (MDS)
G          Graph of population activity
Λ          Graph Laplacian (LEM)
∆          Diagonal matrix (LEM)
Ω          Adjacency matrix (LEM)

Table 1 Summary of mathematical notation used throughout this manuscript.
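Most of the algorithms compared in this section are available in scikit-learn behind a common fit_transform interface, which makes a side-by-side comparison straightforward. The sketch below is our illustration, not the authors' code: the synthetic data and all parameter choices (neighbourhood sizes, perplexity, etc.) are arbitrary, and UMAP, which lives in the separate umap-learn package, is left commented out.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import (MDS, TSNE, Isomap,
                              LocallyLinearEmbedding, SpectralEmbedding)

rng = np.random.default_rng(1)

# Synthetic stand-in for a neural recording: T time samples of N neurons
# driven by a 1D circular latent variable (rows are time points, as
# scikit-learn expects).
T, N, k = 400, 60, 2
phase = np.linspace(0, 2 * np.pi, T)
latents = np.stack([np.cos(phase), np.sin(phase)], axis=1)   # (T, 2)
X = latents @ rng.normal(size=(2, N)) + 0.05 * rng.normal(size=(T, N))

methods = {
    "PCA":    PCA(n_components=k),
    "MDS":    MDS(n_components=k, dissimilarity="euclidean"),
    "Isomap": Isomap(n_components=k, n_neighbors=15),
    "MLLE":   LocallyLinearEmbedding(n_components=k, n_neighbors=15,
                                     method="modified"),
    "LEM":    SpectralEmbedding(n_components=k, n_neighbors=15),
    "t-SNE":  TSNE(n_components=k, perplexity=30, init="pca"),
    # UMAP requires the separate umap-learn package:
    # "UMAP": umap.UMAP(n_components=k, n_neighbors=15),
}

embeddings = {name: m.fit_transform(X) for name, m in methods.items()}
for name, Y in embeddings.items():
    print(f"{name:7s} -> {Y.shape}")   # each embedding is (T, k)
```

Because every method maps the same (T, N) matrix to a (T, k) embedding, downstream evaluation (decoding, reconstruction, intrinsic dimensionality) can be run identically on each output.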

Fig. 2 Neural manifolds across different species and brain regions. (a) Population activity in the rat head-direction circuit
(Chaudhuri et al, 2019). (i) During waking, the network activity directly maps onto the head orientation of the animal. (ii)
Comparison between population activity during waking (dark blue) and nREM sleep (mustard yellow); the latter does not
follow the same 1D waking dynamics. (b) Population activity in the mouse hippocampus during an evidence accumulation
and decision making task in virtual reality (Nieh et al, 2021). Task-relevant variables such as position (i) and accumulated
evidence (ii) are encoded in the manifold. (c) The motor cortical population activity underpinning a reaching task in
monkeys is stable over days and years (Gallego et al, 2020). (d) Prefrontal cortical population activity in macaque monkeys
during a visual discrimination task spans a low-dimensional space. Task-relevant variables such as the dots’ direction and
strength of motion, colour, and the monkey’s choice are encoded in the manifold (Mante et al, 2013). (e) Population activity
in the mouse primary visual cortex in response to gratings of different orientations, indicated by color (Stringer et al, 2021).
The panel is adapted from Jazayeri and Ostojic (2021).

Linear methods

Linear manifold learning is accomplished by performing linear transformations of the observed variables that preserve certain optimality properties, which yield low-dimensional embeddings that are easy to interpret biologically. Some of the most common linear manifold learning techniques are discussed below.

2.1 Principal Component Analysis (PCA)

One of the most common linear manifold learning methods is Principal Component Analysis (PCA) (Jolliffe, 2002; Jackson, 2005; Ivosev et al, 2008). It reduces the dimensionality of large datasets while preserving as much variability (or statistical information) as possible. To obtain a neural manifold Y, PCA performs an eigendecomposition on the covariance matrix of the neural activity matrix X in order to find uncorrelated latent variables that are constructed as linear combinations of the individual neurons' contributions, while successively maximising the variance. The computed eigenvectors, or principal components (PCs), represent the directions of the axes that capture the most variance, and the corresponding eigenvalues yield the amount of variance carried by each PC.

Although PCA has been used to great effect to reduce the dimensionality of high dimensional neural data (Churchland et al, 2010; Mazor and Laurent, 2005; Gao and Ganguli, 2015; Ahrens et al, 2012), its caveat is that it captures all types of variance in the data, including spiking variability, which may obscure the interpretation of latent variables. For example, neurons with higher firing rates typically exhibit higher variance (spike count variance being proportional to mean spike count for cortical neurons; Tolhurst et al, 1983), and therefore may skew the orthogonal directions found by PCA by accounting mostly for highly active neurons. Additionally, cortical neurons respond with variable strength to repeated presentations of identical stimuli (Tolhurst et al, 1983; Shadlen and Newsome, 1998; Cohen and Kohn, 2011). This variability is often shared among neurons, and such correlations in trial-to-trial responses can have a substantial effect on the amount of information encoded by a neuronal population. To minimise the effects of spiking variability, trial averaging or temporal smoothing of spike counts is usually done prior to performing PCA. However, this may not always be applicable to all types of analyses.

2.2 Multidimensional Scaling (MDS)

Classical multidimensional scaling is another linear dimensionality reduction technique that aims to find a low-dimensional map of a number of objects (e.g. neural states) while preserving as much as possible the pairwise distances between them in the high dimensional space (Kruskal and Wish, 1978; Venna and Kaski, 2006; Yin, 2008; France and Carroll, 2010).

In the case of neural data, given an N × T activity matrix, X, a T × T distance matrix, D, is first obtained by measuring the distance between the population activity vectors for all t using a dissimilarity metric (Krauss et al, 2018). Examples of such metrics include both the Euclidean distance and the cosine dissimilarity. The latter (which we will employ for the MDS embeddings shown throughout this paper) is, for a pair of vectors a, b separated in angle by θ,

Da,b = 1 − cos(θ) = 1 − (a · b) / (‖a‖‖b‖).

The dissimilarity matrix D is formed from the cosine dissimilarities between the neural activity patterns ct at each time t for all pairs of times measured. A lower-dimensional mapping, Y ∈ ℝ^(k×T), where k ≪ N, is then found by minimising a loss function, called the strain, so that the mapped inter-point distances are as close as possible to the original distances in D. From the eigendecomposition of the distance matrix D, the MDS components (i.e. dimensions) are revealed by the eigenvectors, which are a linear combination of the distances across the T population activity vectors, while their respective eigenvalues report the amount of variance explained.

Multidimensional scaling has been used to study changes in neural population dynamics in both large-scale neural simulations (Phoka et al, 2012) and in neural population recordings (Luczak et al, 2009). It has been applied for characterising glomerular activity across the olfactory bulb in

predicting odorant quality perceptions (Youngentob et al, 2006), integrative cerebral cortical mechanisms during viewing (Tzagarakis et al, 2009), neuroplasticity in the processing of pitch dimensions (Chandrasekaran et al, 2007), emotional responses to music in patients with temporal lobe lesions (Dellacherie et al, 2011), and structural brain abnormalities associated with autism spectrum disorder (Irimia et al, 2018).

Nonlinear methods

The algorithms described above can only extract linear (or approximately so) structure. Non-linear techniques, on the other hand, aim to uncover the broader non-linear structure of the neural population activity matrix X. These insights can come at the expense of weaker biological interpretation, as the discovered manifold is not given by a linear combination of some observed variable (e.g. individual neurons). Non-linear algorithms often attempt to approximate the true topology of the manifold Y within the reduced dimensionality representation Y by finding population-wide variables that discard the relationship between faraway points (or neural states) on X and focus instead on conserving the distances between neighbours.

2.3 Isomap

One of the most commonly used nonlinear manifold learning algorithms is Isomap (Tenenbaum et al, 2000). It aims to uncover the broader non-linear structure of the neural population activity, X, by approximating the true topology of the neural manifold, Y. To do so, Isomap first embeds X into a weighted graph G, whose nodes represent the activity of the neuronal population at a given time xt, and the edges between them represent links to the network states xi, xj, ... that are the most similar to xt, i.e., its neighbours.

The k-nearest neighbours algorithm is usually used to estimate the neighbours for all network states x1, ..., xT. These neighbouring relations are then encoded in a weighted graph with edges dG(i, j) between neighbouring states i, j that depend on the distance metric dX used. G can then be used to approximate the true geodesic distances on the manifold dY(i, j) between any two points i, j (i.e. network states xi, xj) by measuring their shortest path length dG(i, j) in the weighted graph G using an algorithm such as Dijkstra's (Dijkstra, 1959).

MDS is then applied to the matrix of shortest path lengths within the graph DG = {dG(i, j)} to yield an embedding of the data in a k-dimensional Euclidean space Y that best preserves the manifold's estimated geometry. The quality of the reconstructed manifold Y depends greatly on the size of the neighbourhood search and the distance metric used to build G. Isomap is a conservative approach that seems well suited to tackle the non-linearity inherent to neural dynamics, and it has in fact been used in a variety of studies, even just for visualisation purposes (Mimica et al, 2018; Chaudhuri et al, 2019; Sun et al, 2019).

2.4 Locally Linear Embedding (LLE)

LLE is another non-linear manifold learning technique that attempts to preserve the local geometry of the high-dimensional data X, by separately analysing the neighbourhood of each network state xt, assuming it to be locally linear even if the global manifold structure is non-linear (Roweis and Saul, 2000). The local neighbourhood of each network state is estimated using the steps described in Section 2.3 and is connected together to form a weighted graph G. The location of any node i in the graph corresponds to the network state xi, which can then be described as a linear combination of the locations of its neighbouring nodes xj, xk, .... These contributions are summarised by a set of weights W, which are optimised to minimise the reconstruction error between the high-dimensional network state xi and the neighbourhood linear estimation. The weight wij summarises the contribution of the jth network state to the reconstruction of the ith state. To map the high-dimensional dataset X onto a lower dimensional one Y, the same local geometry characterised by W is employed to represent the low-dimensional network state yi as a function of its neighbours yj, yk, ....

A significant extension of LLE was introduced that makes use of multiple linearly independent weight vectors for each neighbourhood. This leads to a Modified LLE (MLLE) algorithm that is much more stable than the original (Zhang and Wang,

2007). Unlike Isomap, LLE preserves only the local geometry of the high-dimensional data X, represented by the neighbourhood relationship, so that short high dimensional distances are mapped to short distances on the low dimensional projection. Isomap, instead, aims to preserve the geometry of the data at all scales, long distances included, possibly introducing distortions if the topology of the manifold is not estimated well. For neural data, though, Isomap has generally been preferred for its theoretical underpinnings and more intuitive approach.

2.5 Laplacian Eigenmaps (LEM)

LEM, also referred to as Spectral Embedding, is another non-linear technique similar to LLE (Belkin, 2003). The algorithm is geometrically motivated, as it exploits the properties of the neighbouring graph G generated from the high dimensional data X, as in Section 2.3, to obtain a lower dimensional embedding Y that optimally preserves the neighbourhood information of X. In the graph G, any two connected nodes (network states) i and j are connected by a weighted edge using a heat kernel with decay constant t, parameterising the exponential relationship of the weights with respect to the distance ‖xi − xj‖ between the nodes in the high dimensional space. Otherwise t is set to ∞, leading to a binary graph. The resulting graph edges form the adjacency matrix Ω that is used, together with the diagonal matrix ∆ containing the degree of each node of G, to obtain the graph Laplacian Λ = ∆ − Ω, which yields many spectral properties of G and theoretically ensures an optimal embedding. The k dimensions of the manifold embedding Y are the eigenvectors of Λ. Similar to LLE, Laplacian eigenmaps preserve only the local geometry of the neural population activity X and are therefore more robust to the construction of G. Indeed, LEM has been successfully used to unveil behaviourally relevant neural dynamics (Rubin et al, 2019; Sun et al, 2019).

2.6 t-distributed Stochastic Neighbour Embedding (t-SNE)

t-SNE is another non-linear technique that creates a single map that reveals structure at many different scales (Van der Maaten and Hinton, 2008). It is a variation of Stochastic Neighbour Embedding (SNE) that aims to reduce the tendency to crowd points together in the centre of the map. SNE converts the high dimensional Euclidean distances between data points into conditional probabilities that represent similarities, and finds maps with a better global organisation. This method has been applied in multiple neuroscientific settings (Dimitriadis et al, 2018; Panta et al, 2016) and usually considers up to three embedding dimensions, as this is the limit for exploiting the Barnes-Hut approximation, which reduces the computational cost from O(N^2) to O(N log N) and is often the choice in many applications (Van Der Maaten, 2014).

2.7 Uniform Manifold Approximation and Projection (UMAP)

UMAP is a nonlinear manifold learning technique that is constructed from a theoretical framework based on topological data analysis, Riemannian geometry and algebraic topology (McInnes et al, 2018). It has also been used on neural data, both as an NML method (Tombaz et al, 2020) and for broader dimensionality reduction purposes (Lee et al, 2021). It builds upon the mathematical foundations of LEM, Isomap and other nonlinear manifold learning techniques in that it uses a k-nearest-neighbours weighted graph representation of the data. By using manifold approximation and patching together local fuzzy simplicial set representations, a topological representation of the high dimensional data is constructed. This layout of the data is then optimised in a low dimensional space to minimise the cross-entropy between the two topological representations. UMAP is competitive with t-SNE in terms of visualisation quality, arguably preserves more of the global dataset structure, and has superior run time performance. Furthermore, the algorithm is able to scale to significantly larger dataset sizes than are feasible for t-SNE.

UMAP and t-SNE have recently also been employed for visualising high dimensional genomics data, and some distortion issues have been raised (Chari et al, 2021). Although this problem is particularly apparent with t-SNE embeddings, which tend to completely disregard the global structure of the data, any dimensionality reduction to few dimensions will inherently

distort the topology of the high-dimensional data, as shown by the Johnson-Lindenstrauss Lemma (Johnson and Lindenstrauss, 1984). These criticisms have been made with particular reference to genomics datasets, which are intrinsically higher dimensional than the neural datasets to which manifold learning has been applied. In fact, we have considerable evidence that a relatively small number of latent dimensions is enough to drive neural population activity and explain its variability across a range of circumstances (Chaudhuri et al, 2019; Rubin et al, 2019; Nieh et al, 2021; Churchland et al, 2012; Gallego et al, 2020), even though there are instances where low-dimensional neural activity has been argued against (Stringer et al, 2019; Humphries, 2020).

NML algorithms for trial-structured datasets

The NML algorithms described and visualised up until now have been general use case algorithms, used to infer neural correlates from the data. However, in experiments where specific behaviours or decisions are time-locked and run across multiple trials, some of the NML methods described above have been augmented and optimised. These include demixed Principal Component Analysis (dPCA) (Kobak et al, 2016), Tensor Component Analysis (TCA) (Cohen and Maunsell, 2010; Niell and Stryker, 2010; Peters et al, 2014; Driscoll et al, 2017), Cross-Validated PCA (cvPCA) (Stringer et al, 2019) and model-based Targeted Dimensionality Reduction (mTDR) (Aoi and Pillow, 2018). These NML algorithms exploit the trial structure of an experiment to discriminate signal from trial-to-trial variability or noise, enabling the experimenter to identify the principal components that maximally correspond to a stimulus or action.

Other NML Algorithms

Manifold Inference from Neural Dynamics (MIND) is a recently developed NML algorithm that aims to characterize the neural population activity as a trajectory on a nonlinear manifold, defined by possible network states and the temporal dynamics between them (Low et al, 2018; Nieh et al, 2021). Another powerful NML algorithm is Spline Parameterization for Unsupervised Decoding (SPUD), which stems from a rigorous analysis of the manifold topological structure (Chaudhuri et al, 2019). The field is currently very active, and we are sure that by the time of publication, additional algorithms will exist.

3 Manifold learning for the analysis of large-scale neural datasets

To demonstrate how neural manifold learning can be used in the analysis of large-scale neural datasets, we apply a number of linear and non-linear NML algorithms (PCA, MDS, LEM, LLE, Isomap, t-SNE and UMAP) to several datasets. The datasets were chosen to cover a number of different brain regions and a range of behavioural complexity. They consist of (i) two-photon calcium imaging of hippocampal subfield CA1 in a mouse running along a circular track (Section 3.1), taken from Go et al (2021); (ii) multi-electrode array extracellular electrophysiological recordings from the motor cortex of a macaque performing a radial arm goal-directed reach task (Section 3.2), from Yu et al (2007); and (iii) single-photon "mini-scope" calcium imaging data recorded from the prefrontal cortex of a mouse under conditions where multiple task-relevant behavioural variables were monitored (Section 3.3), from Rubin et al (2019). Lastly, we illustrate how manifold learning can be employed to characterise brain dynamics in disease states such as Alzheimer's disease by applying these techniques to data simulated to reproduce basic aspects of dataset (i), augmented to incorporate pathology (Section 3.4).

Decoding from neural manifolds

To compare NML algorithms, we evaluated the resulting manifolds according to behavioural decoding (reconstruction or classification) performance, ability to encode the high dimensional activity (i.e. reconstruction score), and intrinsic dimensionality. For hippocampal CA1 manifolds obtained by any of the NML methods, we computed decoding accuracy for the behavioral variable(s) as a function of the number of manifold embedding dimensions using an Optimal Linear Estimator (OLE) (Warland et al, 1997). This allows assessment of the number of dimensions necessary to encode the behavioral variable.
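In essence, an optimal linear estimator amounts to least-squares regression from the first d embedding dimensions onto the behavioural variable; sweeping d then traces out decoding accuracy versus dimensionality. The sketch below is our own simplified illustration on synthetic data (a single held-out split stands in for full cross-validation, and all variable choices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: a T x k manifold embedding whose first two
# dimensions linearly encode a behavioural variable (e.g. position).
T, k = 1000, 10
Y = rng.normal(size=(T, k))
position = 2.0 * Y[:, 0] - 1.0 * Y[:, 1] + 0.3 * rng.normal(size=T)

train, test = np.arange(0, 900), np.arange(900, T)

def ole_decode(d):
    """Least-squares linear decoder using the first d embedding dims."""
    A = np.column_stack([Y[train, :d], np.ones(len(train))])
    w, *_ = np.linalg.lstsq(A, position[train], rcond=None)
    A_test = np.column_stack([Y[test, :d], np.ones(len(test))])
    pred = A_test @ w
    return np.corrcoef(pred, position[test])[0, 1]   # Pearson r

for d in (1, 2, 5, 10):
    print(f"d={d:2d}  r={ole_decode(d):.3f}")
# Accuracy saturates once d covers the informative dimensions (here d >= 2).
```

Plotting r against d for each NML method gives the dimensionality-versus-decoding curves described in the text.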

We used a 10-fold cross-validation approach, i.e., training the decoder on 90% of the data and testing it on the remaining 10%. Decoding performance is calculated as the Pearson correlation coefficient between the actual and reconstructed behavioural variable, i.e. the mouse position, for the test data. To assess the neural manifold information provided about animal behaviour in the other two datasets, we built a logistic regression classifier (Hosmer et al, 1989); we evaluate its performance using the F1 score as a function of the number of manifold embedding dimensions used. The F1 score is defined as the weighted average of the precision (i.e., the percentage of the results which are relevant) and recall (i.e., the percentage of the total relevant results correctly classified by the algorithm), and ranges between 0 (worst) and 1 (best performance) (Blair, 1979).

Reconstruction of neural activity from a low dimensional embedding

Another way to evaluate the fidelity of the manifold embedding is to attempt to reconstruct the high dimensional neural data from the low dimensional embedding. This tells us how much has been lost in the process of dimensionality reduction. To obtain such a reverse mapping, we employed the non-parametric regression method originally introduced for LLE (Low et al, 2018; Nieh et al, 2021). We then obtained the reconstruction similarity by computing the Pearson correlation coefficient between the reconstructed and the original neural activity. To perform an element-wise comparison, the N × T neural activity matrices were concatenated column-wise into a single vector and the correlation coefficient calculated. To obtain the neural activity reconstruction score we employed a 10-fold cross-validation strategy: using 90% of the data from each session to learn the reverse mapping, the reconstruction was then evaluated on the remaining 10% of the data. The final score was obtained by averaging across folds.

Intrinsic manifold dimensionality

The last measure we use to characterise neural manifolds is their intrinsic dimensionality, which should account for the number of independent (latent) variables necessary to describe the neural activity (Jazayeri and Ostojic, 2021). We acknowledge that any measure of dimensionality is strongly influenced by the timescale of the neural activity and by the size of the population recorded (Humphries, 2020), but in this work we use the intrinsic dimensionality measure only to compare the topologies captured by the different NML methods. To infer the manifold intrinsic dimensionality we employ a method related to the correlation dimension of a dataset (Grassberger and Procaccia, 1983). After applying NML to the original high dimensional neural activity, for any given point (i.e., neural state) in the low dimensional space, we calculated the number of neighbouring neural states within a sphere surrounding it, as a function of the sphere's radius. The slope of the number of neighbouring data points within a given radius on a log-log scale equals the exponent k in the power law N(r) = cr^k, where N is the number of neighbours and r is the sphere radius; k then provides an estimate of the intrinsic dimensionality of the neural activity. We fit the power law in the range between around 10 and 5000 neighbours (neural states), aiming to capture the relevant temporal scale for each task and related manifold (Rubin et al, 2019; Nieh et al, 2021). Note that we selected a particular range for each dataset, also depending on the number of time samples T available (details in respective figure captions).

3.1 Hippocampal neural manifolds

The hippocampus is well known to be involved in memory formation and spatio-contextual representations (Scoville and Milner, 1957; O'Keefe and Conway, 1978; Morris, 2006). NML has recently been applied to hippocampal neural activity by several authors, suggesting that rodent hippocampal activity encodes various contextual, task-relevant variables, displaying more complex information processing than spatial tuning alone (Rubin et al, 2019; Nieh et al, 2021). Here we reanalyse published data from a dataset comprising two-photon calcium imaging of hippocampal CA1 place cells, to which previously only MDS had been applied (Go et al, 2021), in order to compare manifolds extracted by different algorithms. We examine how different NML methods characterise the dynamics of hippocampal CA1 neurons
along trajectories in low-dimensional manifolds as they coordinate during the retrieval of spatial memories.

The data was recorded from a head-fixed mouse navigating a circular track in a chamber floating on an air-pressurised table under a two-photon microscope (Figure 3a). The mouse's position (Figure 3b) was simultaneously tracked using a magnetic tracker. The activity of 30 of 225 hippocampal CA1 cells recorded in the shown session is depicted in Figure 3c. 92 of the 225 cells were classified as place cells by Go et al (2021), and their normalised activity rate map, sorted according to place preference, is shown in Figure 3d. Employing the activity of all 225 cells (both place and non-place selective), both linear (Figure 3e) and nonlinear (Figure 3f) NML methods revealed a cyclic pattern of transitions between network states corresponding to different locations along the circular track. The manifold formed by the dynamics of neural activity as the mouse explored the full track forms a complete representation of the 2D structure of the track. We compared the algorithms in terms of decoding performance (Figure 3g), neural activity reconstruction score (Figure 3h) and intrinsic manifold dimensionality (Figure 3i). All algorithms performed similarly in terms of the metrics considered, yielding almost the best possible decoding performance with just one manifold dimension and the best possible reconstruction similarity with two dimensions. Moreover, all manifolds were found to have a similar intrinsic dimensionality of around 2.

In this example, the behavioural complexity is approximately one dimensional (i.e., the mouse running in a single direction along a circular track can be mapped onto the single circular variable θ) and all NML methods produce embeddings which allow high decoding performance, with each algorithm reaching near-maximum performance after incorporating only the first manifold dimension. This suggests that if the behavioural complexity is low and its information is broadly encoded within the neural population, any NML algorithm will yield broadly similar results. However, as we will see, this does not necessarily hold when the complexity of the behavioural task is increased. In terms of the ability to capture the neural activity variance, the neural activity reconstruction score suggests that the highly nonlinear t-SNE and UMAP algorithms yield more informative low-dimensional embeddings.

3.2 Motor cortical manifolds

NML and dimensionality reduction techniques have also been applied to neural activity within motor and pre-motor cortical areas, in particular to suggest that the high variability observed in single-neuron responses is disregarded when population dynamics are taken into account (Santhanam et al, 2009; Churchland and Shenoy, 2007; Churchland et al, 2012; Sussillo et al, 2015; Gallego et al, 2017). This approach has been the foundation of the "computation through dynamics" framework, which aims to characterise neural population dynamics as trajectories along manifolds (Vyas et al, 2020). Rotational-like dynamics of motor cortex neural activity have been observed in both human and non-human primates (Churchland et al, 2012; Pandarinath et al, 2015), with stability over long periods of time (Gallego et al, 2020). Importantly, by furthering the understanding of neural population dynamics and variability, crucial steps can be made in improving the performance of prosthetic devices that can be used to further enable those with nervous-system disease or injury in day-to-day tasks (Santhanam et al, 2009).

We applied NML to data from Yu et al (2007) and Chestek et al (2007), recorded in the caudal premotor cortex (PMd) and rostral primary motor cortex (M1) of rhesus macaques during a radial arm goal-directed reaching task (Figure 4a). Neural data was collected for 800 successful repeats, with 100 trials for each of the 8 reaching directions (Figure 4b). We used NML to analyse the neural activity during the 100 ms time window before the onset of the reaching movement and investigated its tuning with respect to the reaching direction (Santhanam et al, 2009). The manifold embeddings obtained using different NML methods, both linear (Figure 4e) and nonlinear (Figure 4f), revealed different types of structures, with data points clustering according to the monkey's target endpoint. All the NML algorithms tested revealed lower-dimensional structures that discriminate between each of the behavioural states along a single dimension. We used the pre-movement neural manifold to classify the
Fig. 3 Mouse hippocampal CA1 manifolds during spatial memory retrieval. Unless otherwise stated, panels adapted with permission from Go et al (2021). (a) Schematic of experimental setup: head-fixed mouse navigates a floating circular track under a two-photon microscope. Inset: close-up view of head-fixed mouse on floating track. (b) Spatial trajectories of the mouse, with θ (colour-coded) denoting position on the track. (c) Top: distance traversed around the track (cm); middle: Ca2+ transients for 30 (of 225) randomly selected cells; bottom: rastergram showing events detected from the above calcium transients. Blue dots indicate time of Ca2+ transient onset, with dot area showing relative event amplitude (normalised per cell). (d) Normalised neural activity rate map for 92 place cells, sorted by spatial tuning around the track. (e-f) Linear vs nonlinear manifold embeddings for all 225 cells (normalised). In each case the first three dimensions are visualised; insets for each: projections on pairs of components, C1 and C2 (upper left), C2 and C3 (upper middle), C1 and C3 (upper right). (e) Linear manifold embeddings: (i) PCA, (ii) MDS. (f) Nonlinear manifold embeddings: (i) Isomap, (ii) LLE, (iii) LEM, (iv) t-SNE, and (v) UMAP; (ii-v) were not reproduced from Go et al (2021). (g-i) Manifold evaluation metrics: (g) decoding performance, as used in Go et al (2021); (h) neural activity reconstruction score; (i) intrinsic manifold dimensionality.
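The intrinsic-dimensionality estimate based on the power law N(r) = cr^k can be sketched as follows. This is a simplified variant for illustration only: it fits the log-log slope of neighbour count against the mean radius enclosing that many neighbours, on synthetic points rather than a neural embedding, and the fitting range is arbitrary.

```python
import numpy as np

def intrinsic_dimensionality(X, n_min=10, n_max=500):
    """Correlation-dimension-style estimate: slope k of N(r) = c * r**k
    on log-log axes, where N(r) counts neighbouring states within
    radius r of a given state (rows of X are states)."""
    sq = (X ** 2).sum(axis=1)
    D = np.sqrt(np.clip(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0, None))
    D_sorted = np.sort(D, axis=1)[:, 1:]          # drop zero self-distances
    ns = np.arange(n_min, min(n_max, D_sorted.shape[1]))
    r = D_sorted[:, ns - 1].mean(axis=0)          # mean radius enclosing n neighbours
    k, _ = np.polyfit(np.log(r), np.log(ns), 1)   # log-log slope
    return k

# sanity check: uniform points on a 2-D plane embedded in 5-D give k near 2
rng = np.random.default_rng(1)
pts = rng.uniform(size=(1500, 2)) @ rng.normal(size=(2, 5))
dim = intrinsic_dimensionality(pts)
```

Boundary effects and the chosen neighbour range both bias the slope, which is why the text above restricts the fit to a dataset-specific range.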
behaviour into one of eight different goal directions. The two linear NML algorithms yielded the most behaviourally informative embedding, requiring only two dimensions to achieve best performance (Figure 4g). All algorithms performed equally in terms of neural activity reconstruction similarity, with only one dimension being necessary to reconstruct the original neural activity patterns (Figure 4h). The intrinsic dimensionality of the two linear embeddings, on the other hand, was the highest (Figure 4i). Non-linear NML algorithms extracted lower dimensional embeddings at the cost of encoding less behavioural information.

3.3 Prefrontal cortical manifolds

Increasingly, NML has been used to analyse neural circuits implicated in decision-making, such as the prefrontal cortex (PFC) (Kingsbury et al, 2019; Mante et al, 2013) and the anterior cingulate cortex (ACC) (Rubin et al, 2019). In these multi-function brain regions, neurons are widely known for their mixed selectivity, often capturing information relating to both stimulus features and decisions (Kobak et al, 2016; Rigotti et al, 2013; Fusi et al, 2016). This moves away from the notion of highly tuned single cells and gives rise to a dynamical, population-driven code (Pouget et al, 2000) ideally suited to NML methods. This was nicely demonstrated by Rubin et al (2019), who visualised and quantified how NML (in their case LEM) could be used to discriminate neural states arising from different brain regions (ACC and hippocampal area CA1) during identical tasks. The neural activity was recorded in freely behaving mice exploring a linear track and performing various behaviours such as drinking, running, turning, and rearing (Figure 5a-c). Building on their findings, we employed various alternative NML methods, both linear (Figure 5d) and nonlinear (Figure 5e), which revealed different neural data structures exhibiting clustering according to the animals' behaviour, with PCA and LLE producing the least clustered visualisations. Evaluation of NML behaviour classification performance (Figure 5f), reconstruction similarity (Figure 5g) and manifold intrinsic dimensionality (Figure 5h) revealed that although there was high variability in performance, non-linear algorithms generally out-performed linear ones. Notably, two non-linear NML algorithms, t-SNE and UMAP, performed the best in terms of behavioural classification and ability to reconstruct the high dimensional neural activity from the manifold embedding, with an intrinsic dimensionality between 2 and 3, suggesting that behaviour is the predominant type of information encoded by the ACC network during this task.

3.4 Analysis of neural manifolds in neurological disorders

NML potentially provides a valuable tool for understanding the biology of different brain disorders. As an example, NML can be used to characterise changes in neural manifolds for spatial memory during the progression of Alzheimer's disease (AD). In AD, the hippocampus and connected cortical structures are among the first areas to show pathophysiology. Hippocampal-dependent cognitive processes such as episodic memory are particularly and prominently affected at the behavioural level. However, it is not yet understood how the pathological markers of AD, such as amyloid-beta plaques and neurofibrillary tangles, lead to specific disruptions in the network-level information processing functions underpinning memory. While single cell properties are clearly affected, it is believed that network properties relating to the coordination of information throughout brain circuits involved in memory may be particularly at risk (Palop and Mucke, 2016). Neural manifold learning analysis methods may thus play a useful or even crucial role in disentangling the effects of these network alterations.

The formation of extracellular amyloid plaques causes aberrant excitability in surrounding neurons (Busche et al, 2008): cells close to amyloid plaques tend to become hyperactive, whereas those further away from the plaques tend to become hypoactive (or relatively silent). This aberrant neuronal excitability could potentially have a substantial effect on the neural manifold, and conversely the neural manifold may provide a way to assess the overall impact of network abnormalities. To demonstrate this, we generated simulated data based on Busche et al (2008): a population of hippocampal CA1 cells (with 50% normal cells, 21% hyperactive cells and 29% hypoactive cells) was simulated, modified from the experiment conducted in Go et al (2021). Hypersynchrony, which has also
Fig. 4 Manifold analysis of motor cortical dynamics during a reaching task. (a) Monkey reaching task experimental setup. (b) Arm trajectories and target reaching angles; schematic and coloured trajectories adapted from Yu et al (2007). (c-d) Neural data processing steps (adapted from Santhanam et al (2009)). (c) Top: X,Y position of monkey arm over eight individual trials, coloured according to (b). Bottom: corresponding raster plot of spike trains recorded simultaneously from 98 neural units, sampled every 1 ms. Units include both well-isolated single-neuron units (30% of all units) and multi-neuron units. (d) Population activity vector yielding the instantaneous firing rates of the same neural units in (c) for the 100 ms before the monkey reached out its arm (separated and coloured by trial type/arm direction as in (c)). (e-f) Linear vs nonlinear manifold embeddings (normalised, first three dimensions visualised). Insets: projections on pairs of components, C1 and C2 (upper left), C2 and C3 (upper middle), C1 and C3 (upper right). (e) Linear manifold embeddings: (i) PCA, (ii) MDS. (f) Nonlinear manifold embeddings: (i) Isomap, (ii) LLE, (iii) LEM, (iv) t-SNE, and (v) UMAP. (g-i) Manifold evaluation metrics: (g) decoding performance [r]; (h) neural activity reconstruction score [r]; (i) intrinsic manifold dimensionality.
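The reconstruction-score computation can be sketched as below. For illustration only, a cross-validated linear reverse mapping stands in for the non-parametric regression method used in the analyses, and the activity matrix is synthetic.

```python
import numpy as np

def reconstruction_similarity(Z, A, n_folds=10):
    """Cross-validated reconstruction of neural activity A (T x N) from
    the embedding Z (T x d), scored by the Pearson correlation between
    the flattened original and reconstructed activity matrices."""
    T = A.shape[0]
    X = np.column_stack([Z, np.ones(T)])   # embedding plus intercept
    A_hat = np.empty_like(A)
    for test in np.array_split(np.arange(T), n_folds):
        train = np.setdiff1d(np.arange(T), test)
        W, *_ = np.linalg.lstsq(X[train], A[train], rcond=None)  # reverse mapping
        A_hat[test] = X[test] @ W
    return np.corrcoef(A.ravel(), A_hat.ravel())[0, 1]

# toy data: 98-unit activity driven by a 3-D latent state plus noise
rng = np.random.default_rng(2)
Z = rng.normal(size=(800, 3))
A = Z @ rng.normal(size=(3, 98)) + 0.1 * rng.normal(size=(800, 98))
score = reconstruction_similarity(Z, A)
```

When the embedding captures the latent structure well, the score approaches 1; discarded dimensions show up as a lower correlation.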

been observed in AD (Palop and Mucke, 2016), was not simulated for the purposes of this exercise, to maintain simplicity. This disease model mimics the neural activity of hippocampal CA1 cells of an AD mouse (e.g. a mouse over-expressing amyloid precursor protein (Oakley et al, 2006)) running
Fig. 5 Manifold analysis of prefrontal cortical dynamics during behaviour. (a) Schematic of experimental setup from Rubin et al (2019): the mouse navigates a linear track with rewards at both ends. (b) Raster plot of the neural activity during multiple behaviours (adapted from Rubin et al (2019)): (1) drinking, (2) running, (3) turning, and (4) rearing. (c) Calcium event rate as the mouse runs to the right (left panel) and to the left (right panel) of the linear track (from Rubin et al (2019)). (d-e) Linear vs nonlinear manifold embeddings (normalised). In each case the first three dimensions are visualised; insets for each: projections on pairs of components, C1 and C2 (upper left), C2 and C3 (upper middle), C1 and C3 (upper right). (d) Linear manifold embeddings: (i) PCA, (ii) MDS. (e) Nonlinear manifold embeddings: (i) Isomap, (ii) LLE, (iii) LEM, (iv) t-SNE, and (v) UMAP. (f-h) Manifold evaluation metrics: (f) decoding performance [r]; (g) reconstruction score [r]; (h) intrinsic manifold dimensionality.
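A minimal implementation of the weighted F1 score used for the behavioural classification measure might look like the following (the example labels are arbitrary):

```python
import numpy as np

def weighted_f1(y_true, y_pred):
    """Weighted-average F1 across behaviour classes (0 = worst, 1 = best)."""
    score, total = 0.0, len(y_true)
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        precision = tp / max(np.sum(y_pred == c), 1)   # guard empty predictions
        recall = tp / np.sum(y_true == c)
        f1 = 0.0 if precision + recall == 0 else \
            2 * precision * recall / (precision + recall)
        score += f1 * np.sum(y_true == c) / total      # weight by class frequency
    return score

# decoded behaviour labels vs ground truth for four time bins
example = weighted_f1(np.array([0, 0, 1, 1]), np.array([0, 1, 1, 1]))
```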

on a circular track; we compare it with a simulated control model representing a healthy wild-type litter-mate (Figure 6a,b). Using MDS and UMAP as exemplars (Figure 6c and 6d, respectively), NML shows how the aberrant excitability of neurons distorts the neural manifolds for spatial memory in AD (bottom panel) with respect
to the healthy control model (top panel). While the topological structure remains relatively intact, the boundaries of the set become fuzzier, akin to adding noise to the system. To evaluate this further, we examined manifold parameterization measures including decoding performance, neural reconstruction similarity and manifold intrinsic dimensionality for both NML approaches (Figure 6e-m). In both cases, the AD model's manifold embeddings display worse performance than the control model, although UMAP was able to recover a more behaviourally informative and lower dimensional embedding than MDS.

To investigate the effects of the proportions of aberrant cells on the neural manifolds, we simulated the same models with varying percentages of normal, hyperactive and hypoactive cells. As expected, the hyperactivity of cells close to amyloid plaques (which tends to add noise) distorts the neural manifolds more than the hypoactive cells (which are more akin to having a decreased number of neurons in the manifold), resulting in less clustering of neural states in the manifold space. This is consistent with their effects on performance in terms of the given manifold measures (Figure 6f-g, 6i-j, 6l-m).

We suggest that this provides some basis for thinking that NML techniques have the potential to improve our understanding of network-level disruption in disease phenotypes.

4 Discussion

4.1 Comparison of neural manifold algorithms

With the increase in the numbers of simultaneously recorded neurons facilitated by emerging recording technologies, the demand for reliable and comprehensively informative population-level analysis methods also increases. Multivariate analyses of large neuronal population activity thus require more efficient computational models and algorithms that are geared towards finding the fewest possible population-level features that best describe the coordinated activity of such large neuronal populations. These features must be interpretable and open to parameterization that enables inherent neural mechanisms to be described (as discussed in the next section).

In this review, we give insights into how NML facilitates accurate quantitative characterisation of the dynamics of large neuronal populations in both healthy and disease states. However, assessing exactly how reliable and informative NML methods are remains a challenge and is open to interpretation. An ideal NML method must be able to adapt to different datasets that span functionally different brain regions, behavioural tasks, and numbers of neurons being analysed, while accommodating different noise levels, timescales, etc. In many studies, linear NML methods (such as PCA and classical MDS) have been preferred, and are widely used throughout the literature because of their computational simplicity and clear interpretation. However, these methods have not been sufficient when the geometry of the neural representation is highly non-linear, which might for instance be induced by high behavioural task complexity (Jazayeri and Ostojic, 2021) (see Figure 5). As shown in Rubin et al (2019), low signal-to-noise ratio (SNR) can also affect the quality of linear NML embeddings. They compared PCA and LEM as candidate linear and nonlinear algorithms and revealed that nonlinear NML algorithms require fewer neurons and lower firing rates for accurate estimation of the internal structure compared to their linear counterparts. To address these challenges, novel linear algorithms have been developed, such as dPCA (Kobak et al, 2016), cvPCA (Stringer et al, 2019), TCA (Williams et al, 2018), GPFA (Yu et al, 2009), and mTDR (Aoi and Pillow, 2018). These methods enable greater discrimination between specific clusters of neural activity, but they rely heavily on neural responses acquired during multiple trials of time-locked behaviour. These may not always be available, and whilst it is possible to tweak certain algorithmic parameters or fit them to manipulated forms of the data (e.g. through time-warping), the end result is an increase in computational complexity and possibly harder-to-interpret results.

Conversely, non-linear methods in general trend towards better generalisation across datasets. This is particularly true for UMAP and t-SNE (see Figures 3 and 4). To discriminate clusters of neural activity, UMAP and t-SNE have been shown to be the most powerful non-linear NML techniques. However, they might not
Fig. 6 Changes in neural manifolds during neurodegenerative disease states. (a) Example calcium traces for a simulated mouse traversing a circular track (simulation of the real experiment in Figure 3). Top: 20 of 200 "normal" CA1 place cells (magenta traces on top show mouse trajectory). Bottom: 20 of 200 AD-affected place cells that fall into three categories: normal (black), hyperactive (red) and hypoactive (blue). (b) Normalised activity rate maps for 200 healthy and 142 (out of 200) AD hippocampal CA1 place and hyperactive cells (respectively), sorted by location of maximal activity. (c-d) 3-dimensional projection in the manifold space using (c) MDS and (d) UMAP for both healthy control and AD simulated models. Inset: 2D projections onto pairs of dimensions. (e-m) Evaluation metrics for MDS and UMAP, control vs AD, shown as a function of varying percentages of normal, hyperactive (α) and hypoactive (β) cells. (e-g) Decoding performance [r]. (h-j) Reconstruction similarity [r]. (k-m) Intrinsic dimensionality.
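To give a concrete sense of the kind of simulation described here, the toy sketch below generates a place-cell-like population with the stated proportions of hyperactive and hypoactive cells. It is purely illustrative: the Gaussian tuning curves and the particular forms of hyper- and hypoactivity are arbitrary modelling choices, not those used to produce Figure 6.

```python
import numpy as np

def simulate_population(n_cells=200, n_bins=100,
                        frac_hyper=0.21, frac_hypo=0.29, seed=0):
    """Toy circular-track place-cell rate maps with AD-like aberrant
    excitability: hyperactive cells gain noisy background activity,
    hypoactive cells are strongly attenuated (cf. Busche et al, 2008)."""
    rng = np.random.default_rng(seed)
    centres = rng.uniform(0, n_bins, n_cells)
    pos = np.arange(n_bins)
    d = np.abs(pos[None, :] - centres[:, None])   # distance to place field
    d = np.minimum(d, n_bins - d)                 # wrap around circular track
    rates = np.exp(-(d ** 2) / (2 * 5.0 ** 2))    # Gaussian tuning, width 5 bins
    n_hyper = int(frac_hyper * n_cells)
    n_hypo = int(frac_hypo * n_cells)
    rates[:n_hyper] += rng.exponential(0.5, size=(n_hyper, n_bins))  # hyperactive
    rates[n_hyper:n_hyper + n_hypo] *= 0.05                          # hypoactive
    return rates

rates = simulate_population()
```

Sweeping `frac_hyper` and `frac_hypo` and re-running the manifold metrics reproduces the style of parameter sweep shown in panels (f)-(m).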
always yield an embedding that is representa- phase space represents possibles states of the sys-
tive of the true high dimensional geometry. These tem. It is a powerful tool in characterising the
issues are particularly relevant for inherently very behaviour of a dynamical system, especially when
high dimensional data, such as genomics data observing how states change over time. A two-
(Chari et al, 2021). However, previous studies dimensional plot, called the recurrence plot, mea-
on neural population dynamics have shown them sures and visualises the recurrences of a trajectory
to be intrinsically low-dimensional, as shown in of a system in phase space. Note that although
(Churchland et al, 2012; Cueva et al, 2020; Chaud- the recurrent plot is inherently two-dimensional,
huri et al, 2019). recurrence analysis can describe dynamics in an
One message that can be taken away from our arbitrary dimensional space - a key feature of
comparison of neural manifold approaches is that this technique. As the respective phase spaces of
linear methods do provide substantial insight over two systems change, recurrence plots allow quan-
a wide range of problem domains (often consistent tification of their interaction and tell us when
with what is provided by nonlinear methods), and similar states of the underlying system occur. The
even in areas where they fail to correctly capture time evolution of trajectories can be observed
the low-dimensional topological structure of the in the patterns depicted in the recurrence plots,
manifold, they can still provide insight verifiable and these patterns are associated with a specific
through decoding of behaviour. behaviour of the system, such as cyclicity and sim-
ilarity of the evolution of states at different epochs.
4.2 Parameterization of neural Different measures can be used to quantifying such
manifolds recurrences based on the structure of the recur-
rence plot, including recurrence rate, determinism
A key issue for NML approaches is that after the (predictibility), divergence, laminarity, and Shan-
neural manifold has been determined, it normally non entropy (Webber and Marwan, 2015; Wallot,
needs to be analysed further to answer a given 2019). These measures might be especially useful
scientific question. This process, known as param- when observing neuronal responses to stimuli over
eterization, would ideally involve extraction of the equations of the manifold itself; in practice, however, it may only be feasible to compute a more limited quantification of manifold properties. Depending on the type of neural data at hand and the scientific hypothesis to be tested, it is important to select the most suitable and informative NML algorithm and parameterization measures. One challenge to keep in mind is that the resulting manifolds, although often visualised in two or three dimensions for convenience, frequently involve many more dimensions (even a low-dimensional neural manifold recovered from a recording of many hundreds of neurons might involve tens of dimensions), and many approaches that work well in two or three dimensions do not necessarily scale straightforwardly to higher-dimensional representations.

Where transitions in the dynamics of neural states are observed over time, one useful approach is recurrence analysis (Marwan et al, 2007; Wallot, 2019). Recurrence analysis is a nonlinear data analysis technique based on the study of phase space trajectories, where a point or element in state space is considered recurrent when the trajectory later revisits its neighbourhood. This makes it possible to test whether neural states recur over repeated trials under different conditions (such as healthy and diseased states).

Another way to compare neural manifolds under different conditions is to directly quantify the similarity of trajectories in the neural manifold space (Cleasby et al, 2019; Ding et al, 2008; Alt, 2009). The most commonly used measures of trajectory similarity are Fréchet distance (FD) (Fréchet, 1906; Besse et al, 2015), nearest neighbour distance (NND) (Clark and Evans, 1954; Freeman et al, 2011), longest common subsequence (LCSS) (Vlachos et al, 2002), edit distance for real sequences (EDR) (Chen and Ng, 2004; Chen et al, 2005) and dynamic time warping (DTW) (Toohey, 2015; Long and Nelson, 2013). Computing such measures may allow us to characterise the differences (or similarities) among neural manifolds obtained from different models. This type of analysis may be especially useful when comparing models in health and disease states across age groups, or when comparing neural manifold representations of the same model across different environments and conditions.
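To make the trajectory-similarity measures listed above concrete, the sketch below implements two of them, dynamic time warping and the discrete Fréchet distance, as straightforward O(nm) dynamic programmes over Euclidean point distances. This is a minimal illustration in NumPy under our own naming assumptions, not the optimised routines one would use in practice (e.g. the SimilarityMeasures R package cited above):

```python
import numpy as np

def dtw(P, Q):
    """Dynamic time warping distance between trajectories P and Q
    (arrays of shape time x dims), using Euclidean point distances."""
    n, m = len(P), len(Q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(P[i - 1] - Q[j - 1])
            # extend the cheapest of the three admissible alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def frechet(P, Q):
    """Discrete Fréchet distance: the smallest 'leash length' that lets
    two walkers traverse P and Q monotonically without backtracking."""
    n, m = len(P), len(Q)
    C = np.zeros((n, m))
    C[0, 0] = np.linalg.norm(P[0] - Q[0])
    for i in range(1, n):
        C[i, 0] = max(C[i - 1, 0], np.linalg.norm(P[i] - Q[0]))
    for j in range(1, m):
        C[0, j] = max(C[0, j - 1], np.linalg.norm(P[0] - Q[j]))
    for i in range(1, n):
        for j in range(1, m):
            C[i, j] = max(min(C[i - 1, j], C[i - 1, j - 1], C[i, j - 1]),
                          np.linalg.norm(P[i] - Q[j]))
    return C[n - 1, m - 1]

# Two parallel toy trajectories offset by 0.5 in the second dimension.
P = np.stack([np.linspace(0.0, 1.0, 10), np.zeros(10)], axis=1)
Q = P + np.array([0.0, 0.5])
d_frechet = frechet(P, Q)   # worst-case pointwise gap: 0.5
d_dtw = dtw(P, Q)           # cost accumulated along the alignment: 5.0
```

For neural manifold comparisons, P and Q would be the low-dimensional trajectories from two trials or conditions; note that DTW sums cost over the whole alignment while the Fréchet distance reports only the worst-case gap, which is why the two numbers above differ for the same pair of trajectories.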
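The recurrence analysis discussed earlier in this section starts from an equally simple object: a binary recurrence matrix that marks every pair of time points whose population states lie within some radius of one another. The sketch below is a minimal NumPy illustration (the toy trajectory, the fixed radius and all names are our own assumptions; real recurrence quantification analyses choose the threshold more carefully, e.g. to fix a target recurrence rate):

```python
import numpy as np

def recurrence_matrix(X, radius):
    """Binary recurrence matrix for a trajectory X (time x dims):
    R[i, j] = 1 if states at times i and j are within `radius`."""
    # pairwise Euclidean distances via broadcasting, shape (T, T)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return (D <= radius).astype(int)

# Toy state-space trajectory that returns to its starting neighbourhood.
t = np.linspace(0.0, 2.0 * np.pi, 100)
X = np.stack([np.cos(t), np.sin(t)], axis=1)   # a loop in 2-D state space
R = recurrence_matrix(X, radius=0.1)

# Fraction of recurrent point pairs (the simplest RQA summary).
recurrence_rate = R.mean()
```

From R, the standard recurrence quantification measures (recurrence rate, determinism, laminarity; Marwan et al, 2007) can then be computed, and recurrence matrices from different conditions, ages or disease states compared directly.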
Another useful measure is manifold trajectory tangling (Russo et al, 2018). Tangling is a simple way of determining whether a particular trajectory could have been generated by a smooth dynamical flow field; under smooth dynamics, neural trajectories should not be tangled. Two neural states are thus tangled if they are nearby but associated with different trajectory directions. High tangling implies either that the system must rely on external commands rather than internal dynamics, or that the system is flirting with instability (Russo et al, 2018). Tangling can therefore be compared across all times and/or conditions. It is especially useful when characterising neural dynamics over the course of learning or development. For example, it may be interesting to examine whether neural circuits adopt network trajectories that become progressively less tangled as a new skill is learned and performance increases. Conversely, it may also be interesting to investigate whether pathological conditions are associated with increased tangling during learning.

Lastly, in cases where we characterise task-specific responses, e.g. during repetitive, trial-structured motor movements, it is important to determine whether these neural responses are consistent with the hypothesised computation. Hence, the consistency of the geometry of the population response and/or the time-evolving trajectory of activity in neural state space (i.e., manifold space) must be examined. For such task-specific trials, manifold trajectories are considered consistent if the trajectory segments trace the same path and do not diverge. One measure that can be used for such analysis is manifold trajectory divergence (Russo et al, 2020). In contrast to manifold trajectory tangling, which assesses whether trajectories are consistent with a locally smooth flow field, manifold trajectory divergence assesses whether similar paths eventually separate, smoothly or otherwise. A trajectory can have low tangling but high divergence, or vice versa. Multiple features can contribute to divergence, including ramping activity, cycle-specific responses, and different subspaces on different cycles. Thus, manifold trajectory divergence provides a useful summary of a computationally relevant property, regardless of the specifics of how it was achieved (Russo et al, 2020).

4.3 Is neural manifold learning useful for understanding neurological disorders?

Neural manifold learning offers the prospect of aiding our understanding of circuit neuropathologies. For instance, in mouse models of Alzheimer's disease, amyloid or tau pathologies result in changes in cortical circuitry, which are particularly evident in the hippocampus and connected structures. However, the effects of these pathologies on the dynamics of neural circuits involved in spatial and working memory are still not well understood at the network level. By analysing the changes in the geometry of population responses and the time-evolving trajectory of activity of the associated hippocampal-cortical circuits, we can compare the neural manifolds recovered from groups of mice of different ages and/or disease states. In particular, we can use recurrence analysis to determine whether neural states recur upon presentation of the same stimuli over repeated trials at different times across different ages of the models. Neural manifold similarity, tangling and divergence measures may also be computed to evaluate the differences (or similarities) of movement trajectories in the neural manifold space. Together, these measures can be used to compare manifold dynamics across different age groups in both healthy and disease states.

In this review we have demonstrated that neural manifold learning methods provide a powerful toolbox for understanding how populations of neurons coordinate in representing behaviourally relevant information. While the cellular and molecular pathologies underlying a variety of neurological disorders such as Alzheimer's Disease, Parkinson's Disease and Frontotemporal Dementia are at least beginning to be well understood, how they translate into network dysfunction, and thus into cognitive and behavioural deficits, is not. Neural manifold learning techniques, in combination with new experimental technologies allowing us to record the activity of many thousands of neurons simultaneously during behavioural tasks, may allow us to assay the dynamics and Gestalt representations underlying cognition and behaviour at the level of entire neural circuits.
This could in turn lead to improved understanding of disease processes and more sensitive tests of therapeutic effect.

Acknowledgments. We thank Krishna V Shenoy and Byron Yu for their permission to use the macaque motor cortex dataset and Alon Rubin for permission to use the mouse ACC dataset. We thank Elena Faillace for useful comments on a previous version of this manuscript.

Funding. This work was funded by studentships from the EPSRC CDT in Neurotechnology for Life and Health (EPSRC EP/L016737/1) to S.P. and G.P.G., by Wellcome Trust Award 22152/Z/20/Z (S.R.S. and M.A.G.), by Wellcome Trust Award 207481/Z/17/Z (salary support to RM-H from R.G.M. Morris) and by Mrs Anne Uren and The Michael Uren Foundation.

Rights retention. This research was funded in whole, or in part, by the Wellcome Trust [22152/Z/20/Z, 207481/Z/17/Z]. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.

Conflict of interest/Competing interests. The authors declare that there are no conflicts of interest.

Code availability. Code to generate some of the figures in this paper is available at: https://github.com/schultzlab/Neural_Manifolds

Authors' contributions. Rufus Mitchell-Heggs co-led and managed the project, wrote and reviewed the main manuscript text, designed code for figures 3-5 and co-prepared figures 1, 3, 4, 5 and 6. Seigfred Prado wrote and reviewed the main manuscript text, designed code for figure 6 and co-prepared figures 1 and 5. Giuseppe P. Gava wrote and reviewed the main manuscript text, designed the majority of manifold learning code for figures 3-6 and created figure 2. Mary Ann Go contributed experimental data used for figure 3 and reviewed the main manuscript. Simon R. Schultz co-led and managed the project, and wrote and reviewed the main manuscript.

References

Ahrens MB, Li JM, Orger MB, et al (2012) Brain-wide neuronal dynamics during motor adaptation in zebrafish. Nature 485(7399):471–477

Alt H (2009) The computational geometry of comparing shapes. In: Efficient Algorithms. Springer, p 235–248

Aoi MC, Pillow JW (2018) Model-based targeted dimensionality reduction for neuronal population data. Advances in Neural Information Processing Systems 31:6690–6699

Belkin M (2003) Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation 15(6):1373–1396

Besse P, Guillouet B, Loubes JM, et al (2015) Review and perspective for distance based trajectory clustering. arXiv preprint arXiv:1508.04904

Blair DC (1979) Information Retrieval, 2nd Edition. Journal of the American Society for Information Science

Bouchard KE, Mesgarani N, Johnson K, et al (2013) Functional organization of human sensorimotor cortex for speech articulation. Nature 495(7441):327–332

Briggman KL, Abarbanel HD, Kristan WB (2005) Optical imaging of neuronal populations during decision-making. Science 307(5711):896–901

Busche MA, Eichhoff G, Adelsberger H, et al (2008) Clusters of hyperactive neurons near amyloid plaques in a mouse model of Alzheimer's disease. Science 321(5896):1686–1689

Chandrasekaran B, Gandour JT, Krishnan A (2007) Neuroplasticity in the processing of pitch dimensions: A multidimensional scaling analysis of the mismatch negativity. Restorative Neurology and Neuroscience 25(3-4):195–210

Chari T, Banerjee J, Pachter L (2021) The specious art of single-cell genomics. bioRxiv 2021.08.25.457696
Chaudhuri R, Gerçek B, Pandey B, et al (2019) The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nature Neuroscience 22:1512–1520

Chen L, Ng R (2004) On the marriage of lp-norms and edit distance. In: Proceedings of the Thirtieth International Conference on Very Large Data Bases, Volume 30, pp 792–803

Chen L, Özsu MT, Oria V (2005) Robust and fast similarity search for moving object trajectories. In: Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data, pp 491–502

Chestek CA, Batista AP, Santhanam G, et al (2007) Single-neuron stability during repeated reaching in macaque premotor cortex. Journal of Neuroscience 27(40):10742–10750

Churchland M, Cunningham J, Kaufman MT, et al (2010) Cortical preparatory activity: representation of movement or first cog in a dynamical machine? Neuron 68(3):387–400

Churchland M, Cunningham J, Kaufman M, et al (2012) Neural population dynamics during reaching. Nature 487:51–56

Churchland MM, Shenoy KV (2007) Temporal complexity and heterogeneity of single-neuron activity in premotor and motor cortex. Journal of Neurophysiology 97(6):4235–4257

Clark PJ, Evans FC (1954) Distance to nearest neighbor as a measure of spatial relationships in populations. Ecology 35(4):445–453

Cleasby IR, Wakefield ED, Morrissey BJ, et al (2019) Using time-series similarity measures to compare animal movement trajectories in ecology. Behavioral Ecology and Sociobiology 73(11):1–19

Cohen MR, Kohn A (2011) Measuring and interpreting neuronal correlations. Nature Neuroscience 14(7):811

Cohen MR, Maunsell JH (2010) A neuronal population measure of attention predicts behavioral performance on individual trials. Journal of Neuroscience 30(45):15241–15253

Cueva CJ, Saez A, Marcos E, et al (2020) Interpreting low-dimensional dynamics for working memory and time encoding. Proceedings of the National Academy of Sciences 117:23021–23032

Cunningham JP, Yu BM (2014) Dimensionality reduction for large-scale neural recordings. Nature Neuroscience 17(11):1500–1509

Dellacherie D, Bigand E, Molin P, et al (2011) Multidimensional scaling of emotional responses to music in patients with temporal lobe resection. Cortex 47(9):1107–1115

Dijkstra EW (1959) A note on two problems in connexion with graphs. Numerische Mathematik 1(1):269–271

Dimitriadis G, Neto JP, Kampff AR (2018) t-SNE visualization of large-scale neural recordings. Neural Computation 30(7):1750–1774

Ding H, Trajcevski G, Scheuermann P, et al (2008) Querying and mining of time series data: experimental comparison of representations and distance measures. Proceedings of the VLDB Endowment 1(2):1542–1552

Driscoll LN, Pettit NL, Minderer M, et al (2017) Dynamic reorganization of neuronal activity patterns in parietal cortex. Cell 170(5):986–999

Elsayed GF, Lara AH, Kaufman MT, et al (2016) Reorganization between preparatory and movement population responses in motor cortex. Nature Communications 7(1):1–15

Feulner B, Clopath C (2021) Neural manifold under plasticity in a goal driven learning behaviour. PLoS Computational Biology 17(2):e1008621

France SL, Carroll JD (2010) Two-way multidimensional scaling: A review. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 41(5):644–661

Fréchet MM (1906) Sur quelques points du calcul fonctionnel. Rendiconti del Circolo Matematico
di Palermo (1884-1940) 22(1):1–72

Freeman R, Mann R, Guilford T, et al (2011) Group decisions and individual differences: route fidelity predicts flight leadership in homing pigeons (Columba livia). Biology Letters 7(1):63–66

Frost NA, Haggart A, Sohal VS (2021) Dynamic patterns of correlated activity in the prefrontal cortex encode information about social behavior. PLoS Biology 19(5):e3001235

Fusi S, Miller EK, Rigotti M (2016) Why neurons mix: high dimensionality for higher cognition. Current Opinion in Neurobiology 37:66–74

Gallego J, Perich M, Chowdhury R, et al (2020) Long-term stability of cortical population dynamics underlying consistent behavior. Nature Neuroscience 23:1–11

Gallego JA, Perich MG, Miller LE, et al (2017) Neural manifolds for the control of movement. Neuron 94(5):978–984

Gao P, Ganguli S (2015) On simplicity and complexity in the brave new world of large-scale neuroscience. Current Opinion in Neurobiology 32:148–155

Go MA, Rogers J, Gava GP, et al (2021) Place cells in head-fixed mice navigating a floating real-world environment. Frontiers in Cellular Neuroscience 15:19

Grassberger P, Procaccia I (1983) Characterization of strange attractors. Physical Review Letters 50(5):346

Harvey CD, Coen P, Tank DW (2012) Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature 484(7392):62–68

Hosmer DW, Jovanovic B, Lemeshow S (1989) Best subsets logistic regression. Biometrics pp 1265–1270

Humphries MD (2020) Strong and weak principles of neural dimension reduction. arXiv preprint arXiv:2011.08088

Irimia A, Lei X, Torgerson CM, et al (2018) Support vector machines, multidimensional scaling and magnetic resonance imaging reveal structural brain abnormalities associated with the interaction between autism spectrum disorder and sex. Frontiers in Computational Neuroscience p 93

Ivosev G, Burton L, Bonner R (2008) Dimensionality reduction and visualization in principal component analysis. Analytical Chemistry 80(13):4933–4944

Jackson JE (2005) A user's guide to principal components. John Wiley & Sons

Jazayeri M, Ostojic S (2021) Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity. Current Opinion in Neurobiology 70:113–120

Johnson W, Lindenstrauss J (1984) Extensions of Lipschitz maps into a Hilbert space. Contemporary Mathematics 26:189–206

Jolliffe IT (2002) Principal component analysis for special types of data. Springer

Kaufman MT, Churchland MM, Ryu SI, et al (2014) Cortical activity in the null space: permitting preparation without movement. Nature Neuroscience 17(3):440–448

Kingsbury L, Huang S, Wang J, et al (2019) Correlated neural activity and encoding of behavior across brains of socially interacting animals. Cell 178(2):429–446

Kobak D, Brendel W, Constantinidis C, et al (2016) Demixed principal component analysis of neural population data. eLife 5:e10989

Krauss P, Metzner C, Schilling A, et al (2018) A statistical method for analyzing and comparing spatiotemporal cortical activation patterns. Scientific Reports 8(1):1–9

Kruskal J, Wish M (1978) Multidimensional Scaling. Sage Publications

Laurent G (2002) Olfactory network dynamics and the coding of multidimensional signals. Nature
Reviews Neuroscience 3(11):884–895

Lee EK, Balasubramanian H, Tsolias A, et al (2021) Non-linear dimensionality reduction on extracellular waveforms reveals cell type diversity in premotor cortex. eLife 10:e67490

Long JA, Nelson TA (2013) A review of quantitative methods for movement data. International Journal of Geographical Information Science 27(2):292–318

Low RJ, Lewallen S, Aronov D, et al (2018) Probing variability in a cognitive map using manifold inference from neural dynamics. bioRxiv 418939

Luczak A, Barthó P, Harris KD (2009) Spontaneous events outline the realm of possible sensory responses in neocortical populations. Neuron 62(3):413–425

Van der Maaten L, Hinton G (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9(11)

Machens CK, Romo R, Brody CD (2010) Functional, but not anatomical, separation of "what" and "when" in prefrontal cortex. Journal of Neuroscience 30(1):350–360

Mante V, Sussillo D, Shenoy KV, et al (2013) Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503(7474):78–84

Marwan N, Romano MC, Thiel M, et al (2007) Recurrence plots for the analysis of complex systems. Physics Reports 438(5-6):237–329

Mazor O, Laurent G (2005) Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons. Neuron 48(4):661–673

McInnes L, Healy J, Melville J (2018) UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv:1802.03426

Meshulam L, Gauthier JL, Brody CD, et al (2017) Collective behavior of place and non-place neurons in the hippocampal network. Neuron 96(5):1178–1191

Mimica B, Dunn BA, Tombaz T, et al (2018) Efficient cortical coding of 3D posture in freely behaving rats. Science 362(6414):584–589

Morris R (2006) Elements of a neurobiological theory of hippocampal function: the role of synaptic plasticity, synaptic tagging and schemas. European Journal of Neuroscience 23(11):2829–2846

Nieh EH, Schottdorf M, Freeman NW, et al (2021) Geometry of abstract learned knowledge in the hippocampus. Nature pp 1–5

Niell CM, Stryker MP (2010) Modulation of visual responses by behavioral state in mouse visual cortex. Neuron 65(4):472–479

Oakley H, Cole SL, Logan S, et al (2006) Intraneuronal β-amyloid aggregates, neurodegeneration, and neuron loss in transgenic mice with five familial Alzheimer's disease mutations: potential factors in amyloid plaque formation. Journal of Neuroscience 26(40):10129–10140

O'Keefe J, Conway DH (1978) Hippocampal place units in the freely moving rat: why they fire where they fire. Experimental Brain Research 31(4):573–590

Palop JJ, Mucke L (2016) Network abnormalities and interneuron dysfunction in Alzheimer disease. Nature Reviews Neuroscience 17(12):777–792

Pandarinath C, Gilja V, Blabe CH, et al (2015) Neural population dynamics in human motor cortex during movements in people with ALS. eLife 4:e07436

Pandarinath C, O'Shea DJ, Collins J, et al (2018) Inferring single-trial neural population dynamics using sequential auto-encoders. Nature Methods 15:805–815

Pang R, Lansdell BJ, Fairhall AL (2016) Dimensionality reduction in neuroscience. Current Biology 26(14):R656–R660

Panta SR, Wang R, Fries J, et al (2016) A tool for interactive data visualization: application to over 10,000 brain imaging and phantom MRI
data sets. Frontiers in Neuroinformatics 10:9

Peters AJ, Chen SX, Komiyama T (2014) Emergence of reproducible spatiotemporal activity during motor learning. Nature 510(7504):263–267

Phoka E, Wildie M, Schultz SR, et al (2012) Sensory experience modifies spontaneous state dynamics in a large-scale barrel cortical neural model. Journal of Computational Neuroscience 33(2):323–339

Pouget A, Dayan P, Zemel R (2000) Information processing with population codes. Nature Reviews Neuroscience 1:125–132

Rajan K, Harvey CD, Tank DW (2016) Recurrent network models of sequence generation and memory. Neuron 90(1):128–142

Raposo D, Kaufman MT, Churchland AK (2014) A category-free neural population supports evolving demands during decision-making. Nature Neuroscience 17(12):1784–1792

Rigotti M, Barak O, Warden MR, et al (2013) The importance of mixed selectivity in complex cognitive tasks. Nature 497(7451):585–590

Roweis ST, Saul LK (2000) Nonlinear dimensionality reduction by locally linear embedding. Science 290(5500):2323–2326

Rubin A, Sheintuch L, Brande-Eilat N, et al (2019) Revealing neural correlates of behavior without behavioral measurements. Nature Communications 10(1):1–14

Russo AA, Bittner SR, Perkins SM, et al (2018) Motor cortex embeds muscle-like commands in an untangled population response. Neuron 97(4):953–966

Russo AA, Khajeh R, Bittner SR, et al (2020) Neural trajectories in the supplementary motor area and motor cortex exhibit distinct geometries, compatible with different classes of computation. Neuron 107(4):745–758

Rust NC (2014) Population-based representations. In: Gazzaniga MS, Mangun GR (eds) The Cognitive Neurosciences. MIT Press, chap 19, p 337

Sanger TD, Kalaska JF (2014) Crouching tiger, hidden dimensions. Nature Neuroscience 17(3):338–340

Santhanam G, Yu B, Gilja V, et al (2009) Factor-analysis methods for higher-performance neural prostheses. Journal of Neurophysiology 102(2):1315–1330

Scoville WB, Milner B (1957) Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery, and Psychiatry 20(1):11

Shadlen MN, Newsome WT (1998) The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. Journal of Neuroscience 18(10):3870–3896

Stokes MG, Kusunoki M, Sigala N, et al (2013) Dynamic coding for cognitive control in prefrontal cortex. Neuron 78(2):364–375

Stringer C, Pachitariu M, Steinmetz N, et al (2019) High-dimensional geometry of population responses in visual cortex. Nature 571(7765):361–365

Stringer C, Michaelos M, Tsyboulski D, et al (2021) High-precision coding in visual cortex. Cell 184(10):2767–2778

Sun G, Zhang S, Zhang Y, et al (2019) Effective dimensionality reduction for visualizing neural dynamics by Laplacian eigenmaps. Neural Computation 31(7):1356–1379

Sussillo D, Churchland MM, Kaufman MT, et al (2015) A neural network that finds a naturalistic solution for the production of muscle activity. Nature Neuroscience 18(7):1025–1033

Tenenbaum JB, De Silva V, Langford JC (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290(5500):2319–2323
Tolhurst DJ, Movshon JA, Dean AF (1983) The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Research 23(8):775–785

Tombaz T, Dunn BA, Hovde K, et al (2020) Action representation in the mouse parieto-frontal network. Scientific Reports 10(1):1–14

Toohey K (2015) SimilarityMeasures: trajectory similarity measures. R package version 1

Tzagarakis C, Jerde TA, Lewis SM, et al (2009) Cerebral cortical mechanisms of copying geometrical shapes: a multidimensional scaling analysis of fMRI patterns of activation. Experimental Brain Research 194(3):369–380

Van Der Maaten L (2014) Accelerating t-SNE using tree-based algorithms. The Journal of Machine Learning Research 15:3221–3245

Venna J, Kaski S (2006) Local multidimensional scaling. Neural Networks 19(6-7):889–899

Vlachos M, Kollios G, Gunopulos D (2002) Discovering similar multidimensional trajectories. In: Proceedings 18th International Conference on Data Engineering, IEEE, pp 673–684

Vyas S, Golub MD, Sussillo D, et al (2020) Computation through neural population dynamics. Annual Review of Neuroscience 43:249–275

Wallot S (2019) Multidimensional cross-recurrence quantification analysis (MdCRQA) – a method for quantifying correlation between multivariate time-series. Multivariate Behavioral Research 54(2):173–191

Warland DK, Reinagel P, Meister M (1997) Decoding visual information from a population of retinal ganglion cells. Journal of Neurophysiology 78(5):2336–2350

Webber C, Marwan N (2015) Recurrence Quantification Analysis: Theory and Best Practices. Springer

Williams AH, Kim TH, Wang F, et al (2018) Unsupervised discovery of demixed, low-dimensional neural dynamics across multiple timescales through tensor component analysis. Neuron 98(6):1099–1115

Wu A, Roy NA, Keeley S, et al (2017) Gaussian process based nonlinear latent structure discovery in multivariate spike train data. Advances in Neural Information Processing Systems 30:3496

Yin H (2008) On multidimensional scaling and the embedding of self-organising maps. Neural Networks 21(2-3):160–169

Youngentob SL, Johnson BA, Leon M, et al (2006) Predicting odorant quality perceptions from multidimensional scaling of olfactory bulb glomerular activity patterns. Behavioral Neuroscience 120(6):1337

Yu BM, Kemere C, Santhanam G, et al (2007) Mixture of trajectory models for neural decoding of goal-directed movements. Journal of Neurophysiology 97(5):3763–3780

Yu BM, Cunningham JP, Santhanam G, et al (2009) Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Journal of Neurophysiology 102(1):614–635

Zhang Z, Wang J (2007) MLLE: Modified locally linear embedding using multiple weights. Advances in Neural Information Processing Systems 19:1593–1600