Systems/Circuits

When and Why Noise Correlations Are Important in Neural Decoding
Information may be encoded both in the individual activity of neurons and in the correlations between their activities. Understanding whether knowledge of noise correlations is required to decode all the encoded information is fundamental for constructing computational models, brain-machine interfaces, and neuroprosthetics. If correlations can be ignored with tolerable losses of information, the readout of neural signals is simplified dramatically. To that end, previous studies have constructed decoders assuming that neurons fire independently and then derived bounds for the information that is lost. However, here we show that previous bounds were not tight and overestimated the importance of noise correlations. In this study, we quantify the exact loss of information induced by ignoring noise correlations and show why previous estimations were not tight. Further, by studying the elementary parts of the decoding process, we determine when and why information is lost on a single-response basis. We introduce the minimum decoding error to assess the distinctive role of noise correlations under natural conditions. We conclude that all of the encoded information can be decoded without knowledge of noise correlations in many more situations than previously thought.
process. Altogether, we provide a complete framework for assessing when and why information is lost by NI decoders.

sponses) may severely restrict the length of the sequences, thus reducing the decoded information below the bound. The information lost by a given decoding algorithm is defined as

  ΔI^Dec = I(S;R) − I(S;S^Dec) ≥ 0.   (2)

When ΔI^Dec is greater than zero, some information (ΔI^Dec) about the stimulus S, encoded in the population response R, is lost during the decoding process. In other words, the decoder has ignored some information (ΔI^Dec) that may have improved the stimulus estimation. If ΔI^Dec is zero, the decoding process is optimal; that is, it decodes all of the encoded information. Further discussion on the meaning of ΔI^Dec can be found in Eyherabide and Samengo (2010) and references therein.

The family of NI decoders. NI decoders are here defined as probabilistic decoders (i.e., decoders that infer the stimulus from the conditional probability distribution of the response) constructed under the NI assumption (the assumption that neurons are noise independent). Mathematically, the NI assumption states that the probability P(R|S_k) of the population response R = [R_1, …, R_N] (N is the number of neurons in the population) elicited by the stimulus S_k can be inferred from the probability P(R_n|S_k) of each neuron in the population as follows:

  P(R|S_k) ≈ P^NI(R|S_k) = ∏_{n=1}^{N} P(R_n|S_k).   (3)

Here, P^NI(R|S_k) is called the NI likelihood. By multiplying the probabilities of individual neurons P(R_n|S_k), NI decoders neglect all noise correlations among neurons. Once the NI assumption is made, several NI decoders can be constructed as follows:

  S^NI = f^NIL(P^NI(R|S_1), …, P^NI(R|S_K)),   (4)

using different algorithms f^NIL for extracting the decoded stimulus S^NI from the NI likelihoods (K is the number of stimuli). This construction is here called the canonical NI decoder.

The best-known construction of NI decoders is here called the classical NI decoder, which is based on the NI posterior probabilities:

  P^NI(S_k|R) = P^NI(R|S_k) P(S_k) / Σ_{k'} P^NI(R|S_{k'}) P(S_{k'}).   (5)

In criterion 8a (Quian Quiroga and Panzeri, 2009; Ince et al., 2010), the information loss ΔI^NI measures the difference between the encoded information and the information decoded by a specific implementation of the NI decoder chosen by the researcher. In criterion 8b (Ince et al., 2010), S_k^NI (1 ≤ k ≤ K) depends on the population response R and represents the k-th most likely stimulus if neurons were noise independent. Therefore, ΔI_LS^NI measures the difference between the encoded information and the information extracted by a decoder that ranks the set of stimuli with respect to their NI posterior probability. In criterion 8c (Nirenberg and Latham, 2003; Latham and Nirenberg, 2005), D represents the conditional Kullback-Leibler divergence and therefore, ΔI_D^NI measures the departure of P^NI(S|R) from P(S|R). In criterion 8d (Latham and Nirenberg, 2005; Oizumi et al., 2010):

  ΔI_DL^NI = D( P(S|R) ‖ P_β^NI(S|R) ),   (9a)

  P_β^NI(S|R) ∝ P(S) ∏_{n=1}^{N} P(R_n|S)^β,   (9b)

where β is a real number the value of which is chosen to minimize ΔI_DL^NI(β). The quantity ΔI_DL^NI measures the information loss induced by a classical NI decoder when operating on long sequences of responses. Both ΔI_D^NI (criterion 8c) and ΔI_DL^NI (criterion 8d) are intended to provide information-theoretical estimators that do not require building specific NI decoders.

Decoding error. The decoding error is here defined as the average cost of mistakenly estimating the stimulus S that elicited a population response R, namely:

  ε^Dec = E_{P(S,S^Dec)}[ δ(S, S^Dec) ],   (10)

where δ(S,S^Dec) is the non-negative cost of inferring S^Dec when the encoded stimulus was S (Duda et al., 2000; Bishop, 2006).

The minimum decoding error for all possible decoders based on a specific representation R of the population response is given by the following:

  ε_Min(R;S) = Σ_R P(R) min_{S̃} Σ_S P(S|R) δ(S, S̃),   (11)
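The construction in Eqs. 3-5 can be sketched in a few lines. The probability tables below are illustrative assumptions, not values from the paper; for two binary neurons, the script builds the NI likelihood of Eq. 3 and the NI posterior of Eq. 5, and then applies the classical NI decoding rule (maximum NI posterior):

```python
# Sketch of Eqs. 3-5 for two binary neurons. All probability tables are
# illustrative assumptions, not values taken from the paper.
from itertools import product

STIMULI = ["S1", "S2"]
P_prior = {"S1": 0.5, "S2": 0.5}
# Hypothetical single-neuron conditionals P(Rn | Sk), responses Rn in {0, 1}:
P_neuron = {
    "S1": [{0: 0.8, 1: 0.2},   # neuron 1
           {0: 0.3, 1: 0.7}],  # neuron 2
    "S2": [{0: 0.4, 1: 0.6},
           {0: 0.5, 1: 0.5}],
}

def ni_likelihood(r, s):
    """Eq. 3: P_NI(R|S) is the product over neurons of P(Rn|S)."""
    p = 1.0
    for n, rn in enumerate(r):
        p *= P_neuron[s][n][rn]
    return p

def ni_posterior(r):
    """Eq. 5: Bayes rule applied to the NI likelihoods."""
    joint = {s: ni_likelihood(r, s) * P_prior[s] for s in STIMULI}
    z = sum(joint.values())
    return {s: joint[s] / z for s in STIMULI}

for r in product([0, 1], repeat=2):
    post = ni_posterior(r)
    s_ni = max(post, key=post.get)     # classical NI decoder: max NI posterior
    print(r, {s: round(p, 3) for s, p in post.items()}, "->", s_ni)
```

The factorization in `ni_likelihood` is the only place where the NI assumption enters; everything downstream is ordinary Bayesian decoding.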
Eyherabide and Samengo • Noise Correlations in Neural Decoding • J. Neurosci., November 6, 2013 • 33(45):17921–17936 • 17923
[Figure 1, panels A–L]
where the minimization runs over all decoded stimuli (Bishop, 2006; Hastie et al., 2009). This minimum is achievable by a decoder defined as follows:

  S^Dec = argmin_{S̃} E_{P(S|R)}[ δ(S, S̃) ].   (12)

The specific decoders that achieve the minimum decoding error depend on the shape of δ (Simoncelli, 2009). For example, the decoding-error probability (also known as fraction incorrect or error rate) can be obtained by setting δ(S,S^Dec) equal to zero if S and S^Dec coincide or to unity otherwise. The decoder that achieves the minimum decoding-error probability is given by the following:

  S^Dec = argmax_S P(S|R).   (13)

Decoders based on surrogate responses. Previous studies have also proposed to study the importance of noise correlations using decoders that were optimized for decoding the surrogate population activity that would be elicited if neurons were truly noise independent (Nirenberg et al., 2001; Berens et al., 2012). For each stimulus, this set of surrogate NI population responses {R^NI} is computed as the Cartesian product of the sets {R_n}, the elements of which are the individual responses of the nth neuron:

  {R^NI} = {R_1} × … × {R_N}.   (14)

Results
Shortcomings of previous measures of information loss
The importance of noise correlations in neural decoding has been assessed by comparing the encoded information with the information extracted by NI decoders, which can be constructed in many different ways. The minimum difference between these two quantities is the minimum information loss ΔI_Min^NI induced by NI decoders. If ΔI_Min^NI is greater than zero, then noise correlations are important: by neglecting them, information is lost. The minimum information loss ΔI_Min^NI has been estimated in several ways. In this section, we compare the four most widely used estimators and show that they all tend to overestimate the importance of noise correlations in neural decoding. The results are illustrated with three examples in Figure 1. Previous studies have concluded that, in these examples, noise correlations are important for decoding. We demonstrate, however, that these conclusions are not valid in general: they may or may not hold depending on the stimulus and response probabilities and on the amount of correlation in the responses. In the first two examples (Fig. 1A,B), responses to stimulus S1 are negatively correlated and responses to stimulus S2 are positively correlated. In the third example (Fig. 1C), continuous responses are analyzed and the amount of correlation or anticorrelation of responses to stimulus S2 is not fixed, but rather depends on the value of a given parameter.
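Before turning to the examples, the decoding-error machinery of Eqs. 10-13 can be made concrete. The joint distribution below is a hypothetical illustration, not data from the paper; with the zero-one cost, the minimum decoding error of Eq. 11 reduces to the average probability that the maximum-posterior decoder of Eq. 13 misclassifies:

```python
# Sketch of Eqs. 11-13 under the zero-one cost. The joint distribution
# P(S, R) below is an illustrative assumption.
P_joint = {
    ("S1", "Ra"): 0.30, ("S1", "Rb"): 0.15, ("S1", "Rc"): 0.05,
    ("S2", "Ra"): 0.05, ("S2", "Rb"): 0.15, ("S2", "Rc"): 0.30,
}
stimuli = ["S1", "S2"]
responses = ["Ra", "Rb", "Rc"]

def p_response(r):
    return sum(P_joint[(s, r)] for s in stimuli)

def posterior(s, r):
    return P_joint[(s, r)] / p_response(r)

def map_decoder(r):
    """Eq. 13: decoded stimulus = argmax over S of P(S|R)."""
    return max(stimuli, key=lambda s: posterior(s, r))

# Eq. 11 with zero-one cost: average probability of misclassification.
eps_min = sum(p_response(r) * (1.0 - posterior(map_decoder(r), r))
              for r in responses)
print(f"minimum decoding-error probability = {eps_min:.3f}")
```

Note that response "Rb" carries no information in this toy distribution (its posterior is uniform), and it alone contributes most of the irreducible error.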
classical NI decoder always estimates the correct stimulus except for response [R1, R2] = [M, M]. Whenever

  P^NI(S_2|[M, M]) < P^NI(S_1|[M, M]),   (15)

the classical NI decoder infers that the response [M, M] was elicited by stimulus S1, whereas an optimal decoder constructed with knowledge of noise correlations always decodes the stimulus S2. Hence, ΔI^NI is greater than zero. However, if Equation 15 does not hold, the stimuli decoded by these two decoders always coincide and thus ΔI^NI is zero. When varying the stimulus and response probabilities in a continuous manner as in Figure 1, G and J, the transition between these two different situations is reflected as a discontinuity in the representation of ΔI^NI, resulting in a broken line. Whenever the classical NI decoder is optimal (i.e., ΔI^NI is zero), noise correlations are irrelevant for decoding. The converse, however, is not necessarily true. The classical NI decoder is only one among many ways of constructing a NI decoder. Other NI decoders may be more efficient or even optimal. In the latter case, noise correlations are irrelevant regardless of the value of ΔI^NI.

For example, a NI decoder can be constructed just like the classical NI decoder but with the stimulus prior probabilities P̃(S) differing from those P(S) set in the experiment (Oram et al., 1998). It may be puzzling to see that such a NI decoder constructed with unrealistic prior probabilities may operate more efficiently than the classical NI decoder constructed with the real priors. Indeed, altering an optimal decoder constructed with the real probabilities P(R|S) cannot increase the information and might actually reduce it. However, there is no reason to believe that, when altering a suboptimal decoder constructed with unrealistic (NI) probabilities, the information may not increase. Of course, only carefully chosen alterations can do the job. In Figure 1G, a classical NI decoder with P̃(S1) fixed at a value between 0 and 0.5 is capable of decoding the stimulus without error (i.e., Eq. 15 is never fulfilled) regardless of the true stimulus probabilities.

In an attempt to avoid the arbitrariness of the choice of an NI decoder (Averbeck et al., 2006; Quian Quiroga and Panzeri, 2009), Nirenberg et al. (2001) proposed measuring ΔI_Min^NI as the divergence ΔI_D^NI between the probability distributions of the stimulus given the response computed with and without the NI assumption (criterion 8c; Fig. 1G–L, red line). This method aims at estimating the information loss without decoding explicitly the population response, but unfortunately, it may severely overestimate ΔI_Min^NI. For the example shown in Figure 1A, ΔI_D^NI becomes:

  ΔI_D^NI = −P([M, M]; S_2) log2 P^NI(S_2|[M, M]),   (16)

and thus ΔI_D^NI is always greater than zero, even though we showed that the classical NI decoder is optimal for a wide range of stimulus and response probabilities. The overestimation problem was first shown by Schneidman et al. (2003), who also showed that, strangely, ΔI_D^NI can even exceed the encoded information (Fig. 1H, top right). Indeed, if ΔI_D^NI is zero, then ΔI^NI is zero and noise correlations are unimportant, but the converse is not necessarily true. Last, Ince et al. (2010) showed that, in rat somatosensory […] the example shown in Figure 1A. The estimation of ΔI_DL^NI involves a minimization problem (Eq. 8d), leading to:

  ΔI_DL^NI = 0         if P^NI([M, M]|S_1) ≤ P^NI([M, M]|S_2),
  ΔI_DL^NI = ΔI_D^NI   if P^NI([M, M]|S_1) > P^NI([M, M]|S_2).   (17)

By comparing Equations 17 and 15, we find that whenever P^NI([M, M]|S_1) > P^NI([M, M]|S_2) and P(S_2) is sufficiently larger than P(S_1), the classical NI decoder is optimal despite ΔI_DL^NI > 0. The same occurs for the other examples shown in Figure 1, where ΔI_DL^NI may be larger or smaller than ΔI^NI depending on the stimulus and response probabilities. Therefore, ΔI_DL^NI does not constitute a limit to the inefficiency of NI decoders.

Recently, Ince et al. (2010) proposed another alternative, estimating ΔI_Min^NI as the information loss ΔI_LS^NI induced by a NI decoder that associates each response with a list of stimuli ordered according to how likely they would be if neurons were noise independent (criterion 8b; Fig. 1G–L, blue line). In Figure 1, ΔI_LS^NI coincides with ΔI^NI (the same occurs in any experiment involving two stimuli) and thus exhibits the same drawbacks. In general, ΔI_LS^NI represents a tighter bound than ΔI^NI (Ince et al., 2010). However, as discussed in the next sections, it still overestimates ΔI_Min^NI for reasons analogous to those of ΔI^NI.

The estimators mentioned above are based on probabilistic NI decoders, that is, decoders that infer the stimulus from the probability of the population responses computed with the NI assumption (Eq. 3). Previous studies have also proposed another alternative: to base the estimation of ΔI_Min^NI on decoders (generally linear) in which parameters are optimized for decoding surrogate NI population responses; that is, artificial responses that are generated under the NI assumption (Nirenberg et al., 2001; Averbeck and Lee, 2006; Quian Quiroga and Panzeri, 2009; Berens et al., 2012; see Materials and Methods and Fig. 1D–F). By comparing these two alternatives, however, we find that they may lead to opposite conclusions: noise correlations may turn out to be irrelevant in one of them and essential in the other (Fig. 2). Most importantly, using the second approach, one may conclude that noise correlations are essential even though neurons are actually noise independent (Fig. 2B). At first glance, one might think of this issue as an overestimation problem that, as in the case of ΔI^NI, could be avoided if one considered all possible decoding algorithms, not just the linear. Unfortunately, without constraining the type of decoding algorithms and optimization criteria, this estimator trivially underestimates ΔI_Min^NI and yields the conclusion that noise correlations are always irrelevant for neural decoding.

To prove this, notice first that the set of surrogate NI population responses {R^NI} can be constructed by adding to the set of population responses {R} all those responses that would occur only if neurons were noise independent. Therefore, any decoding algorithm that maps {R^NI} into the set of stimuli {S} can be written as follows:
  S^Dec = f_1^Dec(R^NI) if R^NI ∈ {R},
  S^Dec = f_2^Dec(R^NI) otherwise.   (18)

Among all possible mappings from {R^NI} to {S}, there always exists at least one for which f_1^Dec coincides with an optimal criterion to decode the population responses R.

Figure 2. Comparison of different strategies to construct decoders that ignore noise correlations. Each panel shows the simultaneous responses of two neurons R1 and R2 elicited by two stimuli S1 and S2. In both examples, stimuli and responses are equally likely. A, Linear decoders trained with surrogate NI population responses (dashed line) extract all the encoded information, whereas no probabilistic NI decoder can do so. Specifically, a probabilistic NI decoder is inefficient for a range of probabilities P(R1,R2|Sk) complying with the two following conditions: P(2,2|S1)² ≤ P(1,3|S1) P(3,1|S1) and P(3,3|S2)² ≤ P(2,4|S2) P(4,2|S2). B, Although neurons are noise independent, no linear decoder is capable of extracting all of the encoded information.

Previous studies have hypothesized that the NI assumption increases the number of real responses lying in the overlap between the surrogate NI responses associated with different stimuli (compare Fig. 1A–C with Fig. 1D–F), thereby introducing ambiguity in the NI decoding process and losing information. This conclusion, however, is not always true. The decoding rule may well evaluate the magnitude of each NI posterior probability (Eq. 6) and, with this information, always decode the same stimulus as a decoder constructed with knowledge of noise correlations (Bishop, 2006). For example, in Figure 1A, the population response [R1,R2] = [M, M] is only elicited by stimulus S2. However, if neurons were noise independent, [M, M] would be elicited by both S1 and S2 (Fig. 1D). Nevertheless, the classical NI decoder operates optimally for a wide range of stimulus and response probabilities (Fig. 1G,J). The presence of real responses in the overlap between the surrogate NI responses associated with each stimulus constitutes a necessary, but not a sufficient, condition for a NI decoder to be lossy.

In summary, we have shown that, without appropriate constraints, previous approaches using surrogate NI population responses may not be suitable for the analysis of the importance of noise correlations in neural decoding. Other approaches based on probabilistic NI decoders do not exhibit this problem because the construction of the NI decoder is purely based on the NI assumption. However, the estimators used in these approaches tend to overestimate the minimum inefficiency of probabilistic NI decoders in a context-dependent manner and none of them constitutes a universal bound. Unfortunately, the overestimation problem cannot be solved by simply taking the minimum estimation, though this strategy is better than relying on one estimator alone. In the next section, we show how to evaluate the exact value of the minimum information loss.

Exact measure of the minimum information loss
Previous estimators fail to tightly bound the minimum information loss of NI decoders. The reasons for the failure depend on the estimator, as shown in the previous section. In the case of ΔI^NI, the failure is due to the fact that the optimal decoder is not searched among all possible NI decoders but, at best, among subsets of limited size. However, the failure can be avoided by extending the search to all possible NI decoders. To that end, in this section, we first introduce the notion of canonical NI decoders: the set of all decoders that comply with the NI assumption (Eq. 3). Using a fundamental theorem in information theory, the coding theorem, we then determine exactly the amount of information lost by the best canonical NI decoder. This information loss is smaller than the bounds analyzed in the previous section (Fig. 1).

All probabilistic NI decoders, here called canonical NI decoders (Eq. 4), can be described as a 2-stage process (Fig. 3). Without loss of generality, consider the population response R = [R_1, …, R_N] (where N is the number of neurons) elicited by a stimulus S_k (1 ≤ k ≤ K, where K is the number of stimuli). In the first stage, the population response R is internally represented as a vector R^NIL of NI likelihoods (defined in Eq. 3), given by the following:

  R^NIL = [P^NI(R|S_1), …, P^NI(R|S_K)].   (19)

This step, and only this step, embodies the NI assumption. The second stage represents the transformation of R^NIL into the decoded stimulus S^NI and embodies the estimation criterion used to decode the stimulus.

As stated by the data-processing inequality (see Materials and Methods), each transformation in the sequence cannot increase the information about the stimulus and may potentially induce an information loss. The information lost in each stage of the decoding process can be determined using standard methods previously developed for the analysis of neural codes (Borst and Theunissen, 1999; Panzeri et al., 2007; Eyherabide and Samengo, 2010). In particular, the actual information loss ΔI^NI induced by canonical NI decoders can be separated as follows:

  ΔI^NI = [I(S;R) − I(S;R^NIL)] + [I(S;R^NIL) − I(S;S^NI)] = ΔI_NIL^NI + ΔI_Est^NI.   (20)

Here, ΔI_NIL^NI is the information loss induced by the NI assumption (first stage) and ΔI_Est^NI is the information loss induced by the estimation process (second stage).

The NI assumption (first stage) is common to all NI decoders, and therefore ΔI_NIL^NI constitutes a lower bound to the information loss induced by all NI decoders. Mathematically, Equation 20 decomposes the actual information loss ΔI^NI into two non-negative terms, thereby proving that ΔI_NIL^NI is a lower bound of ΔI^NI. Nevertheless, ΔI_NIL^NI could still underestimate the minimum information loss induced by all NI decoders. To prove that ΔI_NIL^NI is tight, we need to prove that a NI decoder exists for which ΔI^NI coincides with ΔI_NIL^NI, as we do next.

Different estimation criteria induce different information losses ΔI_Est^NI. However, the coding theorem (Shannon, 1948; Cover and Thomas, 1991) demonstrates the existence of a decoding procedure that, operating on long sequences of responses, can make ΔI_Est^NI negligible, thus extracting all the information I(S;R^NIL) that is preserved after the NI assumption. Because the NI assumption is common to all NI decoders, I(S;R^NIL) constitutes the maximum amount of information that can be extracted by any canonical NI decoder, and the difference:

  ΔI_Min^NI = ΔI_NIL^NI = I(S;R) − I(S;R^NIL) ≥ 0,   (21)
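The decomposition of Eqs. 19-21 can be evaluated directly on a toy example. The distributions below are an illustrative assumption (stimuli coded purely by the sign of the correlation, not an example taken from the figures): every response is mapped by Eq. 19 onto the same NI-likelihood vector, so the NI assumption destroys the entire encoded bit and ΔI_NIL^NI = I(S;R) − I(S;R^NIL) = 1 bit.

```python
# Evaluating Eq. 21 on an assumed correlation-coded example: under S1 the
# two neurons are anticorrelated, under S2 they are correlated, and all
# single-neuron marginals are identical across stimuli.
from collections import defaultdict
from math import log2

prior = {"S1": 0.5, "S2": 0.5}
P_cond = {"S1": {(0, 1): 0.5, (1, 0): 0.5},   # anticorrelated under S1
          "S2": {(0, 0): 0.5, (1, 1): 0.5}}   # correlated under S2
stimuli = list(prior)

def marginal(s, n, v):
    return sum(p for r, p in P_cond[s].items() if r[n] == v)

def r_nil(r):
    """Eq. 19: vector of NI likelihoods, one entry per stimulus."""
    return tuple(round(marginal(s, 0, r[0]) * marginal(s, 1, r[1]), 12)
                 for s in stimuli)

def mutual_info(rep):
    """I(S; rep(R)) in bits, for a deterministic map rep of the response."""
    joint = defaultdict(float)
    for s in stimuli:
        for r, p in P_cond[s].items():
            joint[(s, rep(r))] += prior[s] * p
    ps = defaultdict(float); px = defaultdict(float)
    for (s, x), p in joint.items():
        ps[s] += p; px[x] += p
    return sum(p * log2(p / (ps[s] * px[x]))
               for (s, x), p in joint.items() if p > 0)

info_R = mutual_info(lambda r: r)     # I(S;R)
info_NIL = mutual_info(r_nil)         # I(S;R_NIL)
print(f"I(S;R) = {info_R:.3f} bits, I(S;R_NIL) = {info_NIL:.3f} bits, "
      f"Delta I_NIL = {info_R - info_NIL:.3f} bits")
```

Because the single-neuron marginals coincide for the two stimuli, all four responses receive the same NI-likelihood vector, which is the extreme case of the non-injective mapping discussed next.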
Samengo, 2010, and references therein), revealing what sort of information about the stimulus is preserved or lost and which response features encode such information.

After each transformation in the decoding process, two or more population responses whose distinction is informative may be represented in identical manner so their distinction is no longer available for subsequent transformations. Whenever that happens, information is lost. To assess whether the distinction between two (or more) population responses R_A and R_B is informative or constitutes noise, we first construct a representation R̃ that treats those responses as if they were the same, but keeps the distinction between all other responses, and then we compare the encoded information with and without the distinction (Eyherabide and Samengo, 2010):

  ΔI_{R→R̃} = I(R;S) − I(R̃;S) ≥ 0.   (22)

Whenever ΔI_{R→R̃} is zero, the distinction between R_A and R_B provides no additional information and only constitutes noise. For example, in Figure 4B, this is the case for population responses [R1,R2] = [L, H] and [H, L] or responses [L, L] and [H, H]. Notice that, when responses vary in a continuum, single responses are typically associated with a probability density. In that case, a representation R̃ that treats two single responses R_A and R_B as equivalent induces no information loss. In the continuous case, an information loss occurs only when R̃ treats a set of population responses as equivalent and, in addition, the probability of the set is nonzero.

The condition for the distinction between responses to be informative (Eq. 22) can also be written as a direct comparison between the real posterior probabilities P(S|R). The distinction between two (or more) population responses R_A and R_B constitutes noise if and only if:

  P(S_k|R_A) = P(S_k|R_B),   (23)

for all stimuli S_k (this comes directly from Eq. 22). Otherwise, their distinction is informative, and ignoring it induces an information loss ΔI_{R→R̃}. Examples of informative variations and noise when population responses are represented as continuous variables are given in the last section of Results.

We can now formally state when and why noise correlations are important in neural decoding. During the decoding process, the population response R is first transformed into R^NIL immediately after the NI assumption takes place (Fig. 3). This transformation induces an information loss when (and only when) the three following conditions are fulfilled:

(24a) The mapping R → R^NIL is not injective, and, as a consequence, there are two or more responses {R_1, …, R_Q} mapped onto the same R^NIL;
(24b) Equation 23 is not fulfilled for at least two responses complying with condition 24a and one stimulus S_k; and
(24c) The probability of the set of population responses fulfilling both conditions 24a and 24b is nonzero.

Noise correlations are important for decoding specifically those responses satisfying these three conditions: the cause of the loss (the why) relies on the fact that the NI assumption no longer allows the decoder to take into account the differences in their information content. The losses can be linked to specific stimulus and response features by interpreting R^NIL as a reduced representation of the population response (Eyherabide and Samengo, 2010). In such a paradigm, population responses are represented as a vector of response features, each conveying information about specific stimulus features. Only some of those response features (and their information content) are preserved after the NI assumption (first stage). The analysis of the preserved and lost features allows one to determine the effect of the NI assumption on the neural code.

With conditions 24a, 24b, and 24c in mind, we can provide an approximate estimation of the likelihood that correlations be relevant. For discrete responses, noise correlations are often irrelevant because condition 24a is often violated. This condition requires that at least two different responses, R_A and R_B, be mapped onto the same vector, R^NIL. This is unlikely, though, because the mapping from R to R^NIL goes from a discrete set to a continuous space (Eq. 19). For continuous responses, the mapping from R to R^NIL goes from a continuous space of dimension N (where N is the number of neurons) to a continuous space of dimension K (where K is the number of stimuli). Intuitively, one would therefore expect that correlations would be relevant more often in the case of continuous responses than in the case of discrete responses, and that the importance of correlations should tend to increase with N and decrease with K.

This approximate estimation, however, should not be taken as a hard rule. One can construct an infinite number of counterexamples in which noise correlations are crucial for discrete responses (Figs. 2, 4) or in which the importance of noise correlations, for discrete and continuous responses, decreases with N and increases with K, exactly opposite to the approximate estimation mentioned above. Nevertheless, one can prove that, for examples with a finite number of discrete responses, these counterexamples, though infinite in number, constitute a set of measure zero in the space of all possible examples, at least when using a counting measure (i.e., a measure that simply counts the number of cases regardless of how likely they occur in nature; Tao, 2011). Unfortunately, for examples with continuous responses or with an infinite number of discrete responses, such proof remains elusive.

Furthermore, one should observe that, in experimental conditions, neither responses nor response probabilities can be measured with infinite precision. Experimental errors and limited sampling may both produce broad distributions of responses, each associated with distributions of response probabilities. Mathematically, each population response R is associated not with a probability P(R|S), as would occur if probabilities could be estimated with infinite precision, but with a distribution of probabilities Q[P(R|S)] for each stimulus S. The distribution Q[P(R|S)] can be estimated either using Bayesian approaches or resampling methods (Bishop, 2006; Hastie et al., 2009). In other words, due to experimental errors and limited sampling, P(R|S) becomes a random variable with probability Q[P(R|S)]. Furthermore, the mapping from R to R^NIL (Eq. 19) becomes a probabilistic one-to-many mapping, as opposed to the deterministic one-to-one mapping that would be obtained if response probabilities were measured with infinite precision. The change in the nature of the mapping R → R^NIL should be taken into account when evaluating ΔI_NIL^NI, resulting in a distribution of information losses rather than in a unique deterministic value. Moreover, the equalities in conditions 24a and 24b should be interpreted in statistical terms (i.e., equalities should be assessed through hypothesis testing). In these circumstances, NI decoders become lossy more frequently than predicted by the approximate prediction above. Therefore, the relevance of correlations and the cer- […]
Figure 5. The impact of the choice of a NI decoder. A, The classical NI decoder is modeled as a three-stage process involving: the NI assumption (first stage), in which the population response is transformed into a vector of NI likelihoods (R^NIL); Bayes rule (second stage), in which R^NIL is transformed into a vector of NI posteriors (R^NIP); and the estimation criterion (third stage), in which the decoded stimulus is inferred from R^NIP. Each stage may induce an information loss. B, C, Population responses of Figure 4, A and C, decoded with the classical NI decoder. B, Both R^NIL and R^NIP keep responses elicited by different stimuli segregated and therefore all information is preserved. However, the stimulus cannot always be correctly inferred by simply choosing the one corresponding to the maximum NI posterior (argmax criterion, dotted line and arrows; right). Nevertheless, there is an estimation criterion capable of correctly estimating the stimulus (continuous lines and arrows; right). C, Although the NI assumption preserves all the encoded information, after Bayes rule, responses associated with different stimuli are merged, and thus some (but not all) information is lost (ΔI_NIB^NI is greater than zero). As a result, no estimation criterion is capable of perfectly decoding the stimulus. However, other NI decoders may still be optimal for decoding (Fig. 4C).
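The pipeline described in the caption can be reproduced numerically for the Figure 5C example. The response supports are assumptions consistent with the printed tables: S1 is taken to place mass 2/3 on [H, L] and 1/3 on [L, H], S2 to place 2/3 on [H, H] and 1/3 on [L, L], with equal priors (the text's 0.66 is read as 2/3 rounded).

```python
# Reconstruction of the Figure 5C pipeline R -> R_NIL -> R_NIP.
# Assumption: S1 = {[H,L]: 2/3, [L,H]: 1/3}, S2 = {[H,H]: 2/3, [L,L]: 1/3},
# equal priors; this reproduces the printed values up to rounding.
L, H = 0, 1
prior = {"S1": 0.5, "S2": 0.5}
P_cond = {"S1": {(H, L): 2/3, (L, H): 1/3},
          "S2": {(H, H): 2/3, (L, L): 1/3}}
stimuli = ["S1", "S2"]

def marginal(s, n, v):
    """Single-neuron conditional P(Rn = v | s)."""
    return sum(p for r, p in P_cond[s].items() if r[n] == v)

def r_nil(r):
    """Eq. 19, first stage: vector of NI likelihoods."""
    return tuple(marginal(s, 0, r[0]) * marginal(s, 1, r[1]) for s in stimuli)

def r_nip(r):
    """Second stage: Bayes rule applied to the NI likelihoods."""
    nil = r_nil(r)
    z = sum(prior[s] * nil[i] for i, s in enumerate(stimuli))
    return tuple(prior[s] * nil[i] / z for i, s in enumerate(stimuli))

for r in [(L, L), (L, H), (H, L), (H, H)]:
    print(r, "->", tuple(round(x, 2) for x in r_nil(r)),
          "->", tuple(round(x, 2) for x in r_nip(r)))
# The first stage merges nothing, but Bayes rule assigns the same NI
# posterior to [L, H] and [H, H] (and to [H, L] and [L, L]): the second
# stage merges informative responses.
```
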
  R → R^NIL → R^NIP
  [L, M] → [0.25, 0]    → [1, 0]
  [M, L] → [0.25, 0]    → [1, 0]
  [M, M] → [0.25, 0.25] → [0.75, 0.25]
  [H, H] → [0, 0.25]    → [0, 1]

These transformations are also shown in Figure 5B. The first stage only merges the population responses [L, M] and [M, L], but their distinction only constitutes noise (ΔI_{R→R^NIL} is zero; Eq. 22), and thus no information is lost (i.e., ΔI_NIL^NI is zero). The second stage is an injective mapping so it also does not affect the decoded information (ΔI_NIB^NI is zero).

Using Equations 19 and 25 in an analogous manner, we can generalize the previous results to arbitrary values of the stimulus and response probabilities: responses associated with different stimuli are always represented in a different manner, both after the first stage and after the second stage. As a result, any information loss (Fig. 1G,J) is due to the estimation criterion. Nevertheless, among all possible estimation criteria, there is at least one capable of extracting all the information remaining in R^NIP, which is equal to the encoded information. This optimal estimation criterion, however, differs from the maximum-posterior criterion (Fig. 5B, right).

Another example is analyzed in Figure 5C (previously analyzed in Fig. 4C), in which losses are distributed differently throughout the decoding process. Stimuli are equally likely and both P(H, L|S1) and P(H, H|S2) are set to 0.66. Throughout the NI decoding process, the population responses R = [R1,R2] are first transformed (recall Eqs. 19 and 25) as follows:

  R → R^NIL → R^NIP
  [L, L] → [0.22, 0.11] → [0.66, 0.33]
  [L, H] → [0.11, 0.22] → [0.33, 0.66]
  [H, L] → [0.44, 0.22] → [0.66, 0.33]
  [H, H] → [0.22, 0.44] → [0.33, 0.66]

These transformations are also shown in Figure 5C. The first transformation merges no population responses. Therefore, the distinction between population responses is preserved after the NI assumption and ΔI_NIL^NI is zero. The second transformation, however, merges response [L, H] with [H, H] and [H, L] with [L, L]. Unlike R^NIL, the representation R^NIP carries less information about the stimulus than the encoded information and ΔI_NIB^NI is greater than zero (Fig. 6A). Therefore, although there exists a canonical NI decoder capable of decoding without error (Fig. […]

Figure 6. Difference between assessing the role of noise correlations using mutual information or decoding error. The population response shown in Figure 5C is decoded using the classical NI decoder. Response probabilities are set according to P(H, L|S1) = P(L, L|S2). For different stimulus probabilities P(S1), A shows the variation of the minimum information loss ΔI_NIP^NI relative to the encoded information I(R;S), and B shows the variation of the increment in the minimum decoding error Δε_NIP^NI relative to the minimum decoding error ε_Min(R;S). The decoding error is here measured as decoding-error probability. The curves for P(S1) = p are identical to the curves for P(S1) = 1 − p (0 ≤ p ≤ 1). A, Unlike the case shown in Figure 4B, here, information is only partially lost and the loss depends on the stimulus and response probabilities. The maximum loss, however, only occurs when P(H, L|S1) reaches 0.5 regardless of the stimulus probability. B, Unlike ΔI_NIP^NI, Δε_NIP^NI approaches its maximum value when P(H, L|S1) is greater or equal to P(S1).

[…]mation must be based purely on the NI assumption (i.e., on R^NIL).

Using Equations 21, 23, and 26 we can now prove why estimators based on ΔI^NI (criterion 8a) and ΔI_LS^NI (criterion 8b) overestimate the minimum information loss ΔI_Min^NI. Both methods have two occasions to include unnecessary losses. The first occasion appears when transforming R^NIL into R^NIP (Bayes rule). When neurons are truly noise independent, ΔI_NIL^NI and ΔI_NIP^NI coincide; otherwise, as a result of Bayes rule, some responses for which a distinction is informative may be merged (Fig. 5C), and therefore ΔI_NIP^NI ≥ ΔI_NIL^NI. The second occasion appears when passing from R^NIP to the decoded stimulus S^NI. In the case of ΔI^NI, the estimation criterion usually coincides with the maximum posterior, which, as shown above, may be suboptimal when used under the NI assumption. In the case of ΔI_LS^NI, R^NIP is transformed into a ranking of stimuli. This stage, although more finely grained than the purely maximum-posterior criterion, may still lump distinct representations R^NIP into one single ranking and thereby perhaps lose information.

Minimum decoding error
4C), classical NI decoders are unable to extract all of the informa- The analysis of the effects of ignoring noise correlations in
tion preserved after the NI assumption. neural decoding was here performed using mutual informa-
For other values of stimulus and response probabilities, al- tion (Cover and Thomas, 1991; Borst and Theunissen, 1999).
most always a canonical NI decoder exists capable of decoding This quantity sets a limit to the decoding performance (e.g., in
without error (Fig. 4C), except in the isolated case shown in Fig- the number of stimulus categories that can be distinguished
ure 4B. However, classical NI decoders may still be incapable of with negligible error probability), but this limit may only be
extracting all the information preserved after the NI assumption. achievable when decoding long sequences of population re-
Unlike R NIL, population responses associated with different sponses (i.e., comprising several consecutive population re-
stimuli are not always mapped after Bayes rule onto different sponses, also known as block coding; Cover and Thomas,
R NIP. There are two cases in which responses are merged: (1) 1991). Long sequences of responses inevitably have a long
when P( L, HS1) P( H, HS2), in which case response [ L, H] is duration. To produce timely behavioral reactions, however,
merged with [ L, L] and [ H, L] with [ H, H]; and (2) when neural systems must process information in short time win-
P( L, HS1) P( L, LS2), in which case response [ L, H] is merged dows (tens or hundreds of milliseconds) (Hari and Kujala,
with [ H, H] and [ H, L] with [ L, L] (this case is shown in Fig. 5C). 2009; Panzeri et al., 2010). Long sequences of responses may
Therefore, the representation R NIP carries less information about therefore be inconsistent with the fast behavioral responses
the stimulus than the encoded information and INI NIB
is greater observed in nature. In addition, mutual information may not
than zero (Fig. 6A). These cases constitute examples in which NI adequately represent the cost of wrongly estimating the stim-
decoders can be optimal, but for achieving optimality, the esti- ulus (Nirenberg and Latham, 2003). To overcome these issues,
Eyherabide and Samengo Noise Correlations in Neural Decoding J. Neurosci., November 6, 2013 33(45):1792117936 17931
in this section, we also bound the inefficiency of NI decoders using the minimum decoding error.

The minimum decoding error ε_Min(S;R) (Eq. 11) can be defined using different cost functions (Simoncelli, 2009), allowing one to assess the importance of noise correlations when decoding is performed on a single-response basis. Like mutual information, it is non-negative and, for any transformation R → R̃ of the population response, satisfies

ε_Min(R̃;S) = E_R̃ [ min_{S^Dec} E_{S|R̃} ε(S, S^Dec) ]  (28b)
 ≥ E_R [ min_{S^Dec} E_{S|R} ε(S, S^Dec) ]  (28c)
 = ε_Min(R;S).  (28d)

Because of these similarities, the mathematical framework and the interpretations obtained from the minimum decoding error are almost identical to those of mutual information, taking care of the change in the sign of the data-processing inequality (as shown below). However, when applied to experimental data, the results obtained using mutual information or minimum decoding error may differ both quantitatively and qualitatively depending on the case under study (as shown in the next section).

The increment in the decoding error Δε_NI induced by a canonical NI decoder (Fig. 3) with respect to the minimum decoding error ε_Min(R;S) (that would be achievable if noise correlations were taken into account) is given as follows:

Δε_NI = ε_NI − ε_Min(R;S),  (29)

where ε_NI is the actual decoding error induced by the specific implementation of the canonical NI decoder. Analogously to Equation 20, Δε_NI can be separated as follows:

Δε_NI = [ε_Min(S;R^NIL) − ε_Min(S;R)] + [ε_NI − ε_Min(S;R^NIL)] = Δε_NI^NIL + Δε_NI^Est.  (30)

Δε_NI^NIL and Δε_NI^Est represent the increment in the minimum decoding error induced by the NI assumption and the estimation criterion, respectively. Among all mappings between R^NIL and the decoded stimulus S^NI, there is one for which

S^NI = arg min_{S^Dec} E_{P(S|R^NIL)} ε(S, S^Dec).  (31)

Such a decoder induces no additional increment in the minimum decoding error (Eq. 12). Therefore, Δε_NI^NIL constitutes the minimum increment in the minimum decoding error attainable by at least one canonical NI decoder (i.e., those purely based on the NI assumption). Whenever Δε_NI^NIL is zero, the NI assumption does not increase the minimum decoding error and noise correlations can be safely ignored.

Similarly to Equation 26, the increment in the decoding error induced by classical NI decoders (Fig. 5A) can be written as follows:

Δε_NI = Δε_NI^NIL + [ε_Min(S;R^NIP) − ε_Min(S;R^NIL)] + [ε_NI − ε_Min(S;R^NIP)] = Δε_NI^NIL + Δε_NI^NIB + Δε_NI^Est.

Whenever Δε^{R→R̃} = ε_Min(R̃;S) − ε_Min(R;S) = 0, the responses merged by the transformation R → R̃ do not increment the minimum decoding error and can be safely ignored. Otherwise, Δε^{R→R̃} shows the increment in the minimum decoding error due to ignoring their distinction.

As an example, consider the population response shown in Figure 4, B and C. When responses are equally likely (Fig. 4B), Δε_NI^NIL reaches its maximum value as follows:

Δε_NI^NIL = min_{S^Dec} E_{P(S)} ε(S, S^Dec).  (34)

Indeed, after the NI assumption, all population responses are represented in the same way and the NI decoder performs at chance level. However, in all cases in which responses are not equally likely (Fig. 4C), Δε_NI^NIL is zero. In other words, if responses are not equally likely, a canonical NI decoder exists that is capable of decoding the stimulus with the same accuracy as if noise correlations were taken into account (such a decoder is shown in Fig. 4C). Classical NI decoders operate substantially worse (Fig. 5C) because population responses become indistinguishable before the estimation process, and thus perform at chance level for a wide range of response probabilities (Fig. 6B). Even though some information still remains in R^NIP (Fig. 6A), it cannot be extracted using single responses.

Although, in general, ΔI_NI^NIL and Δε_NI^NIL are not related deterministically (Thomson and Kristan, 2005), here we show some useful relations between these two quantities in specific cases as follows:

(35a) If ΔI_NI^NIL is zero, then Δε_NI^NIL is zero. Proof: If ΔI_NI^NIL = 0, then I(S;R|R^NIL) = 0 and hence R can be written as a transformation (deterministic or stochastic) of R^NIL. Following the data-processing inequality (Eq. 27), Δε_NI^NIL ≤ 0; being also non-negative, it is therefore zero.

(35b) Corollary: if Δε_NI^NIL > 0, then ΔI_NI^NIL > 0.

(35c) … non-negativity ensures that Δε_NI^NIL = 0. In the absence of errors, such an NI decoder extracts all the encoded information, and thus ΔI_NI^NIL = 0.

(35d) If ΔI_NI^NIL is equal to the encoded information I(S;R), then ε_Min(R^NIL,S) is given by Equation 34. Proof: In this case, S and R^NIL are independent. The result follows from introducing independence into Equation 11.
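Equations 29, 30, and 34 can be made concrete numerically. The sketch below is an illustration only (the stimulus-response table is hypothetical, not taken from the article): it computes the minimum decoding error with full knowledge of R, repeats the computation after the NI transformation R → R^NIL, and takes their difference Δε_NI^NIL, using error probability as the cost function.

```python
import numpy as np
from collections import defaultdict

# Hypothetical table: 2 stimuli, 2 binary neurons (L=0, H=1).
# Columns index the population response (r1, r2): LL, LH, HL, HH.
P_R_given_S = np.array([[0.1, 0.2, 0.4, 0.3],   # P(R | S1)
                        [0.3, 0.4, 0.2, 0.1]])  # P(R | S2)
P_S = np.array([0.5, 0.5])
joint = P_S[:, None] * P_R_given_S              # P(S, R)

def min_error(groups):
    """Minimum error probability when only the group of the response is
    known: each group is decoded as the stimulus carrying the largest
    joint mass inside it (error-probability cost, as in Eq. 11)."""
    return 1.0 - sum(max(joint[s, list(g)].sum() for s in range(2))
                     for g in groups)

# Full knowledge of R: every response is its own group.
eps_min_R = min_error([[r] for r in range(4)])

# NI assumption (R -> R^NIL): responses with equal vectors of NI
# likelihoods P_NI(R|Sk) = P(r1|Sk) * P(r2|Sk) become indistinguishable.
P_r1 = np.stack([P_R_given_S[:, [0, 1]].sum(1),   # P(r1 = L | Sk)
                 P_R_given_S[:, [2, 3]].sum(1)])  # P(r1 = H | Sk)
P_r2 = np.stack([P_R_given_S[:, [0, 2]].sum(1),   # P(r2 = L | Sk)
                 P_R_given_S[:, [1, 3]].sum(1)])  # P(r2 = H | Sk)
groups = defaultdict(list)
for idx, (r1, r2) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    groups[tuple(np.round(P_r1[r1] * P_r2[r2], 12))].append(idx)

eps_min_NIL = min_error(list(groups.values()))
delta_eps_NI_NIL = eps_min_NIL - eps_min_R      # first term of Eq. 30
print(eps_min_R, eps_min_NIL, delta_eps_NI_NIL)  # ~0.3, ~0.3, ~0.0
```

In this table the NI assumption merges [L,L] with [L,H] and [H,L] with [H,H], yet Δε_NI^NIL vanishes because each merged pair shares its most likely stimulus; mutual information, in contrast, does decrease under the same merging, illustrating how the two criteria can disagree.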
Figure 7 (panels A–F). Impact of ignoring noise correlations when decoding population responses with Gaussian distributions. In both examples, black and gray ellipses represent the contour curves of the response distributions elicited by two stimuli, S1 and S2, respectively. Stimuli are equally likely. Response parameters are as follows: μ_nk = (−1)^k (A–C) and μ_nk = 2^(1−n)(−1)^k (D–F); σ_nk = 1, ρ1 = −0.5, and ρ2 = 0.5 (Eq. 42). A, D, Optimal decoding strategy (minimization of the decoding-error probability) when correlations are known (Eq. 13). White regions are decoded as S1 and gray regions as S2. B, E, Optimal decoding strategy when correlations are ignored (Eq. 31). The arrows show the transformation of the population response R throughout the decoding process, as described in Figure 3. Population responses (left) are first transformed into vectors of NI likelihoods R^NIL (Eq. 19; middle). The distinction between regions filled with different gray levels is preserved. B, The optimal NI decoder maps the white region in the middle panel onto S1 and the gray region onto S2, thus decoding population responses in the same way as in A. E, The first transformation merges population responses that are decoded differently in D (compare the regions), thus incrementing the minimum decoding error. The optimal NI decoder maps white and light-gray regions in the middle panel onto S1 and gray and dark-gray regions onto S2. In both B and E, the optimal estimation criterion differs from the maximum-likelihood (or maximum-posterior) criterion, which maps regions above and below the diagonal (middle, dashed line) onto S2 and S1, respectively. C, F, Analysis of the responses R that can be merged, after the NI assumption, without information loss. Left, Contour curves of the NI likelihoods P_NI(R|S1) (black) and P_NI(R|S2) (gray). Black dots represent two responses that, for each stimulus, have the same NI likelihoods (and are thus mapped onto the same R^NIL). Right, Contour curves of P(S1|R) (black) and P(S2|R) (gray) passing through the responses denoted in the left panel (continuous and dashed lines, respectively). C, The NI assumption merges pairs of responses R that are symmetric with respect to the diagonal (left, dashed line). Because these pairs also have the same posterior probabilities (right), Equation 23 is fulfilled and no information is lost. F, The NI assumption merges pairs of responses R that are symmetric with respect to the line R1 = … (left, dashed line). These pairs have different posterior probabilities (right). Therefore, Equation 23 is not fulfilled and some information is lost.
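The comparison of Figure 7 can be explored numerically. The sketch below is illustrative only: the parameter values are assumed in the spirit of panels A–C (means ±(1,1), unit variances, ρ1 = −0.5, ρ2 = +0.5), and the NI decoder is implemented with the maximum-likelihood criterion, which, as the caption notes, is generally suboptimal among NI decoders. Both error probabilities are estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters: S1 responses negatively correlated, S2 positively.
mu = [np.array([-1.0, -1.0]), np.array([1.0, 1.0])]
cov = [np.array([[1.0, r], [r, 1.0]]) for r in (-0.5, 0.5)]

n = 100_000
X = np.vstack([rng.multivariate_normal(mu[k], cov[k], size=n) for k in (0, 1)])
y = np.repeat([0, 1], n)

def log_gauss(X, m, C):
    """Row-wise log-density of a 2D Gaussian with mean m, covariance C."""
    d = X - m
    Cinv = np.linalg.inv(C)
    quad = np.einsum('ij,jk,ik->i', d, Cinv, d)
    return -0.5 * quad - 0.5 * np.log((2 * np.pi) ** 2 * np.linalg.det(C))

# Decoder with knowledge of correlations: maximum likelihood with the
# true covariances (stimuli equally likely, so this matches Eq. 13).
ll_full = np.stack([log_gauss(X, mu[k], cov[k]) for k in (0, 1)])
err_full = np.mean(ll_full.argmax(axis=0) != y)

# Classical NI decoder: likelihoods built from the marginals only
# (diagonal covariance), i.e., correlations are ignored.
ll_ni = np.stack([log_gauss(X, mu[k], np.eye(2)) for k in (0, 1)])
err_ni = np.mean(ll_ni.argmax(axis=0) != y)

print(f"error, correlations known: {err_full:.4f};  NI error: {err_ni:.4f}")
```

With these assumed parameters, the naive NI boundary is the diagonal R1 + R2 = 0, whereas the optimal boundary is quadratic; the measured NI error therefore slightly exceeds the optimal one, even though the text shows that an NI decoder with the optimal (curved) estimation criterion would close this gap.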
The analysis can also be generalized to other measures of transmitted information, such as those defined by Victor and Nirenberg (2008), with the condition that they comply with the data-processing inequality (Cover and Thomas, 1991). Moreover, it can be extended to other probabilistic mismatched decoders, that is, to decoders constructed using stimulus-response probability distributions that differ from the real ones (Quian Quiroga and Panzeri, 2009; Oizumi et al., 2010). One simply replaces the NI likelihoods with those corresponding to the probabilistic model under consideration. Decoding in the real brain may be subjected to additional constraints imposed by biophysics, connection length, metabolic cost, or robustness, which cannot simply be represented by a generalized measure of information. Our approach can be extended to these cases by computing the difference between the information transmitted by the optimal NI decoder that additionally satisfies these constraints and that of optimal decoders constructed with knowledge of noise correlations that operate under the same constraints.

Role of noise correlations in biologically plausible models
So far, we have discussed the role of noise correlations in examples in which the response space is discrete. In those examples, the minimum information loss and the minimum decoding error lead to the same conclusions about the role of noise correlations. However, more interesting models from the biological point of view generally involve responses varying in a continuum. In this section, we apply our theoretical framework to continuum extensions of the discrete examples mentioned above and, on the way, compare what can be learned about the role of noise correlations when using the minimum information loss (ΔI_NI^NIL) and the minimum increment in the minimum decoding error (Δε_NI^NIL).

Consider the two examples shown in Figure 7, in which responses to each stimulus are drawn from Gaussian distributions. Similar examples have been previously studied assuming equal variance and correlations among neurons (Sompolinsky et al., 2001; Wu et al., 2001; Averbeck and Lee, 2006; Averbeck et al., 2006). These studies, however, have also pointed out that those highly homogeneous examples are rather unlikely in nature. More biologically driven examples should include differences in variances, in correlations, and in the symmetry of the distribution of responses among neurons, as observed, for example, in the monkey MT area (Huang and Lisberger, 2009) and V1 area (Kohn and Smith, 2005). These differences, even if small, may change dramatically the role of noise correlations in neural decoding (Fig. 1). With these ideas in mind, we have here chosen two examples in which responses elicited by stimulus S1 are negatively correlated whereas responses elicited by S2 are positively correlated.

The example of Figure 7A–C is an extension to the continuum of the case studied in Figures 1A, 4A, and 5B. In what follows, we first show that, for the particular values of the parameters used in Figure 7A–C, noise correlations are irrelevant in neural decoding, a conclusion that previous estimators failed to reveal. However, later we show
that the irrelevance of noise correlations is a direct consequence of the specific values of the parameters used in this example and that, contrary to the discrete case, noise correlations are almost always important for arbitrary values of the parameters.

For the example shown in Figure 7A–C, the estimations of the minimum information loss … responses mapped onto the same R^NIL have the same posterior probabilities and thus comply with Equation 23. Therefore, no information is lost after the NI assumption. In other words, noise correlations can be safely ignored.

Analogously, Δε_NI^NIL is zero, and thus the NI assumption does not increment the minimum decoding error. To see this, compare Figure 7, A and B, in which we show the performance of optimal decoders with and without knowledge of correlations, respectively. Population responses associated with different decoded stimuli in Figure 7A are never merged after the NI assumption (Fig. 7B, middle). Therefore, the same mapping from population responses to decoded stimuli can be constructed even after the NI assumption takes place. Notice, nevertheless, that the optimal decision boundary based on the NI likelihoods is curved and differs from the maximum-likelihood criterion (dashed diagonal) or maximum-posterior criterion, which coincides with the maximum-likelihood criterion when stimuli are equally likely.

The situation is different in the example shown in Figure 7, D–F. Here, the estimations of ΔI_NI^Min are as follows:

ΔI_NI = 19.16%,  ΔI_NI^D = 3.22%,
ΔI_NI^LS = 19.16%,  ΔI_NI^DL = 3.22%,  (38)

whereas

ΔI_NI^NIL = 1.32%,  (39)

indicating that previous estimations overestimated the importance of noise correlations, although noise correlations are not completely irrelevant. Notice that, as in the previous example, ΔI_NI (or ΔI_NI^LS) is far greater than ΔI_NI^NIL (and also greater than ΔI_NI^DL and ΔI_NI^D). These results indicate that the maximum rate of stimuli that can be processed without decoding errors decreases by about 1% after ignoring noise correlations. One may wonder how the performance of the NI decoder is affected when decoding single population responses, a situation in which response sequences are short and decoding errors are allowed.

To answer this question, we determine the minimum decoding error (measured as the error probability; see Eq. 10 in Materials and Methods) with and without the NI assumption as follows:

ε_Min(R^NIL,S) = 7.847%,
ε_Min(R,S) = 7.834%.  (40)

Their difference (which equals Δε_NI^NIL; see Eq. 30) represents only 0.166% of ε_Min(R,S), indicating that noise correlations are almost irrelevant. … where P(Si|R^NIL) is proportional to the sum of all joint probabilities P(R,S) whose response R is mapped onto R^NIL. Although, in general, population responses mapped onto regions above and below the dashed diagonal are decoded as S2 and S1, respectively, for some regions near the origin of coordinates, the situation is reversed.

We can now generalize the results to arbitrary Gaussian distributions and stimulus probabilities, following the same reasoning as in Figure 7, C and F. Consider that the responses of two neurons R1 and R2 elicited by two stimuli S1 and S2 have a Gaussian distribution (Bishop, 2006) given as follows:

P(R|Sk) = N( [R1, R2]; [μ_1k, μ_2k], [[σ_1k², c_k], [c_k, σ_2k²]] ),  (42)

where μ_nk, ρ_k, and σ_nk represent the mean values, correlation coefficients, and standard deviations, respectively, of the responses of the n-th neuron to stimulus Sk, and c_k = ρ_k σ_1k σ_2k. Noise correlations are almost always important for decoding except when the following conditions are met:

…  (43a)
…  (43b)
…  (43c)

Conditions 43a, 43b, and 43c establish relations between the mean values μ_nk, correlation coefficients ρ_k, and standard deviations σ_nk of the responses of the n-th neuron to stimulus Sk, respectively. Conditions 43a and 43b hold only when population responses always exhibit the same type of correlations for all stimuli (i.e., they are always positively correlated or always negatively correlated). Condition 43b also requires that all contour curves of the NI response distributions are shifted and/or scaled versions of one another (but not rotated). Finally, condition 43c analogously
constrains the shape of the contour curves, but holds for arbitrary correlation coefficients. Notice the change in the subindexes from condition 43b to condition 43c. For any departure from conditions 43a to 43c, noise correlations are important for decoding: both ΔI_NI^NIL and Δε_NI^NIL are greater than zero; their values depend on the specific case under study, and can range from 0 to 100% (e.g., when condition 43a holds and variances are equal).

Figure 8. The NI decoder can paradoxically extract more information than that encoded by individual neurons. A, Example showing the responses of two neurons R1 and R2 elicited by three stimuli S1, S2, and S3. All stimuli are equally likely. Response probabilities P([L,L]|S3), P([M,M]|S1), and P([H,H]|S2) are equal to γ, and response probabilities P([L,M]|S1), P([M,L]|S1), P([L,H]|S3), P([H,L]|S3), P([M,H]|S2), and P([H,M]|S2) are equal to 0.5 − γ/2, with γ varying between 0 and 1. B, The NI decoder is capable of extracting more information than the sum of the information encoded by individual neurons for a wide range of response probabilities. This effect is enhanced by the fact that the latter information is only an upper bound of the information conveyed individually by the neurons in the population.

Discussion
In neural decoding, the importance of noise correlations has been linked to the minimum inefficiency of NI decoders. These decoders have been constructed using two different methods. The first one involves training specific types of decoders (generally linear) using surrogate NI responses (Nirenberg et al., 2001; Latham and Nirenberg, 2005; Quian Quiroga and Panzeri, 2009; Berens et al., 2012). Here, we showed that the inefficiency of these decoders may, depending on the decoding models and optimization functions, overestimate or underestimate the importance of noise correlations, and may not even be related to the NI assumption (Fig. 2). Therefore, the results obtained with this method ought to be observed with caution. The second method involves probabilistic decoders that explicitly take the NI assumption as part of the decoding algorithm (Nirenberg et al., 2001; Wu et al., 2001; Nirenberg and Latham, 2003; Latham and Nirenberg, 2005; Ince et al., 2010; Oizumi et al., 2010); the consistency with the NI assumption is therefore guaranteed (Nirenberg and Latham, 2003).

The inefficiency of probabilistic NI decoders (hereafter called NI decoders) has been previously assessed either by measuring the information preserved in their output (ΔI_NI and ΔI_NI^LS; Nirenberg et al., 2001; Ince et al., 2010) or by using information-theoretical quantities (ΔI_NI^D and ΔI_NI^DL; Nirenberg et al., 2001; Nirenberg and Latham, 2003; Latham and Nirenberg, 2005; Oizumi et al., 2010). Here, we compared these estimators for a wide range of population responses and probability distributions. We found that none of them bounds the inefficiency of all NI decoders tightly (Figs. 1, 7) and all of them overestimate the importance of noise correlations. Therefore, previous studies concluding that noise correlations are important based on these estimators may require a second evaluation with the methods presented here.

Previous studies have claimed that NI decoders inferring the stimulus from a maximum-posterior criterion (Eq. 7) are optimal. When operating with the true response probabilities, this criterion minimizes the decoding-error probability (Eq. 13). However, neither other definitions of decoding error (Eq. 10) nor the information loss (Eq. 2) are guaranteed to be minimized. When operating with NI response probabilities, not even the minimization of the decoding-error probability is guaranteed (unless ΔI_NI^D is zero; Nirenberg and Latham, 2003). As shown in Figures 5 and 7, using a maximum-posterior criterion may result in overestimating the minimum inefficiency of NI decoders and therefore the importance of noise correlations.

To solve this problem, we first modeled NI decoders as series of transformations of the population response, with only the first one embodying the NI assumption and the following ones representing the estimation criterion (Figs. 3, 5A). We then noticed that the information loss ΔI_NI^NIL induced by the first transformation is common to all NI decoders and, with no restrictions on the stimulus estimation algorithms, constitutes an attainable lower bound to the lost information (the coding theorem; Cover and Thomas, 1991). The computation of ΔI_NI^NIL (and also its variance and bias) can be done using standard tools for the analysis of neural codes (Montemurro et al., 2007; Panzeri et al., 2007; Eyherabide and Samengo, 2010).

The interpretation of NI decoders as sequences of processes is fundamental to understanding the role of noise correlations in neural decoding. Using this paradigm, we studied the effect of the NI assumption on later stages of the NI decoder. After the NI assumption, the application of Bayes rule may give rise to additional information losses (Fig. 5C). Interestingly, information losses may be reduced by using different stimulus prior probabilities than those set in the experiment (Fig. 1G). However, when applied to the true conditional response probabilities (as opposed to the NI response probabilities), Bayes rule induces no information loss. These results stress the remarkable differences (often overlooked) between decoding algorithms constructed with the real population-response probabilities and mismatched decoders (Oizumi et al., 2010).

Most importantly, we determined when and why information is lost by NI decoders on a single-response basis: information losses occur because some population responses that are informative are transformed in such a way that their distinction is unavailable for subsequent stages. To identify which distinctions are informative and which ones constitute noise, we do not rely on previous definitions of noise as mere variations around a mean (Oram et al., 1998; Sompolinsky et al., 2001; Averbeck et al., 2006). Instead, our definition of noise explicitly evaluates the role of response variations in information transmission. Certainly, some variations around the mean are essential to information transmission even in the absence of noise correlations (Fig. 2B).

By analyzing the importance of noise correlations on a single-response basis, we found that, in broad terms, their importance depends on the relation between the number of stimuli (K) and the number of neurons (N) and that, in general, noise correlations are likely to be irrelevant. This approximate picture may explain why previous studies using different values of N and K often differed in the relevance ascribed to noise correlations. Moreover, it may aid the design of future experiments whose outcomes depend on the importance of noise correlations. To get an accurate assessment of the importance of noise correlations in each individual case, however, one should rely on ΔI_NI^NIL and not on the approximate argument mentioned above.
The role of noise correlations in neural decoding, however, cannot be completely characterized with quantities solely based on mutual information, because mutual information takes into account neither the temporal constraints of real neural systems nor the behavioral meaning of stimuli (Nirenberg and Latham, 2003). When operating on single responses, higher or lower decoded information may not be reflected directly in the efficiency of a decoder (Thomson and Kristan, 2005). Here, we proposed to additionally assess the role of noise correlations using quantities based on the minimum decoding error. This quantity exhibits many similarities with mutual information and, in addition, can handle short time windows and reflect the biological cost of making specific decoding errors.

Our results contrast with previous studies in three major points. First, we assess the role of correlations using ΔI_NI^NIL and Δε_NI^NIL without explicitly displaying the best NI decoders (i.e., NI decoders that minimize ΔI_NI^NIL or Δε_NI^NIL). Our formulation has both benefits and limitations. The benefit is that we can draw conclusions about the importance of correlations with minimal computational cost. The limitation is that, if we do actually need to use a decoder, we have no explicit formula describing the best ones. To our knowledge, an explicit formula only exists for the NI decoder that minimizes the decoding error (Eq. 31), provided that correlations are known. Nevertheless, one should remember that minimizing the decoding error does not translate into maximizing the decoded information (Treves, 1997; Thomson and Kristan, 2005). In general, the best NI decoders must be found by searching among all possible NI decoders. To aid the search, our analysis provides insight into which distinctions between population responses must be preserved throughout the decoding process to achieve optimality.

Second, previous studies have argued that decoding strategies that ignore noise correlations are simpler than those taking noise correlations into account (Nirenberg et al., 2001; Wu et al., 2001; Nirenberg and Latham, 2003; Latham and Nirenberg, 2005; Averbeck and Lee, 2006; Averbeck et al., 2006; Ince et al., 2010; Oizumi et al., 2010). Even though the NI assumption simplifies the probabilistic encoding model, optimal NI decoders may require more complex estimation algorithms than those used in decoders constructed without the NI assumption. Other estimation algorithms may be simpler but less efficient (Fig. 7).

Third, our results do not directly support any qualitative claim about the nature of the decoded information (Nirenberg et al., 2001). The amount of information extracted by NI decoders may paradoxically depend on the noise correlations in the population response. For example, in Figure 7, the amount of information extracted by the optimal NI decoder does depend on the amount of correlation in the neural response. The extracted information may even exceed the sum of the informations encoded individually by each neuron (Schneidman et al., 2003), regardless of whether or not surrogate NI population responses occur in the real data (Fig. 8). The solution to this paradox goes beyond quantitative arguments and requires a comparison of what sort of information (stimulus features) is individually encoded by each neuron with that extracted by NI decoders (Eyherabide and Samengo, 2010, and references therein).

To conclude, our work provides a rigorous framework for understanding, both quantitatively and qualitatively, the role of noise correlations in neural decoding. The quantities defined here allow one to quantify exactly the trade-off between the complexity and optimality of NI decoders, either in natural situations or under artificial conditions (i.e., using long sequences of responses). This assessment is fundamental for the development of computational algorithms, brain-machine interfaces, and neuroprosthetics. Our description provides the basis for understanding how the NI assumption (or any other assumption during the decoding process) affects the amount and type of decoded information, establishing for the first time a link between probabilistic decoding models and the neural code. The framework is general enough to analyze the importance of noise correlations not only between neurons in neural populations, but also between neural populations in different cortical areas or, more recently, between cortical areas in different brains (Hari and Kujala, 2009; Babiloni and Astolfi, 2012).

Notes
Supplemental material for this article is available at http://eyherabidehg.com.ar/. This material includes additional examples and demonstrations and the codes for making the figures. This material has not been peer reviewed.

References
Averbeck BB, Lee D (2006) Effects of noise correlations on information encoding and decoding. J Neurophysiol 95:3633–3644.
Averbeck BB, Latham PE, Pouget A (2006) Neural correlations, population coding and computation. Nat Rev Neurosci 7:358–366.
Babiloni F, Astolfi L (2012) Social neuroscience and hyperscanning techniques: past, present and future. Neurosci Biobehav Rev. Advance online publication. doi:10.1016/j.neubiorev.2012.07.006.
Berens P, Ecker AS, Cotton RJ, Ma WJ, Bethge M, Tolias AS (2012) A fast and simple population code for orientation in primate V1. J Neurosci 32:10618–10626.
Bishop CM (2006) Pattern recognition and machine learning. New York: Springer.
Borst A, Theunissen FE (1999) Information theory and neural coding. Nat Neurosci 2:947–957.
Cover TM, Thomas JA (1991) Elements of information theory. New York: Wiley-Interscience.
Duda RO, Hart PE, Stork DG (2000) Pattern classification, Ed 2. New York: Wiley.
Eyherabide HG, Samengo I (2010) Time and category information in pattern-based codes. Front Comput Neurosci 4:145.
Gawne TJ, Richmond BJ (1993) How independent are the messages carried by adjacent inferior temporal cortical neurons? J Neurosci 13:2758–2771.
Graf AB, Kohn A, Jazayeri M, Movshon JA (2011) Decoding the activity of neuronal populations in macaque primary visual cortex. Nat Neurosci 14:239–245.
Hari R, Kujala MV (2009) Brain basis of human social interaction: from concepts to brain imaging. Physiol Rev 89:453–479.
Hastie T, Tibshirani R, Friedman J (2009) The elements of statistical learning: data mining, inference and prediction, Ed 2. New York: Springer.
Huang X, Lisberger SG (2009) Noise correlations in cortical area MT and their potential impact on trial-by-trial variation in the direction and speed of smooth-pursuit eye movements. J Neurophysiol 101:3012–3030.
Ince RA, Senatore R, Arabzadeh E, Montani F, Diamond ME, Panzeri S (2010) Information-theoretic methods for studying population codes. Neural Netw 23:713–727.
Klam F, Zemel RS, Pouget A (2008) Population coding with motion energy filters: the impact of correlations. Neural Comput 20:146–175.
Knill DC, Pouget A (2004) The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosci 27:712–719.
Kohn A, Smith MA (2005) Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. J Neurosci 25:3661–3673.
Latham PE, Nirenberg S (2005) Synergy, redundancy, and independence in population codes, revisited. J Neurosci 25:5195–5206.
Montemurro MA, Senatore R, Panzeri S (2007) Tight data-robust bounds to mutual information combining shuffling and model selection tech-
… populations: information theory and decoding approaches. Nat Rev Neu-
niques. Neural Comput 19:29132957. CrossRef Medline rosci 10:173185. CrossRef Medline
Nirenberg S, Latham PE (2003) Decoding neuronal spike trains: How im- Schneidman E, Bialek W, Berry MJ 2nd (2003) Synergy, redundancy, and
portant are correlations. Proc Natl Acad Sci U S A 100:7348 7353. independence in population codes. J Neurosci 23:11539 11553. Medline
CrossRef Medline Shannon CE (1948) A mathematical theory of communication. The Bell
Nirenberg S, Carcieri SM, Jacobs AL, Latham PE (2001) Retinal ganglion System Technical Journal 27:379 423. CrossRef
cells act largely as independent decoders. Nature 411:698 701. CrossRef Simoncelli EP (2009) Optimal estimation in sensory systems. In: The cog-
Medline nitive neurosciences, Ed 4 (Gazzaniga MS), pp 525538. Cambridge, MA:
Oizumi M, Ishii T, Ishibashi K, Hosoya T, Okada M (2010) Mismatched MIT.
decoding in the brain. J Neurosci 30:4815 4826. CrossRef Medline Sompolinsky H, Yoon H, Kang K, Shamir M (2001) Population coding in
Oram MW, Foldiak P, Perrett DI, Sengpiel F (1998) The ideal homuncu- neuronal systems with correlated noise. Phys Rev E Stat Nonlin Soft Mat-
lus: decoding neural population signals. Trends Neurosci 21:259 265. ter Phys 64:051904. CrossRef Medline
CrossRef Medline Tao T (2011) An introduction to measure theory. Providence, RI: American
Panzeri S, Senatore R, Montemurro MA, Petersen RS (2007) Correcting for Mathematical Society.
the sampling bias problem in spike train information measures. J Neuro- Thomson EE, Kristan WB (2005) Quantifying stimulus discriminability: A
physiol 98:1064 1072. CrossRef Medline comparison of information theory and ideal observer analysis. Neural
Panzeri S, Brunel N, Logothetis NK, Kayser C (2010) Sensory neural codes Comput 17:741778. CrossRef Medline
using multiplexed temporal scales. Trends Neurosci 33:111120. Treves A (1997) On the perceptual structure of face space. BioSystems 40:
CrossRef Medline 189 196. CrossRef Medline
Pita-Almenar JD, Ranganathan GN, Koester HJ (2011) Impact of cortical Victor JD, Nirenberg S (2008) Indices for testing neural codes. Neural Com-
plasticity on information signaled by populations of neurons in the cere- put 20:28952936. CrossRef Medline
bral cortex. J Neurophysiol 106:1118 1124. CrossRef Medline Wu S, Nakahara H, Amari S (2001) Population coding with correlation and
Quian Quiroga R, Panzeri S (2009) Extracting information from neuronal an unfaithful model. Neural Comput 13:775797. CrossRef Medline
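As an illustration of the decoding paradox discussed in the text, the following toy sketch (our own construction, not part of the original study; all distributions are invented) computes the information cost of ignoring noise correlations, in the sense of Nirenberg and Latham (2003), for a two-neuron binary code in which the stimulus is carried entirely by the correlation structure. In this deliberately extreme case the NI decoder loses all of the encoded information; the article's central point is that in many other situations this cost vanishes.

```python
# Toy example (assumed, not from the paper): information cost of the
# noise-independence (NI) assumption for a two-neuron, two-stimulus code.
from itertools import product
from math import log2

# P(r | s) for r = (r1, r2); the two stimuli are equally likely.
# Responses are correlated for s=0 and anticorrelated for s=1, so every
# single-neuron marginal is identical across stimuli: the stimulus is
# encoded only in the noise correlations.
p_r_given_s = {
    0: {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4},
    1: {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.4, (1, 1): 0.1},
}
p_s = {0: 0.5, 1: 0.5}
responses = list(product((0, 1), repeat=2))
stimuli = list(p_s)

def posterior(cond):
    """Bayes posterior P(s | r) induced by a response model P(r | s)."""
    post = {}
    for r in responses:
        z = sum(cond[s][r] * p_s[s] for s in stimuli)
        post[r] = {s: cond[s][r] * p_s[s] / z for s in stimuli}
    return post

def marginal(s, i, v):
    """Single-neuron marginal P(r_i = v | s)."""
    return sum(p for r, p in p_r_given_s[s].items() if r[i] == v)

# NI model: factorize the response distribution over neurons.
p_ni = {s: {r: marginal(s, 0, r[0]) * marginal(s, 1, r[1]) for r in responses}
        for s in stimuli}

post_true, post_ni = posterior(p_r_given_s), posterior(p_ni)

# DeltaI = sum_{s,r} P(s, r) log2[ P(s|r) / P_NI(s|r) ]  (Nirenberg-Latham cost)
delta_i = sum(p_s[s] * p_r_given_s[s][r] * log2(post_true[r][s] / post_ni[r][s])
              for s in stimuli for r in responses)

# Encoded information I(S; R), for comparison with the cost above.
i_sr = sum(p_s[s] * p_r_given_s[s][r] * log2(post_true[r][s] / p_s[s])
           for s in stimuli for r in responses)

print(f"I(S;R) = {i_sr:.3f} bits, DeltaI = {delta_i:.3f} bits")
```

Here the NI marginals equal 0.5 for every neuron and stimulus, so the NI posterior is flat and DeltaI equals the full encoded information (about 0.278 bits): nothing can be decoded without knowledge of the correlations. Perturbing the conditional distributions so that the marginals differ across stimuli drives DeltaI toward zero while I(S;R) stays positive.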