Brain Machine Interfaces: Modeling Strategies For Neural Signal Processing
Decoding: intent → action
Coding: stimulus → percept
The brain is an extremely complex system:
~10^12 neurons
~10^15 synapses
Specific interconnectivity
Tapping into the Nervous System
http://ida.first.fhg.de/projects/bci/bbci_official/
Choice of Scale for Neuroprosthetics
Bandwidth (approximate) and localization at each recording scale:
Scalp electrodes: ~0 – 80 Hz; localization limited by volume conduction
Cortical surface electrodes (ECoG)
[Moran]
Florida Multiscale Signal Acquisition
Develop an experimental paradigm with a nested hierarchy for studying neural population dynamics.
Least invasive: EEG
NRG IRB approval for human ECoG studies
NRG IACUC approval for animal microelectrode studies
Highest resolution: microelectrodes
Common BMI-BCI Methods
We need to abstract away the details of the "wetware" and ask what the purpose of the function is, then quantify it in mathematical terms.
Data for these models are rate codes obtained by binning spikes in 100 msec windows.
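For concreteness, here is a minimal sketch of this binning step in Python (NumPy); the function name and the two-neuron example are illustrative, not taken from the original work:

```python
import numpy as np

def bin_spikes(spike_times, n_neurons, duration, bin_width=0.1):
    """Convert per-neuron spike times (in seconds) into binned rate codes.

    Returns an array of shape (n_bins, n_neurons) holding the spike count
    of each neuron in each 100 ms window, the model input described above.
    """
    n_bins = int(np.ceil(duration / bin_width))
    counts = np.zeros((n_bins, n_neurons))
    edges = np.arange(n_bins + 1) * bin_width
    for i, times in enumerate(spike_times):
        counts[:, i], _ = np.histogram(times, bins=edges)
    return counts

# Hypothetical two-neuron recording lasting 0.3 s
counts = bin_spikes([[0.05, 0.07, 0.25], [0.12]], n_neurons=2, duration=0.3)
```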
A linear model estimating hand position at time instance n from the embedded spike counts can be described as

y_c(n) = Σ_{i=1}^{M} Σ_{j=0}^{L−1} w_{ji}^c x_i(n − j) + b_c

where y_c is the c-coordinate of the hand position estimated by the model, w_{ji}^c is a weight on the connection from x_i(n−j) to y_c, and b_c is a bias for the c-coordinate.
Linear Model (Wiener-Hopf solution)
In matrix form, we can rewrite the previous equation as

y = W^T x

where y is a C-dimensional output vector and W is a weight matrix of dimension (LM+1) × C. Each column of W consists of [w_{10}^c, w_{11}^c, w_{12}^c, …, w_{1,L−1}^c, w_{20}^c, w_{21}^c, …, w_{M,0}^c, …, w_{M,L−1}^c]^T.
[Figure: tapped-delay-line topology — each neuronal input x_1(n) … x_M(n) passes through a chain of z^{-1} delays, and the weighted taps are combined into the outputs y_x(n), y_y(n), y_z(n).]
Linear Model (Wiener-Hopf solution)
For the MIMO case, the weight matrix in the Wiener filter system is estimated by

W_Wiener = R^{-1} P

R is the correlation matrix of neural spike inputs with dimension (LM) × (LM):

R = [ r_{11} r_{12} … r_{1M} ;  r_{21} r_{22} … r_{2M} ;  … ;  r_{M1} r_{M2} … r_{MM} ]

P = [ p_{11} … p_{1C} ;  p_{21} … p_{2C} ;  … ;  p_{M1} … p_{MC} ]

where r_ij is the L × L cross-correlation matrix between neurons i and j (i ≠ j), and r_ii is the L × L autocorrelation matrix of neuron i.
P is the (LM) × C cross-correlation matrix between the neuronal bin counts and hand position, where p_ic is the cross-correlation vector between neuron i and the c-coordinate of hand position. The estimated weights W_Wiener are optimal under the assumptions that the error is drawn from a white Gaussian distribution and the data are stationary.
Linear Model (Wiener-Hopf solution)
The predictor W_Wiener minimizes the mean square error (MSE) cost function

J = E[|e|^2],   e = d − y

Each sub-block matrix r_ij can be further decomposed into lagged correlations,

r_ij = [ r_ij(m − k) ],   m, k = 0, 1, …, L − 1   (e.g. the last row is r_ij(1 − L), r_ij(2 − L), …, r_ij(0))

where r_ij(τ) represents the correlation between neurons i and j at time lag τ. Assuming that the random process x_i(k) is ergodic for all i, we can use the time-average operator to estimate the correlation function. In this case, the estimate of the correlation between two neurons, r_ij(m − k), can be obtained by

r_ij(m − k) = E[x_i(m) x_j(k)] ≈ (1/(N − 1)) Σ_{n=1}^{N} x_i(n − m) x_j(n − k)
Linear Model (Wiener-Hopf solution)
The cross-correlation vector p_ic can be decomposed and estimated in the same way, substituting x_j with the desired signal c_j:

p_ij(m − k) = E[x_i(m) c_j(k)] ≈ (1/(N − 1)) Σ_{n=1}^{N} x_i(n − m) c_j(n − k)

From these equations it can be seen that r_ij(m − k) is equal to r_ji(k − m). Since these two correlation estimates sit on opposite sides of the diagonal entries of R, the equality makes R symmetric.
The symmetric matrix R can then be inverted efficiently using the Cholesky factorization, which requires roughly N^3/3 operations versus about 2N^3/3 for Gaussian elimination, where N is the number of parameters.
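The Wiener solution W = R^{-1} P with a Cholesky-based solve can be sketched as follows; the synthetic data and the small diagonal regularizer are my own assumptions, not part of the original design:

```python
import numpy as np

def wiener_weights(X, d):
    """Solve W = R^{-1} P via the Cholesky factorization.

    X: (N, LM) matrix of lag-embedded bin counts; d: (N, C) hand positions.
    A sketch of the normal-equation solve described above; variable names
    are illustrative.
    """
    N = X.shape[0]
    R = X.T @ X / (N - 1)   # input correlation matrix, (LM) x (LM)
    P = X.T @ d / (N - 1)   # input/desired cross-correlation, (LM) x C
    # small ridge term keeps R positive definite for the factorization
    L = np.linalg.cholesky(R + 1e-8 * np.eye(R.shape[0]))
    # two triangular solves replace an explicit matrix inverse
    W = np.linalg.solve(L.T, np.linalg.solve(L, P))
    return W

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(500, 12)).astype(float)   # fake bin counts
W_true = rng.normal(size=(12, 3))
d = X @ W_true                                       # noiseless desired signal
W = wiener_weights(X, d)
```

On noiseless data the recovered weights match the generating weights, which is a quick sanity check of the solve.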
Optimal Linear Model
Recurrent Multilayer Perceptron (RMLP) – Nonlinear "Black Box"
Spatially recurrent dynamical system.
Memory is created by feeding back the states of the hidden PEs.
Feedback allows for continuous representations on multiple timescales.
If unfolded into a TDNN, it can be shown to be a universal mapper in R^n.
Trained with backpropagation through time.

y_1(t) = f(W_1 x(t) + W_f y_1(t − 1) + b_1)
y_2(t) = W_2 y_1(t) + b_2
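A minimal forward pass of the two equations above, assuming f = tanh and randomly drawn weights purely for illustration:

```python
import numpy as np

def rmlp_forward(x_seq, W1, Wf, b1, W2, b2):
    """Forward pass of the RMLP defined by the two equations above.

    x_seq: (T, n_in) input sequence. The hidden state y1 is fed back
    through Wf, which is what gives the network memory across time.
    """
    y1 = np.zeros(b1.shape[0])
    outputs = []
    for x in x_seq:
        # y1(t) = f(W1 x(t) + Wf y1(t-1) + b1), with f = tanh (assumed)
        y1 = np.tanh(W1 @ x + Wf @ y1 + b1)
        # y2(t) = W2 y1(t) + b2: linear output layer
        outputs.append(W2 @ y1 + b2)
    return np.array(outputs)

rng = np.random.default_rng(1)
T, n_in, n_h, n_out = 5, 4, 3, 2
out = rmlp_forward(rng.normal(size=(T, n_in)),
                   rng.normal(size=(n_h, n_in)), rng.normal(size=(n_h, n_h)),
                   rng.normal(size=n_h), rng.normal(size=(n_out, n_h)),
                   rng.normal(size=n_out))
```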
Motor Tasks Performed
Data: Task 1; cortical areas recorded: M1, PMd, S1, SMA. Neural data and behavior are time-synchronized and downsampled to 10 Hz.
[Figure: example hand trajectory.]
Model Building Techniques
Train the adaptive system with neuronal firing rates (100 msec bins) as the input and hand position as the desired signal.
Training: 20,000 samples (~33 minutes of neuronal firing).
Freeze the weights and present novel neuronal data.
Testing: 3,000 samples (~5 minutes of neuronal firing).
Results (Belle)
Based on 5 minutes of test data, computed over 4 sec
windows (training on 30 minutes)
Data Analysis: The Effect of Sensitive Neurons on Performance
[Figure, Primate 1, Session 1: sensitivity distribution across the 104 recorded neurons; test-trajectory movements (hits) reconstructed using all neurons, the 10 highest-sensitivity neurons, intermediate-sensitivity neurons, the 10 lowest-sensitivity neurons, and the 5 lowest-sensitivity neurons; performance summarized as cumulative probability versus 3D error radius (mm).]
Tuning Sensitivity
[Figure: hand movement trajectory.]
How does each cortical area contribute to the reconstruction of this movement?
Cortical Contributions Belle Day 2
[Figure: movement reconstructions using single cortical areas and pairs of areas (Areas 1+3, 1+4, 2+3, 2+4, 3+4), where Area 1 = PP, Area 2 = M1, Area 3 = PMd, Area 4 = M1 (right).]
[Figure: multiple-input filter topology — each input x_i(n) feeds a tapped delay line with weights w_{i1} … w_{iL}, and the per-neuron outputs y_1(n) … y_M(n) are combined with coefficients c_1 … c_M to form the estimate d̂(n).]
[Algorithm sketch (greedy subset selection): initialize the residual r = y − Xw; at each iteration find the input k = argmax_k |x_k^T r| most correlated with the residual, adjust the corresponding weights, and update r.]
Application to BMI Data – Tracking Performance
Application to BMI Data – Neuronal Subset Selection
[Figure: hand trajectory (z) and the neuronal channel indices selected during the early and late parts of the movement.]
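The correlation-driven greedy selection described above can be written in matching-pursuit style roughly as follows; this is a generic sketch, not the slides' exact algorithm, and all names are illustrative:

```python
import numpy as np

def select_neurons(X, y, n_select):
    """Greedy correlation-based channel selection.

    Repeatedly picks the column of X (a lag-embedded neuronal channel)
    most correlated with the current residual, refits the chosen subset
    by least squares, and updates the residual r = y - Xw.
    """
    residual = y.copy()
    chosen = []
    for _ in range(n_select):
        corr = np.abs(X.T @ residual)   # |x_k^T r| for every channel k
        corr[chosen] = -np.inf          # never re-pick a channel
        chosen.append(int(np.argmax(corr)))
        w, *_ = np.linalg.lstsq(X[:, chosen], y, rcond=None)
        residual = y - X[:, chosen] @ w
    return chosen

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))
# synthetic target driven by channels 4 and 11 only
y = 3.0 * X[:, 4] - 2.0 * X[:, 11] + 0.01 * rng.normal(size=200)
picked = select_neurons(X, y, n_select=2)
```

On this synthetic example the procedure recovers exactly the two informative channels.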
Generative Models for BMIs
State-space representation:

X_t = F_t(X_{t−1}, v_{t−1})   (state model)
Z_t = H_t(X_t, n_t)   (continuous time-series observation model)

Prediction → P(state | observation) → Updating
Recursive Bayesian approach
State-space representation:

x_{t+1} = f(x_t) + v_t
z_t = h_t(u_t, x_t) + n_t

The first equation (system model) defines a first-order Markov process. The second equation (observation model) defines the likelihood of the observations, p(z_t | x_t). The problem is completely defined by the prior distribution p(x_0).
Although the posterior distribution p(x_{0:t} | u_{1:t}, z_{1:t}) constitutes the complete solution, the filtering density p(x_t | u_{1:t}, z_{1:t}) is normally used for on-line problems.
The general solution methodology is to integrate over the unknown variables (marginalization).
Recursive Bayesian approach
The system model p(x_t | x_{t−1}) propagates the posterior density into the future (prediction), and Bayes' rule then updates the filtering density with the new observation:

p(x_t | u_{1:t}, z_{1:t}) = p(z_t | x_t, u_t) p(x_t | u_{1:t−1}, z_{1:t−1}) / p(z_t | u_{1:t}, z_{1:t−1})

For Gaussian noises and linear prediction and observation models, there is an analytic solution called the Kalman filter.
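For the linear-Gaussian case, one predict/update cycle of the Kalman filter looks like the following; the matrix names (A, C, Q, R) and the 1-D constant-velocity example are conventional choices, not taken from the slides:

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of the Kalman filter.

    x, P: posterior mean and covariance of the kinematic state;
    z: current observation (e.g. binned firing rates); A, C: state and
    observation matrices; Q, R: process and observation noise covariances.
    """
    # Prediction: propagate the posterior through the system model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the new observation (Bayes' rule)
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Hypothetical 1-D position/velocity state observed through a linear map
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([0.5]), A, C, 0.01 * np.eye(2), np.array([[0.1]]))
```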
Kalman filter: continuous observations. A linear neuron tuning function maps the kinematic state to firing rates; with a linear state model and Gaussian noise, the Prediction → P(state | observation) → Updating recursion has the closed-form Kalman solution.
With an exponential tuning function, the continuous observation model becomes linear-exponential and non-Gaussian; the same Prediction → P(state | observation) → Updating recursion applies.
[Figure: spike occurrences overlaid on the velocity trace.]
Decoding

z_k = H_k(x_k, n_k)   (tuning function)

Key idea: work with the probability of spike firing, which is a continuous random variable.
Adaptive algorithm for point processes
Poisson model: a nonlinear tuning function maps the kinematic state to the spike train, giving a point-process observation model on top of a linear Gaussian state model (Prediction → P(state | observation) → Updating).
In the general case both models are relaxed: a nonlinear, non-Gaussian state model with a point-process observation model (Prediction → P(state | observation) → Updating).
STEP 1. Preprocessing
1. Generate spike trains from the stored spike times at 10 ms intervals (99.62% binary train).
2. Synchronize all the kinematics with the spike trains.
3. Assign the kinematic vector x to reconstruct: X = [position velocity acceleration]'.
To which direction angles is each neuron tuned? Tuning depth: D = (max − min)/std of the firing rate.
[Figure: polar plot of firing rate versus direction angle.]
Information between neural spikes and kinematics: the mutual information between the spike train and the direction angle is

I(spike; angle) = Σ_angle p(angle) Σ_{spike∈{0,1}} p(spike | angle) log2( p(spike | angle) / p(spike) )

where p(spike = 1 | angle) = p(spike = 1, angle) / p(angle).
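A plug-in histogram estimator of I(spike; angle) might look like the following sketch; the number of angle bins and the cosine-tuned test neuron are illustrative assumptions:

```python
import numpy as np

def spike_angle_mi(spikes, angles, n_bins=8):
    """Estimate I(spike; angle) in bits from a binary spike train and
    simultaneous direction angles, using the formula above with
    histogram (plug-in) probability estimates.
    """
    bins = np.digitize(angles, np.linspace(0, 2 * np.pi, n_bins + 1)[1:-1])
    p_spike = np.array([1 - spikes.mean(), spikes.mean()])
    mi = 0.0
    for b in range(n_bins):
        mask = bins == b
        p_angle = mask.mean()
        if p_angle == 0:
            continue
        p1 = spikes[mask].mean()   # p(spike=1 | angle bin)
        for s, p_s_given_a in ((0, 1 - p1), (1, p1)):
            if p_s_given_a > 0:
                mi += p_angle * p_s_given_a * np.log2(p_s_given_a / p_spike[s])
    return mi

rng = np.random.default_rng(3)
angles = rng.uniform(0, 2 * np.pi, 20000)
# hypothetical cosine-tuned neuron: fires more often near angle 0
spikes = (rng.uniform(size=angles.size) < 0.05 * (1 + np.cos(angles))).astype(float)
untuned = (rng.uniform(size=angles.size) < 0.05).astype(float)
```

A tuned neuron yields a clearly larger estimate than an untuned one, which is how the tuning depth ranking above is obtained.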
Step 2: Information-theoretic tuning depths for 3 kinds of kinematics (log axis)
Step 2: Tuning Function Estimation
Assumption:

λ_t = f(k · v_t),   spike_t ~ Poisson(λ_t)
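Under this assumption, a synthetic spike train can be generated as follows; the velocity signal and tuning parameter are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
dt = 0.01                                      # 10 ms bins, as in Step 1
v = np.sin(np.linspace(0, 4 * np.pi, 1000))    # hypothetical 1-D velocity
k = 1.5                                        # assumed tuning parameter
lam = np.exp(k * v)                            # lambda_t = f(k . v_t), exponential f
spikes = rng.poisson(lam * dt)                 # spike_t ~ Poisson(lambda_t * dt)
```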
[Figure: data projected onto the 1st and 2nd principal components.]
p(x_k | H_k) = ∫ p(x_k | x_{k−1}, H_k) p(x_{k−1} | N_{k−1}, H_{k−1}) dx_{k−1}

[Figure: predicted probability density over velocity.]
Step 3: Causality concerns
The mutual information between the spike train and the kinematic signal KX at a given lag is

I(spike; KX)(lag) = Σ_X p(KX(lag)) Σ_{spike∈{0,1}} p(spike | KX(lag)) log2( p(spike | KX(lag)) / p(spike) )
Step 3: Information-Estimated Delays
[Figure: I(spk, KX) as a function of time delay (0 – 500 ms) for neurons 80, 72, 99, 108, and 77.]
λ_t^i = f(k_i · X_t)

Prediction → P(state | observation) → Updating:

p(x_{0:t} | N_{1:t}^{(j)}) ≈ Σ_{i=1}^{N} w_t^i k(x_{0:t} − x_{0:t}^i)

w_t^i ∝ w_{t−1}^i p(N_t^{(j)} | λ_t^i)

p(x_k | N_{1:k}) ≈ Σ_i W_k^i k(x_k − x_k^i)
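A bootstrap particle filter implementing these weight updates with Poisson spike likelihoods, under simplified 1-D assumptions of my own (random-walk state model, exponential tuning), could look like:

```python
import numpy as np

def particle_filter_step(particles, weights, spikes, k, f, q=0.05):
    """One step of a bootstrap particle filter with Poisson spike counts.

    particles: (N,) samples of a 1-D kinematic state; spikes: observed
    counts for each of len(k) neurons in this bin; k: per-neuron tuning
    parameters; f: tuning nonlinearity giving lambda = f(k * x).
    """
    rng = np.random.default_rng(4)
    # Prediction: propagate each particle through the random-walk state model
    particles = particles + rng.normal(0.0, np.sqrt(q), size=particles.shape)
    # Update: reweight by the Poisson log-likelihood of the observed counts
    lam = f(np.outer(particles, k))              # (N_particles, N_neurons)
    log_like = (spikes * np.log(lam) - lam).sum(axis=1)
    weights = weights * np.exp(log_like - log_like.max())
    weights /= weights.sum()
    # Posterior estimate is the weighted particle mean
    return particles, weights, np.sum(weights * particles)

N = 2000
particles = np.random.default_rng(5).normal(1.0, 0.5, N)
weights = np.ones(N) / N
k = np.array([1.0, 2.0, -1.0])
f = lambda u: np.exp(u)                          # exponential tuning (assumed)
particles, weights, x_hat = particle_filter_step(
    particles, weights, np.array([3, 7, 0]), k, f)
```

Subtracting the maximum log-likelihood before exponentiating keeps the reweighting numerically stable when spikes are informative.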
Reconstruct the kinematics from the neuron spike trains.
[Figure: desired vs. reconstructed kinematics (t = 650 – 800); correlation coefficients per panel:]

      cc (exp)   cc (MLE)
Px    0.80243    0.67264
Py    0.97445    0.95376
Vx    0.81539    0.8151
Vy    0.91319    0.91162
Ax    0.015071   0.040027
Ay    0.7002     0.69188
Results comparison
Table 3-2 Correlation Coefficients between the Desired Kinematics and the Reconstructions

CC    Position (x, y)    Velocity (x, y)    Acceleration (x, y)

[Sanchez, 2004]
Conclusion
Our results, and those from other laboratories, show that it is possible to extract the intent of movement trajectories from multielectrode array data.
The current results are very promising, but the experimental setups are of limited difficulty, and performance seems to have reached a ceiling at an uncomfortable CC < 0.9.
Recently, spike-based methods have been developed in the hope of improving performance, but these models face many difficulties.
Experimental paradigms that move the field beyond the present level need to address:
Training (no desired response is available from a paraplegic patient)
How to cope with coarse sampling of the neural population
How to include more neurophysiological knowledge in the design