
Brain Machine Interfaces: Modeling Strategies for Neural Signal Processing


Jose C. Principe, Ph.D.
Distinguished Professor ECE, BME

Computational NeuroEngineering Laboratory


Electrical and Computer Engineering Department
University of Florida
www.cnel.ufl.edu
principe@cnel.ufl.edu
Brain Machine Interfaces (BMI)

A man-made device that either substitutes a sensory input to the brain, repairs functional communication between brain regions, or translates intention of movement.
Types of BMIs
Sensory (Input BMI): Providing sensory input to form percepts when natural systems are damaged.
Ex: Visual, Auditory Prosthesis

Motor (Output BMI): Converting motor intent to a command output (physical device, damaged limbs).
Ex: Prosthetic Arm Control

Cognitive BMI: Interpreting internal neuronal state to deliver feedback to the neural population.
Ex: Epilepsy, DBS Prosthesis

Computational Neuroscience and Technology developments are playing a larger role in the development of each of these areas.
General Architecture
BCI (BMI) bypasses the brain’s normal pathways of peripheral nerves (and muscles)

J.R. Wolpaw et al. 2002


The Fundamental Concept

[Diagram: BRAIN <-> MACHINE across a Neural Interface and a Physical Interface. Decoding translates INTENT into ACTION; Coding translates STIMULUS into PERCEPT.]

Need to understand how the brain processes information.

            Stimulus         Neural Response
Coding      Given            To be inferred
Decoding    To be inferred   Given
Levels of Abstraction for Neurotechnology

The brain is an extremely complex system:
~10^12 neurons
~10^15 synapses
Specific interconnectivity
Tapping into the Nervous System

The choice and availability of brain signals and recording methods can greatly influence the ultimate performance of the BMI.

The level of BMI performance may be attributed to the selection of electrode technology, choice of model, and methods for extracting rate, frequency, or timing codes.
[Figure: recording scales, from coarse (mm) to fine; source: http://ida.first.fhg.de/projects/bci/bbci_official/]
Choice of Scale for Neuroprosthetics

Recording                   Bandwidth (approximate)   Localization
Scalp Electrodes            0 ~ 80 Hz                 Volume Conduction
Electrocorticogram (ECoG)   0 ~ 500 Hz                Cortical Surface
Implanted Electrodes        0 ~ 7 kHz                 Single Neuron
Spatial Resolution of Recordings

[Figure after Moran]
Florida Multiscale Signal Acquisition
Develop an experimental paradigm with a nested hierarchy for studying neural population dynamics.

EEG – least invasive
ECoG – NRG IRB approval for human studies
Microelectrodes – highest resolution; NRG IACUC approval for animal studies
Common BMI-BCI Methods

BMIs --- Invasive, work with intention of movement
• Spike trains, field potentials, ECoG
• Very specific, potentially better performance

BCIs --- Noninvasive, subjects must learn how to control their brain activity
• EEG
• Very small bandwidth
Computational NeuroScience
Integration of probabilistic models of information processing with the neurophysiological reality of brain anatomy, physiology and purpose.

Need to abstract the details of the “wetware” and ask what the purpose of the function is, then quantify it in mathematical terms.

Difficult but very promising. One issue is that biological evolution is a legacy system!

BMI research is an example of a computational neuroscience approach.
How to put it together?

NeoCortical Brain Areas Related to Movement

Posterior Parietal (PP) – visual to motor transformation
Premotor (PM) and Dorsal Premotor (PMd) – planning and guidance (visual inputs)
Primary Motor (M1) – initiates muscle contraction

Ensemble Correlations – Local in Time – are Averaged with Global Models
Computational Models of Neural Intent

Two different levels of neurophysiological realism:

Black Box models – no realism; functional relation between input and desired response

Generative Models – minimal realism; state space models using neuroscience elements
Signal Processing Approaches with Black Box Modeling

Accessing 2 types of signals (cortical activity and behavior) leads us to a general class of I/O models.

Data for these models are rate codes obtained by binning spikes in 100 msec windows.

Optimal FIR Filter – linear, feedforward
TDNN – nonlinear, feedforward
Multiple FIR filters – mixture of experts
RMLP – nonlinear, dynamic
Linear Model (Wiener-Hopf solution)
Consider a set of spike counts from M neurons, and a hand position vector $\mathbf{d} \in \mathbb{R}^C$ (C is the output dimension, C = 2 or 3). The spike count of each neuron is embedded by an L-tap discrete time-delay line. Then, the input vector for a linear model at a given time instance n is composed as $\mathbf{x}(n) = [x_1(n), x_1(n-1), \ldots, x_1(n-L+1), x_2(n), \ldots, x_M(n-L+1)]^T$, $\mathbf{x} \in \mathbb{R}^{LM}$, where $x_i(n-j)$ denotes the spike count of neuron i at time instance n-j.

A linear model estimating hand position at time instance n from the embedded spike counts can be described as

$$y^c = \sum_{j=0}^{L-1} \sum_{i=1}^{M} x_i(n-j)\, w_{ji}^c + b^c$$

where $y^c$ is the c-coordinate of the estimated hand position, $w_{ji}^c$ is the weight on the connection from $x_i(n-j)$ to $y^c$, and $b^c$ is a bias for the c-coordinate.
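To make the embedding concrete, here is a minimal numpy sketch of the time-delay construction and the linear prediction. The function names and the Poisson test data are hypothetical illustrations; the neuron-major layout follows the definition of x(n) above.

```python
import numpy as np

def embed_spike_counts(counts, L):
    """Build time-embedded inputs from binned spike counts.

    counts : (T, M) array of spike counts (T time bins, M neurons)
    L      : number of taps in the delay line
    Returns X of shape (T - L + 1, L * M), where row n stacks
    x_1(n), x_1(n-1), ..., x_1(n-L+1), x_2(n), ..., x_M(n-L+1).
    """
    T, M = counts.shape
    X = np.zeros((T - L + 1, L * M))
    for n in range(L - 1, T):
        lags = counts[n - L + 1 : n + 1][::-1]   # (L, M), most recent first
        X[n - L + 1] = lags.T.reshape(-1)        # neuron-major layout
    return X

def linear_predict(X, W, b):
    """y(n) = W^T x(n) + b for each embedded input row."""
    return X @ W + b

# Example: 105 neurons, 10 taps -> 1,050 inputs per output coordinate
counts = np.random.poisson(1.0, size=(1000, 105))
X = embed_spike_counts(counts, L=10)
W = np.zeros((1050, 3))                          # 3-D hand position
b = np.zeros(3)
y = linear_predict(X, W, b)
```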
Linear Model (Wiener-Hopf solution)
In matrix form, we can rewrite the previous equation as

$$\mathbf{y} = \mathbf{W}^T \mathbf{x}$$

where y is a C-dimensional output vector, and W is a weight matrix of dimension (LM+1)×C. Each column of W consists of $[w_{10}^c, w_{11}^c, w_{12}^c, \ldots, w_{1L-1}^c, w_{20}^c, w_{21}^c, \ldots, w_{M0}^c, \ldots, w_{ML-1}^c]^T$.

[Diagram: each neuron's spike count x_1(n) … x_M(n) feeds an L-tap delay line (z^-1 blocks); the weighted sums produce the outputs y_x(n), y_y(n), y_z(n).]
Linear Model (Wiener-Hopf solution)
For the MIMO case, the weight matrix in the Wiener filter system is estimated by

$$\mathbf{W}_{Wiener} = \mathbf{R}^{-1}\mathbf{P}$$

R is the correlation matrix of neural spike inputs with dimension (LM)×(LM):

$$\mathbf{R} = \begin{bmatrix} \mathbf{r}_{11} & \mathbf{r}_{12} & \cdots & \mathbf{r}_{1M} \\ \mathbf{r}_{21} & \mathbf{r}_{22} & \cdots & \mathbf{r}_{2M} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{r}_{M1} & \mathbf{r}_{M2} & \cdots & \mathbf{r}_{MM} \end{bmatrix}, \qquad \mathbf{P} = \begin{bmatrix} \mathbf{p}_{11} & \cdots & \mathbf{p}_{1C} \\ \vdots & \ddots & \vdots \\ \mathbf{p}_{M1} & \cdots & \mathbf{p}_{MC} \end{bmatrix}$$

where $\mathbf{r}_{ij}$ is the L×L cross-correlation matrix between neurons i and j (i ≠ j), and $\mathbf{r}_{ii}$ is the L×L autocorrelation matrix of neuron i.

P is the (LM)×C cross-correlation matrix between the neuronal bin counts and hand position, where $\mathbf{p}_{ic}$ is the cross-correlation vector between neuron i and the c-coordinate of hand position. The estimated weights $\mathbf{W}_{Wiener}$ are optimal under the assumptions that the error is drawn from a white Gaussian distribution and the data are stationary.
Linear Model (Wiener-Hopf solution)
The predictor $\mathbf{W}_{Wiener}$ minimizes the mean square error (MSE) cost function

$$J = E[\|\mathbf{e}\|^2], \qquad \mathbf{e} = \mathbf{d} - \mathbf{y}$$

Each sub-block matrix $\mathbf{r}_{ij}$ can be further decomposed as

$$\mathbf{r}_{ij} = \begin{bmatrix} r_{ij}(0) & r_{ij}(1) & \cdots & r_{ij}(L-1) \\ r_{ij}(-1) & r_{ij}(0) & \cdots & r_{ij}(L-2) \\ \vdots & \vdots & \ddots & \vdots \\ r_{ij}(1-L) & r_{ij}(2-L) & \cdots & r_{ij}(0) \end{bmatrix}$$

where $r_{ij}(\tau)$ represents the correlation between neurons i and j with time lag $\tau$. Assuming that the random process $x_i(k)$ is ergodic for all i, we can utilize the time average operator to estimate the correlation function. In this case, the estimate of the correlation between two neurons, $r_{ij}(m-k)$, can be obtained by

$$\hat r_{ij}(m-k) = E[x_i(m)\, x_j(k)] \approx \frac{1}{N-1} \sum_{n=1}^{N} x_i(n-m)\, x_j(n-k)$$
Linear Model (Wiener-Hopf solution)

The cross-correlation vector $\mathbf{p}_{ic}$ can be decomposed and estimated in the same way, substituting $x_j$ by the desired signal $c_j$:

$$\hat p_{ij}(m-k) = E[x_i(m)\, c_j(k)] \approx \frac{1}{N-1} \sum_{n=1}^{N} x_i(n-m)\, c_j(n-k)$$

From these equations it can be seen that $r_{ij}(m-k)$ is equal to $r_{ji}(k-m)$. Since these two correlation estimates are positioned on opposite sides of the diagonal entries of R, the equality leads to a symmetric R.

The symmetric matrix R can then be inverted efficiently using the Cholesky factorization, which exploits the symmetry to roughly halve the cost of solving for the weights relative to general Gaussian elimination (both scale as O(N^3) in the number of parameters N).
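A minimal numpy/scipy sketch of this normal-equation solution, assuming X holds the embedded spike counts from the earlier construction. The ridge term and the function name are illustrative additions for numerical stability, not part of the original derivation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def wiener_weights(X, d, ridge=1e-3):
    """Wiener-Hopf weights W = (R + ridge*I)^{-1} P.

    X : (N, LM) embedded spike counts, d : (N, C) hand positions.
    The symmetric positive-definite R is inverted via Cholesky.
    """
    N = X.shape[0]
    R = X.T @ X / N                      # (LM, LM) input correlation matrix
    P = X.T @ d / N                      # (LM, C) cross-correlation with d
    c, low = cho_factor(R + ridge * np.eye(R.shape[0]))
    return cho_solve((c, low), P)        # solves (R + ridge*I) W = P

# W = wiener_weights(X, hand_positions)  # X from the embedding sketch above
```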
Optimal Linear Model

Normalized LMS with weight decay is a simple starting point:

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \frac{\eta\, e(n)\, \mathbf{x}(n)}{\|\mathbf{x}(n)\|^2 + \varepsilon}$$

Four multiplies, one divide and two adds per weight update.
A ten-tap embedding with 105 neurons: the 1-D topology contains 1,050 parameters (3,150 for 3-D).

Alternatively, the regularized Wiener solution:

$$\mathbf{w} = (\mathbf{R} + \lambda \mathbf{I})^{-1} \mathbf{p}$$
Time-Delay Neural Network (TDNN)

The first layer is a bank of linear filters followed by a nonlinearity; the number of delays is chosen to span 1 second.

$$y(n) = \sum_j w_j^{(2)}\, f\Big(\sum_i w_{ij}^{(1)}\, x_i(n)\Big)$$

Trained with backpropagation.
The topology contains a ten-tap embedding and five hidden PEs – 5,255 weights (1-D).
Multiple Switching Local Models

Multiple adaptive filters compete to win the modeling of a signal segment.
The structure is trained together with normalized LMS/weight decay.
Needs to be adapted for input-output modeling.
We selected 10 FIR experts of order 10 (105 input channels).
Recurrent Multilayer Perceptron (RMLP) – Nonlinear “Black Box”

Spatially recurrent dynamical systems.
Memory is created by feeding back the states of the hidden PEs.
Feedback allows for continuous representations on multiple timescales.
If unfolded into a TDNN it can be shown to be a universal mapper in $\mathbb{R}^n$.
Trained with backpropagation through time.

$$\mathbf{y}_1(t) = f(\mathbf{W}_1 \mathbf{x}(t) + \mathbf{W}_f \mathbf{y}_1(t-1) + \mathbf{b}_1)$$
$$\mathbf{y}_2(t) = \mathbf{W}_2 \mathbf{y}_1(t) + \mathbf{b}_2$$
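A minimal forward-pass sketch of these two equations, assuming tanh hidden units (the slide leaves f unspecified) and hypothetical shapes.

```python
import numpy as np

def rmlp_forward(X, W1, Wf, b1, W2, b2):
    """Run the RMLP state recursion over an input sequence.

    y1(t) = tanh(W1 x(t) + Wf y1(t-1) + b1)   recurrent hidden state
    y2(t) = W2 y1(t) + b2                     linear output layer
    X : (T, n_in); W1 : (n_hid, n_in); Wf : (n_hid, n_hid); W2 : (n_out, n_hid)
    """
    T = X.shape[0]
    y1 = np.zeros(W1.shape[0])
    Y2 = np.zeros((T, W2.shape[0]))
    for t in range(T):
        y1 = np.tanh(W1 @ X[t] + Wf @ y1 + b1)   # hidden state with feedback
        Y2[t] = W2 @ y1 + b2                     # output
    return Y2
```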
Motor Tasks Performed

Data
• 2 Owl monkeys – Belle, Carmen
• 2 Rhesus monkeys – Aurora, Ivy
• 54-192 sorted cells
• Cortices sampled: PP, M1, PMd, S1, SMA
• Neuronal activity rate and behavior are time-synchronized and downsampled to 10 Hz

[Figures: Task 1 and Task 2 hand trajectories]
Model Building Techniques
Train the adaptive system with neuronal firing rates
(100 msec) as the input and hand position as the
desired signal.
Training - 20,000 samples (~33 minutes of neuronal
firing)
Freeze weights and present novel neuronal data.
Testing - 3,000 samples – (5 minutes of neuronal
firing)
Results (Belle)
Based on 5 minutes of test data, computed over 4 sec windows (training on 30 minutes)

Model          Signal-to-error ratio (dB)    Correlation coefficient
               average      max              average     max
LMS            0.8706       7.5097           0.6373      0.9528
Kalman         0.8987       8.8942           0.6137      0.9442
TDNN           1.1270       3.6090           0.4723      0.8525
Local Linear   1.4489       23.0830          0.7443      0.9748
RNN            1.6101       32.3934          0.6483      0.9852


Physiologic Interpretation

When the fitting error is above chance, a sensitivity analysis can be performed by computing the Jacobian of the output vector with respect to each neuronal input i.

This calculation indicates which inputs (neurons) are most important for modulating the output/trajectory of the model.
Computing Sensitivities Through the Models
Identify the neurons that affect the output the most.

Feedforward Linear Eq.:
$$\mathbf{y}(t) = \mathbf{W}\mathbf{x}(t)$$

Feedforward RMLP Eqs.:
$$\mathbf{y}_1(t) = f(\mathbf{W}_1\mathbf{x}(t) + \mathbf{W}_f\mathbf{y}_1(t-1) + \mathbf{b}_1), \qquad \mathbf{y}_2(t) = \mathbf{W}_2\mathbf{y}_1(t) + \mathbf{b}_2$$

General form of Linear Sensitivity:
$$\frac{\partial \mathbf{y}(t)}{\partial \mathbf{x}(t)} = \mathbf{W}^T$$

General form of RMLP Sensitivity:
$$\frac{\partial \mathbf{y}_2(t)}{\partial \mathbf{x}(t-\tau)} = \mathbf{W}_2 \mathbf{D}_t \left( \prod_{i=1}^{\tau} \mathbf{W}_f^T \mathbf{D}_{t-i} \right) \mathbf{W}_1^T$$

where $\mathbf{D}_t$ is the diagonal matrix of hidden-unit derivatives at time t.
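A sketch of the RMLP sensitivity computed by unrolling the chain rule. It is written in the convention of the forward-pass sketch above (y1 = tanh(W1 x + Wf y1 + b1)), so the transposes differ superficially from the slide's notation; the ranking helper is a hypothetical illustration.

```python
import numpy as np

def rmlp_sensitivity(W1, Wf, W2, Y1, t, tau):
    """Jacobian dy2(t)/dx(t - tau) of the RMLP by the chain rule.

    Y1 : (T, n_hid) hidden states saved during the forward pass;
    for tanh units, D_s = diag(1 - y1(s)^2) is the derivative at time s.
    """
    def D(s):
        return np.diag(1.0 - Y1[s] ** 2)
    J = W2 @ D(t)
    for i in range(1, tau + 1):          # unroll the feedback path tau steps
        J = J @ Wf @ D(t - i)
    return J @ W1                        # shape (n_out, n_in)

def neuron_importance(J_list):
    """Rank inputs (neurons) by average output-sensitivity magnitude."""
    S = np.mean([np.abs(J) for J in J_list], axis=0)
    return S.sum(axis=0)                 # one score per input channel
```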
Data Analysis: The Effect of Sensitive Neurons on Performance

[Figures, Primate 1, Session 1: neuron sensitivities ranked over ~120 neurons (the highest-sensitivity neurons include 93, 19, 29, 5, 4, 1, 84, 7, 26, 45, 104); test-trajectory reconstructions using the highest-, middle-, and lowest-sensitivity neuron subsets; probability of movement hits versus 3D error radius (mm) for all neurons, the 10 highest-, intermediate-, and 10 lowest-sensitivity subsets.]

The decay trend appears in all animals and behavioral paradigms.
Directional Tuning vs. Sensitivity of Ranked Cells

[Figure: directional tuning and sensitivity of ranked cells]

Significance: Sensitivity analysis through trained models automatically delivers deeply tuned cells that span the space.
Reaching Movement Segmentation

[Figure: X, Y, Z hand coordinates over a trial, segmented into Rest to Food, Food to Mouth, and Mouth to Rest phases]

How does each cortical area contribute to the reconstruction of this movement?
Cortical Contributions (Belle, Day 2)

Area 1: PP, Area 2: M1, Area 3: PMd, Area 4: M1 (right)

[Figure: reconstructed trajectories for every combination of cortical inputs – Areas 1, 2, 3, 4, 12, 13, 14, 23, 24, 34, 123, 124, 134, 234, 1234]

Train 15 separate RMLPs with every combination of cortical input.


Is there enough information in spike trains for modeling movement?
Analysis is based on the time-embedded model.
Correlation with the desired signal is based on a linear filter output for each neuron.
Utilize a non-stationary tracking algorithm:
Parameters are updated by LMS
Build a spatial filter
Adaptive in real time
Sparse structure based on regularization enables selection
Architecture

[Diagram: each neuron x_1(n) … x_M(n) feeds an L-tap FIR filter with weights w_i1 … w_iL, producing y_1(n) … y_M(n); the filter outputs are combined by spatial coefficients c_1 … c_M to form the estimate d̂(n).]

Temporal tap weights adapted by LMS; spatial coefficients adapted by on-line LAR.

(Kim et al., MLSP, 2004)
Training Algorithms

The tap weights for every time lag are updated by LMS:

$$w_{ij}(n+1) = w_{ij}(n) + 2\eta\, e(n)\, x_{ij}(n)$$

Then the spatial filter coefficients are obtained by an on-line version of least angle regression (LAR) (Efron et al., 2004):

1. Start with residual r = y and all coefficients at zero.
2. Find the input most correlated with the residual: $\arg\max_j |x_j^T r|$.
3. Adjust $c_j$ until another input ties, i.e., $|x_k^T r| = |x_j^T r|$.
4. Continue along the equiangular direction, adding inputs as they reach the same correlation.
Application to BMI Data – Tracking Performance

[Figure: tracking performance]

Application to BMI Data – Neuronal Subset Selection

[Figure: hand trajectory (z) with the neuronal channel indices selected for the early and late parts of the trajectory]
Generative Models for BMIs

Use partial information about the physiological system, normally in the form of states.

They can be applied either to binned data or to spike trains directly.

Here we will only cover the spike train implementations.

Difficulty of spike train analysis:
Spike trains are point processes, i.e., all the information is contained in the timing of events, not in the amplitude of the signals!
Goal
Build an adaptive signal processing framework for BMI decoding in the spike domain.

Features of spike domain analysis:
Binning window size is not a concern
Preserves the randomness of the neuron behavior
Provides more understanding of neuron physiology (tuning) and interactions at the cell assembly level
Infers kinematics online
Deals with nonstationarity
More computation with millisecond time resolution
Recursive Bayesian Approach

$$X_t = F_t(X_{t-1}, v_{t-1}) \quad \text{(state model)}$$
$$Z_t = H_t(X_t, n_t) \quad \text{(time-series observation model, continuous observations)}$$

The recursion alternates Prediction and Updating of P(state | observation).
Recursive Bayesian approach
State space representation:

$$x_{t+1} = f(x_t) + v_t$$
$$z_t = h_t(u_t, x_t) + n_t$$

The first equation (system model) defines a first order Markov process. The second equation (observation model) defines the likelihood of the observations $p(z_t|x_t)$. The problem is completely defined by the prior distribution $p(x_0)$.

Although the posterior distribution $p(x_{0:t}|u_{1:t}, z_{1:t})$ constitutes the complete solution, the filtering density $p(x_t|u_{1:t}, z_{1:t})$ is normally used for on-line problems.

The general solution methodology is to integrate over the unknown variables (marginalization).
Recursive Bayesian approach

There are two stages to update the filtering density:

Prediction (Chapman-Kolmogorov):
$$p(x_t|u_{1:t-1}, z_{1:t-1}) = \int p(x_t|x_{t-1})\, p(x_{t-1}|u_{1:t-1}, z_{1:t-1})\, dx_{t-1}$$
The system model $p(x_t|x_{t-1})$ propagates the posterior density into the future.

Update:
$$p(x_t|u_{1:t}, z_{1:t}) = \frac{p(z_t|x_t, u_t)\, p(x_t|u_{1:t-1}, z_{1:t-1})}{p(z_t|u_{1:t}, z_{1:t-1})}$$
Uses Bayes rule to update the filtering density. The following relations are needed in the solution:

$$p(x_t|x_{t-1}) = \int p(x_t|v_{t-1}, x_{t-1})\, p(v_{t-1}|x_{t-1})\, dv_{t-1} = \int \delta(x_t - f(x_{t-1}) - v_{t-1})\, p(v_{t-1})\, dv_{t-1}$$

$$p(z_t|x_t, u_t) = \int \delta(z_t - h(u_t, x_t) - n_t)\, p(n_t)\, dn_t$$

$$p(z_t|z_{1:t-1}, u_{1:t}) = \int p(z_t|x_t, u_t)\, p(x_t|z_{1:t-1}, u_{1:t-1})\, dx_t$$
Kalman filter for BMI decoding

For Gaussian noises and linear prediction and observation models, there is an analytic solution called the Kalman filter.

Linear kinematic state model with Gaussian noise; linear neuron tuning function mapping the state to continuous firing-rate observations. The recursion alternates Prediction and Updating of P(state | observation).

[Wu et al. 2006]
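A minimal sketch of the resulting Kalman recursion, assuming the linear kinematic model F, tuning matrix H, and noise covariances Q, Rn have already been estimated from training data (all names hypothetical).

```python
import numpy as np

def kalman_decode(Z, F, H, Q, Rn, x0, P0):
    """Kalman filter decoding: linear kinematics, linear tuning.

    x_t = F x_{t-1} + v,  v ~ N(0, Q)    (kinematic state model)
    z_t = H x_t + n,      n ~ N(0, Rn)   (firing-rate observation model)
    Z : (T, n_neurons) binned firing rates. Returns filtered states.
    """
    x, P = x0, P0
    Xs = []
    for z in Z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + Rn                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(P.shape[0]) - K @ H) @ P
        Xs.append(x)
    return np.array(Xs)
```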


Particle Filter for BMI decoding
In general the integrals need to be approximated by sums using Monte Carlo integration, with a set of samples drawn from the posterior distribution of the model parameters.

Linear kinematic state model; exponential neuron tuning function mapping the state to continuous firing-rate observations; non-Gaussian posterior. The recursion alternates Prediction and Updating of P(state | observation).

[Brockwell et al. 2004]
State estimation framework for BMI decoding in the spike domain

[Figure: velocity trace and the corresponding multi-neuron spike trains]

Kinematics (state), neural tuning (function), multi-spike trains (observation).

Decoding:
$$x_k = F_k(x_{k-1}, v_{k-1}) \quad \text{(kinematic dynamic model)}$$
$$z_k = H_k(x_k, n_k) \quad \text{(tuning function)}$$

Key idea: work with the probability of spike firing, which is a continuous random variable.
Adaptive algorithm for point processes

Linear kinematic state model with Gaussian noise; nonlinear neuron tuning function (Poisson model) mapping the state to point-process spike-train observations. The recursion alternates Prediction and Updating of P(state | observation).

[Brown et al. 2001]


Monte Carlo Sequential estimation for point process

Nonlinear, non-Gaussian kinematic state model; nonlinear neuron tuning function mapping the state to point-process spike-train observations. Prediction and Updating of P(state | observation) by sequentially estimating the PDF.

[Wang et al. 2006]
Monte Carlo sequential estimation framework for BMI decoding in the spike domain

STEP 1. Preprocessing
1. Generate spike trains from stored spike times at 10 ms intervals (99.62% binary train).
2. Synchronize all the kinematics with the spike trains.
3. Assign the kinematic vector x to reconstruct: X = [position velocity acceleration]'
(more information; the instantaneous state avoids error accumulation; less computation)
STEP 2. Neural tuning analysis (encoding): kinematics to neural spike trains

[Figure: polar tuning plot of firing rate vs. movement direction; an example of a tuned neuron – neuron 72, tuning depth 1]

Circular mean: $\bar\theta = \arg\big(\sum_N r_N e^{i\theta_N}\big)$

Metric – tuning depth: how differently does a neuron fire across directions?
$$D = (\max - \min)/\mathrm{std} \quad \text{(of the firing rate)}$$
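A small numpy sketch of both metrics, assuming binary spike counts per bin and movement angles in radians (helper names and the bin count are hypothetical).

```python
import numpy as np

def tuning_curve(spikes, angles, n_bins=16):
    """Mean firing rate per movement-direction bin."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(angles, edges) - 1
    return np.array([spikes[idx == b].mean() if np.any(idx == b) else 0.0
                     for b in range(n_bins)])

def preferred_direction(rates):
    """Circular mean of the tuning curve: arg(sum r_k e^{i theta_k})."""
    theta = np.linspace(-np.pi, np.pi, len(rates), endpoint=False)
    return np.angle(np.sum(rates * np.exp(1j * theta)))

def tuning_depth(rates):
    """D = (max - min) / std of the firing rate across directions."""
    return (rates.max() - rates.min()) / rates.std()
```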


Step 2 – Information Theoretic Metric of Tuning

Mutual information between neural spikes and the kinematic direction angle:

$$I(spike; angle) = \sum_{angle} p(angle) \sum_{spike \in \{0,1\}} p(spike|angle)\, \log_2\!\left(\frac{p(spike|angle)}{p(spike)}\right)$$

$$p(spike=1 | angle) = \frac{p(spike=1, angle)}{p(angle)}$$
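A histogram-based sketch of this metric for one neuron, assuming binary spikes per 10 ms bin and direction angles in radians; the angular binning is an illustrative choice.

```python
import numpy as np

def spike_angle_mutual_info(spikes, angles, n_bins=16):
    """I(spike; angle) in bits, with spike in {0, 1} per bin.

    Estimates p(angle), p(spike|angle), p(spike) by counting, then sums
    p(angle) p(spike|angle) log2( p(spike|angle) / p(spike) ).
    """
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(angles, edges) - 1
    p_spike = np.array([np.mean(spikes == 0), np.mean(spikes == 1)])
    mi = 0.0
    for b in range(n_bins):
        mask = idx == b
        p_a = mask.mean()
        if p_a == 0:
            continue
        for s in (0, 1):
            p_s_a = np.mean(spikes[mask] == s)
            if p_s_a > 0:
                mi += p_a * p_s_a * np.log2(p_s_a / p_spike[s])
    return mi
```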
Step 2 – Information theoretic tuning depths for 3 kinds of kinematics (log axis) [figure]
Step 2 – Tuning Function Estimation

Neural firing model: velocity → linear filter k → nonlinearity f → Poisson model → spikes

$$\lambda_t = f(\mathbf{k} \cdot \mathbf{v}_t)$$
$$spike_t \sim \mathrm{Poisson}(\lambda_t)$$

Assumption: generation of the spikes depends only on the kinematic vector we choose.
Step 2 – Linear Filter Estimation
Spike Triggered Average (STA):

$$\mathbf{k} = (E[\mathbf{v}\mathbf{v}^T] + \lambda \mathbf{I})^{-1}\, E_{\mathbf{v}|spike}[\mathbf{v}]$$

Geometric interpretation:
[Figure: first two principal components of the velocity distribution Vp and of the spike-triggered velocities VpS for neuron 72]
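A minimal sketch of the regularized STA above; the ridge parameter is an illustrative stabilizer.

```python
import numpy as np

def regularized_sta(V, spikes, ridge=1e-3):
    """Linear filter k = (E[v v^T] + ridge*I)^{-1} E[v | spike].

    V : (T, d) kinematic vectors (e.g., velocity), spikes : (T,) binary.
    The spike-triggered average is whitened by the stimulus covariance.
    """
    C = V.T @ V / len(V)                  # E[v v^T]
    sta = V[spikes > 0].mean(axis=0)      # E[v | spike]
    return np.linalg.solve(C + ridge * np.eye(C.shape[0]), sta)
```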


Step 2 – Nonlinear f estimation [figure]
Step 2 – Diversity of neural nonlinear properties [figure]

Ref: Paradoxical cold [Hensel et al. 1959]

Step 2 – Estimated firing probability and generated spikes [figure]
Step 3: Sequential Estimation Algorithm for Point Process Filtering

Consider the neuron as an inhomogeneous Poisson point process:

$$\lambda(t_k) = \exp\{\mu_k + \beta_k \mathbf{v}_k\}$$

Observing N(t) spikes in an interval T, the conditional intensity of the spike model is

$$\lambda(t | x(t), \theta(t), H(t)) = \lim_{\Delta t \to 0} \frac{\Pr(N(t+\Delta t) - N(t) = 1 \,|\, x(t), \theta(t), H(t))}{\Delta t}$$

The probability of observing an event in $\Delta t$ is

$$P(N_k | x_k, H_k) = \big(\lambda(t_k | x_k, H_k)\,\Delta t\big)^{N_k} \exp\big(-\lambda(t_k | x_k, H_k)\,\Delta t\big)$$

And the one step prediction density (Chapman-Kolmogorov):

$$p(x_k | H_k) = \int p(x_k | x_{k-1}, H_k)\, p(x_{k-1} | N_{k-1}, H_{k-1})\, dx_{k-1}$$

The posterior of the state vector, given an observation $N_k$:

$$p(x_k | N_k, H_k) = \frac{P(N_k | x_k, H_k)\, p(x_k | H_k)}{p(N_k | H_k)}$$

Step 3: Sequential Estimation Algorithm for Point Process Filtering

Monte Carlo methods are used to estimate the integral. Let $\{x_{0:k}^i, w_k^i\}_{i=1}^{N_S}$ represent a random measure on the posterior density, and represent the proposal density by $q(x_{0:k} | N_{1:k})$.

The posterior density can then be approximated by

$$p(x_{0:t} | N_{1:t}) \approx \sum_{i=1}^{N_S} w_t^i\, k(x_{0:t} - x_{0:t}^i, \sigma)$$

Generating samples from $q(x_{0:k} | N_{1:k})$ using the principle of importance sampling:

$$w_k^i \propto \frac{p(x_{0:k}^i | N_{1:k})}{q(x_{0:k}^i | N_{1:k})}, \qquad w_k^i = w_{k-1}^i\, \frac{p(N_k | x_k^i)\, p(x_k^i | x_{k-1}^i)}{q(x_k^i | x_{k-1}^i, N_k)}$$

By MLE we can find the maximum, or use direct estimation with kernels of mean and variance:

$$\tilde{x}_k = \sum_{i=1}^{N_S} p(N_k | x_k^i)\, x_k^i, \qquad \tilde{V}_k = \sum_{i=1}^{N_S} p(N_k | x_k^i)\, (x_k^i - \tilde{x}_k)(x_k^i - \tilde{x}_k)^T$$
[Figure: posterior density of velocity at time index 45.092 s, with the desired velocity and the estimates by sequential estimation (collapse), sequential estimation (MLE), and adaptive filtering]
Step 3: Causality concerns

Mutual information between the spikes and the filtered kinematics as a function of lag:

$$I(spike; KX)(lag) = \sum_{KX} p(KX(lag)) \sum_{spike \in \{0,1\}} p(spike | KX(lag))\, \log_2\!\left(\frac{p(spike | KX(lag))}{p(spike)}\right)$$
Step 3: Information Estimated Delays

[Figure 3-14: Mutual information I(spk, KX) as a function of time delay (0-500 ms) for 5 neurons (80, 72, 99, 108, 77).]

For 185 neurons, the average delay is 220.108 ms.


Step 4: Monte Carlo sequential kinematics estimation

Kinematic state model (non-Gaussian): $X_t^i = F_t X_{t-1}^i + v_{t-1}^i$
Neural tuning function: $\lambda_t^i = f(\mathbf{k} \cdot X_t^i)$
Spike-train observations: $N_t^{(j)}$

Prediction and Updating of P(state | observation):

$$p(x_{0:t} | N_{1:t}^{(j)}) \approx \sum_{i=1}^{N} w_t^i\, \delta(x_{0:t} - x_{0:t}^i), \qquad w_t^i = w_{t-1}^i\, p(N_t^{(j)} | \lambda_t^i)$$

$$p(x_k | N_{1:k}) \approx \sum_{i=1}^{N} W_k^i\, k(x_k - x_k^i)$$
Reconstruct the kinematics from neuron spike trains

[Figure: desired vs. reconstructed trajectories over the test segment. Correlation coefficients (expectation / MLE): Px 0.80243 / 0.67264, Py 0.97445 / 0.95376, Vx 0.81539 / 0.8151, Vy 0.91319 / 0.91162, Ax 0.015071 / 0.040027, Ay 0.7002 / 0.69188]
Results comparison

Table 3-2 Correlation Coefficients between the Desired Kinematics and the Reconstructions

CC            Position          Velocity          Acceleration
              x        y        x        y        x        y
Expectation   0.8161   0.8730   0.7856   0.8133   0.5066   0.4851
MLE           0.7750   0.8512   0.7707   0.7901   0.4795   0.4775

Table 3-3 Correlation Coefficient Evaluated by the Sliding Window

CC            Position              Velocity              Acceleration
              x          y          x          y          x          y
Expectation   0.8401     0.8945     0.7944     0.8142     0.5256     0.4460
              ±0.0738    ±0.0477    ±0.0578    ±0.0658    ±0.0658    ±0.1495
MLE           0.7984     0.8721     0.7805     0.7918     0.4950     0.4471
              ±0.0963    ±0.0675    ±0.0491    ±0.0710    ±0.0430    ±0.1399

[Sanchez, 2004]
Conclusion
Our results and those from other laboratories show it is possible to
extract intent of movement for trajectories from multielectrode array
data.
The current results are very promising, but the setups have limited difficulty, and the performance seems to have reached a ceiling at an uncomfortable CC < 0.9.
Recently, spike-based methods have been developed in the hope of improving performance, but the difficulties in these models are many.
Experimental paradigms to move the field beyond the present level need to address issues of:
Training (no desired response in paraplegic)
How to cope with coarse sampling of the neural population
How to include more neurophysiology knowledge in the design
