
A Literature Survey on Entropy Measures

Assignment IV Report

University of Manitoba
Faculty of Engineering
Department of Mechanical and Manufacturing Engineering

MECG 7930 Advanced Nonlinear Systems Analysis

April 2015 (Winter 2015)

Contents

CHAPTER 1 - DYNAMICAL SYSTEMS, SHORT AND NOISY DATA SAMPLES, AND ENTROPY MEASUREMENTS
1.1 Dynamical Systems
1.2 Shannon's Entropy
1.3 Concept of Entropy in Dynamical Systems
1.4 Entropy in Time Measured Signals
1.5 Short and Noisy Time Series
CHAPTER 2 - GRAHAM'S ENTROPIES
2.1 Coarse Time Series Quantization and Vector Identifiers
2.2 Quantized Dynamical Entropy
2.3 Quantized Approximation of Sample Entropy
CHAPTER 3 - ENTROPY FEATURES ANALYSIS AND COMPARISON
3.1 Dependence on Data Length
3.2 Robustness to Observational Noise
3.3 Computational Efficiency
CHAPTER 4 - CONCLUSIONS
References


CHAPTER 1 - DYNAMICAL SYSTEMS, SHORT AND NOISY DATA SAMPLES, AND ENTROPY MEASUREMENTS

1.1 Dynamical Systems


A dynamical system is a collection of elements that interact as time evolves and that can be observed through measurement. A system has many possible states, and all of them are represented in the state space of the system. A path in the state space describes the dynamics of the dynamical system. Generally, studying a dynamical system over the short term gives a reasonable picture of its behaviour; over the long term, however, dynamical systems are hard to predict.

A dynamical system can be deterministic or stochastic. A deterministic system can be described by a set of differential (or integral) equations, in the case of a continuous system, with no reference to chance. In other words, this set of equations describes how the states of the system change with time. On the other hand, stochastic dynamical systems are random processes in which the occurrence of the current event has no influence on the next. As an example of a deterministic dynamical system, consider a single-link pendulum: knowing the position and velocity at the initial point, all future points are predictable. A double pendulum, in contrast, may enter a chaotic regime, which shortens the horizon of predictability of that specific system. Rolling a die is a good example of a stochastic process, where the last toss has no influence on the next.

In order to understand a dynamical system, one should know its level of unpredictability. There are several mathematical and statistical methods to quantify it. Various fractal dimensions, from the topological to the information dimension, have been used to characterize the uncertainty of a system. One of the newer measures, which evolved from the field of communication to mathematically described dynamical systems and was then modified for application to measured time signals, is the family of entropy measures, which is studied thoroughly in this literature survey.

1.2 Shannon's Entropy


In 1948, Shannon (known as the father of information theory) published an article titled "A mathematical theory of communication" to address the rate of information produced by each character in language-based communication systems [1]. A higher value of Shannon's entropy indicates new information, while lower values occur when a new character merely confirms information we already knew.

If there is a set of possible events with probabilities of occurrence p_1, p_2, ..., p_n, where n is the number of events, a measure of the amount of uncertainty should have three properties. First, it should be continuous in the p_i; second, if all events have the same probability of occurrence, it should be a monotonically increasing function of n; and third, if a choice is broken down into two successive choices, it should be the weighted sum of the individual values of each of them. The only equation satisfying these three conditions is of the form
H = -\sum_{i=1}^{n} p_i \log p_i    (1-1)

where H is called the Shannon entropy. This value is a measure of the rate of information produced or, in other words, the amount of uncertainty of a system.

The Shannon entropy, also known as statistical entropy, is a non-negative function of p_1, p_2, ..., p_n, and when the probability of one event equals 1 the result is zero. It should also be noted that Shannon entropy was introduced in the field of communication and was not originally concerned with measuring the unpredictability of dynamical systems.
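As a minimal illustration of equation (1-1), the sketch below estimates the Shannon entropy of a measured signal by binning its values into a histogram. The bin count and the use of the natural logarithm are illustrative assumptions, not choices made in this survey.

import numpy as np

def shannon_entropy(x, bins=10):
    """Estimate Shannon entropy of a 1-D signal from a histogram of its values."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()            # empirical probabilities p_i
    p = p[p > 0]                         # drop empty bins so log(0) never occurs
    return -np.sum(p * np.log(p))        # H = -sum_i p_i ln p_i, eq. (1-1)

# A uniform random signal carries more uncertainty than a constant one
rng = np.random.default_rng(0)
print(shannon_entropy(rng.uniform(size=1000)))   # near ln(10) for 10 bins
print(shannon_entropy(np.full(1000, 5.0)))       # 0: a single certain outcome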

1.3 Concept of Entropy in Dynamical Systems


In the late 1950s, the Russian mathematician Kolmogorov came up with the idea of generalizing Shannon entropy to measure the unpredictability of a dynamical system. He introduced the idea in his Moscow seminar, where Sinai developed a mathematical foundation for quantifying the complexity of a given measure-preserving dynamical system. This generalized entropy measure, known as the Kolmogorov-Sinai entropy (KS entropy), and also as the Kolmogorov entropy or metric entropy, is an invariant property of the dynamical system.

Consider a dynamical system with F degrees of freedom¹, where the F-dimensional phase space is partitioned into boxes of size \epsilon^F. Suppose that there is an attractor² in phase space and that the trajectory x(t) is in the basin of attraction³. If the state of the system is measured at time intervals \tau, then p(i_1, i_2, \ldots, i_d) is the joint probability that x(t=\tau) is in box i_1, x(t=2\tau) is in box i_2, ..., and x(t=d\tau) is in box i_d. The KS entropy can then be defined as:

K = -\lim_{\tau \to 0} \lim_{\epsilon \to 0} \lim_{d \to \infty} \frac{1}{d\tau} \sum_{i_1, \ldots, i_d} p(i_1, i_2, \ldots, i_d) \ln p(i_1, i_2, \ldots, i_d)    (1-2)

The calculated value of K is a good indicator of the character of a given dynamical system [2]. If K = 0, the system is completely ordered, and by knowing the initial condition the future states of the dynamical system are absolutely predictable. K is infinite for a random or stochastic system, where the occurrence of one event does not reveal anything about the successive event. If K is a finite positive constant, the system is deterministic and chaotic. In other words, the more unpredictable a system is, the higher its KS entropy. Furthermore, apart from being a rich generalization of Shannon entropy, the KS entropy has some correspondence with entropy in thermodynamics, where increasing disorder increases entropy.

1. The number of independent variables required to describe the instantaneous state of the system.
2. A set of numerical values toward which a system tends to evolve, for a wide variety of starting conditions of the system.
3. The set of initial conditions leading to the same asymptotic behaviour of the trajectory.

1.4 Entropy in Time Measured Signals


Although the KS entropy is an invariant property of a given dynamical system and can be a sound indicator of how ordered the system is, it is very difficult to determine K directly from a measured time signal. In other words, if the descriptive mathematical equations of a dynamical system are available, the KS entropy is a strong tool for measuring the level of complexity in that system. But when only a measured time signal is available, evaluating the limits in equation (1-2) is a major obstacle, since the number of sampling points is limited.
Grassberger et al. modified the KS entropy using the order-q Rényi entropies, in the form [3]:

K_q = -\lim_{\tau \to 0} \lim_{\epsilon \to 0} \lim_{d \to \infty} \frac{1}{d\tau} \frac{1}{q-1} \ln \sum_{i_1, \ldots, i_d} p^q(i_1, i_2, \ldots, i_d)    (1-3)

They found that, of all the order-q quantities K_q, K_2 is the most practical due to its ease of calculation from a measured time series. Generically, the whole trajectory can be reconstructed from d measurements (with d \ge F) of any single coordinate. Taking any coordinate and denoting it by X, we consider:

C_d(\epsilon) = \lim_{N \to \infty} \frac{1}{N^2} \times \#\left\{ \text{pairs } (n, m) : \left( \sum_{i=1}^{d} |X_{n+i} - X_{m+i}|^2 \right)^{1/2} < \epsilon \right\}    (1-4)

Then K_{2,d} can be calculated using the term C_d(\epsilon) as follows:

K_{2,d}(\epsilon) = \frac{1}{\tau} \ln \frac{C_d(\epsilon)}{C_{d+1}(\epsilon)}    (1-5)

and

K_2 \approx \lim_{\epsilon \to 0} \lim_{d \to \infty} K_{2,d}(\epsilon)    (1-6)

As shown, a very good lower bound on the KS entropy can be calculated from an experimental time series using the Grassberger et al. modification.
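To make equations (1-4) to (1-6) concrete, the sketch below estimates the correlation sums C_d(ε) from a delay-embedded scalar series and forms K_{2,d}(ε). The embedding dimension, the tolerance ε, the unit sampling interval τ = 1, and the logistic-map test signal are illustrative assumptions, not values taken from the survey.

import numpy as np

def correlation_sum(x, d, eps):
    """C_d(eps): fraction of embedded-vector pairs closer than eps (Euclidean norm)."""
    n = len(x) - d + 1
    vecs = np.column_stack([x[i:i + n] for i in range(d)])   # delay embedding, lag 1
    dists = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)                             # distinct pairs only
    return np.mean(dists[iu] < eps)

def k2_estimate(x, d, eps, tau=1.0):
    """K_{2,d}(eps) = (1/tau) * ln[ C_d(eps) / C_{d+1}(eps) ], eq. (1-5)."""
    return np.log(correlation_sum(x, d, eps) / correlation_sum(x, d + 1, eps)) / tau

# Illustrative use on a chaotic logistic-map series (parameter choices are assumptions)
x = np.empty(1200)
x[0] = 0.4
for i in range(len(x) - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
print(k2_estimate(x, d=5, eps=0.05))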

1.5 Short and Noisy Time Series


Most biological signals, such as blood pressure, EEG, and ECG, are short in the number of sampling points and are heavily contaminated by environmental noise. As a consequence, the KS entropy and the modified version based on the order-q Rényi family are not applicable. To overcome this, Pincus [4] modified the KS entropy building on the earlier work of Eckmann and Ruelle [5].

Consider a short time series {u(1), u(2), ..., u(N)} obtained from a biological signal. Take two parameters m and r, where m is the embedding dimension and r is the tolerance within which two vectors are considered similar. Then build a template vector x(i) as follows:
x(i) = [u(i), u(i+1), \ldots, u(i+m-1)]    (1-7)

Comparing vectors in search of similarity can be achieved by calculating the Chebyshev distance between pairs of vectors. Define

C_i^m(r) = \frac{\#\{\, j : d[X_i, X_j] \le r \,\}}{N - m + 1}    (1-8)

where d[X_i, X_j] is the Chebyshev distance between the two vectors X_i and X_j, calculated as follows:

d[X_i, X_j] = \max_{k=1,2,\ldots,m} |u(i+k-1) - u(j+k-1)|    (1-9)


Next define

\Phi^m(r) = (N-m+1)^{-1} \sum_{i=1}^{N-m+1} \ln C_i^m(r)    (1-10)

and define

ApEn(m, r, N) = \Phi^m(r) - \Phi^{m+1}(r)    (1-11)

Given N points, the family of statistics ApEn(m, r, N) is approximately equal to the negative average natural logarithm of the conditional probability that two sequences that are similar for m points remain similar, within a tolerance r, at the next point. Equation (1-8) measures the regularity, or frequency of occurrence, of similar patterns at a tolerance r, and equation (1-11) represents the average stability of those similar patterns upon incrementing the template length.
Since ApEn is fully dependent on the values of the fixed parameters m and r, it is no longer an invariant property of the dynamical system; it is a statistical measurement that is recommended to be used in conjunction with other statistics. For a specific m, at least 10^m to 20^m data points are required for the ApEn calculation.

The main drawback of ApEn is the bias caused by self-matching, which is included to avoid ln(0) in equation (1-10). This bias makes the ApEn measure heavily dependent on the record length: for short time series, more similarity is observed than is actually present. Despite this inefficiency, ApEn is widely applied in cardiovascular studies.
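As an illustration of equations (1-7) to (1-11), a compact approximate-entropy routine might look like the following; the choices m = 2 and r = 0.2 times the standard deviation are common conventions assumed here, not values prescribed by the survey.

import numpy as np

def apen(u, m=2, r=0.2):
    """Approximate entropy ApEn(m, r, N); r is given as a fraction of std(u)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    tol = r * np.std(u)

    def phi(mm):
        # All template vectors of length mm, eq. (1-7)
        x = np.array([u[i:i + mm] for i in range(n - mm + 1)])
        # Chebyshev distances between every pair of templates, eq. (1-9)
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=-1)
        # C_i^m(r): fraction of templates within tolerance (self-match included)
        c = np.mean(d <= tol, axis=1)
        return np.mean(np.log(c))          # Phi^m(r), eq. (1-10)

    return phi(m) - phi(m + 1)             # eq. (1-11)

print(apen(np.sin(np.linspace(0, 20 * np.pi, 1000))))   # low value: regular signal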


Richman and Moorman [6] modified ApEn to avoid self-matching in the preliminary stages of the calculation. As in ApEn, consider a time series of N points, u(1), u(2), ..., u(N), i.e. {u(j) : 1 \le j \le N}, and define vectors as

X_m(i) = \{ u(i+k) : 0 \le k \le m-1 \}, \quad 1 \le i \le N-m+1    (1-12)

and use the Chebyshev distance to measure the persistence of the time series as

d[X_m(i), X_m(j)] = \max\{ |u(i+k) - u(j+k)| : 0 \le k \le m-1 \}    (1-13)

Next define

B_i^m(r) = \frac{1}{N-m-1} \, \#\{\, j : d[X_m(i), X_m(j)] \le r, \; 1 \le j \le N-m, \; j \ne i \,\}    (1-14)

A_i^m(r) = \frac{1}{N-m-1} \, \#\{\, j : d[X_{m+1}(i), X_{m+1}(j)] \le r, \; 1 \le j \le N-m, \; j \ne i \,\}    (1-15)

and then use B_i^m(r) and A_i^m(r) to define

B^m(r) = \frac{1}{N-m} \sum_{i=1}^{N-m} B_i^m(r)    (1-16)

A^m(r) = \frac{1}{N-m} \sum_{i=1}^{N-m} A_i^m(r)    (1-17)

Clearly, up to this point there is no need for self-vector comparison, since there is no issue of counts equal to zero entering a logarithm. B^m(r) is then the probability that two sequences will match for m points, whereas A^m(r) is the probability that two sequences will match for m+1 points.

Then define the parameter SampEn(m, r) as

SampEn(m, r) = \lim_{N \to \infty} \left[ -\ln \frac{A^m(r)}{B^m(r)} \right]    (1-18)

which can be estimated by the family of statistics

SampEn(m, r, N) = -\ln \frac{A^m(r)}{B^m(r)}    (1-19)

where m, r, and N are the embedding dimension (length of the template vector), the tolerance, and the length of the time series, respectively. SampEn(m, r) will be defined except in two situations:

1- B = 0, in which case no regularity has been detected.

2- A = 0, which corresponds to a conditional probability of 0 and an infinite value of SampEn(m, r).
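A corresponding sample-entropy sketch following equations (1-12) to (1-19) is given below; again, m = 2 and r = 0.2 times the standard deviation are assumed, conventional values.

import numpy as np

def sampen(u, m=2, r=0.2):
    """Sample entropy SampEn(m, r, N) = -ln(A/B), with self-matches excluded."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    tol = r * np.std(u)

    def match_count(length):
        # Use the same N - m templates for both lengths so that A/B is the
        # conditional probability of eqs. (1-14) to (1-17).
        x = np.array([u[i:i + length] for i in range(n - m)])
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=-1)   # Chebyshev
        return np.sum(d <= tol) - (n - m)      # subtract the diagonal self-matches

    b = match_count(m)         # proportional to B^m(r)
    a = match_count(m + 1)     # proportional to A^m(r)
    return -np.log(a / b)      # eq. (1-19); shared normalizations cancel

# White noise gives a relatively high SampEn; a pure tone gives a low one
rng = np.random.default_rng(1)
print(sampen(rng.normal(size=1000)))
print(sampen(np.sin(np.linspace(0, 20 * np.pi, 1000))))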


CHAPTER 2 - GRAHAM'S ENTROPIES

The characterization of the complexity of real-world systems from their time-series data has always been a problem for two main reasons: the number of data samples obtainable, and observational noise. As seen in the previous chapter, from the introduction of entropy to measure the complexity of dynamical systems by Kolmogorov and Sinai (K-S entropy [2]), the subsequent efforts (for example the K2 entropy [3], approximate entropy [4], and sample entropy [6]) have mainly focused on how to tackle these two major issues in experimental data and to represent the complexity of the system meaningfully.

In this chapter, two new entropies derived by Graham [7] are presented, which are capable of characterizing the complexity of systems using experimental time-series data. The main objective of these two entropy algorithms is improved computational efficiency while still characterizing system complexity meaningfully. To achieve this, the core of both entropy-calculation algorithms is a method called relative coarse quantization of the time-series data. This coarse quantization of the data samples reduces the computational time drastically compared with the other entropy methods, as was seen during the computational simulations.

In section 2.1, the coarse time-series quantization method is presented; then the two entropy measures, viz. Quantized Dynamical Entropy (QDE) and Quantized Approximation of Sample Entropy (QASE), are presented in sections 2.2 and 2.3. In Chapter 3, these two algorithms are analyzed alongside two other widely used entropies, approximate entropy (ApEn) and sample entropy (SampEn). It was considered important to analyze these entropy measures with respect to the effect of data length (convergence of entropy values with the number of data samples), robustness to observational noise, and finally computational efficiency.

2.1 Coarse Time Series Quantization and Vector Identifiers

The coarse time-series quantization method quantizes the time-domain data into whole-numbered bins, similar in spirit to the quantum packets of energy in quantum theory. These quantized data are then given a particular identification value, called a vector identifier. The next step is to count the number of vector identifiers with the same value; both QDE and QASE rely on these counts of vector identifiers when it comes to calculating the entropy of the system.

First, the time series can be defined as

X = \{ x(i), \; i = 1, 2, \ldots, N \}, \quad N = \text{number of data samples}


The coarse quantization is done using a strictly positive parameter r (the quantization coarseness parameter), which defines the size of the quantization bin. The quantized time series is then

X_q = \left\lfloor \frac{X - \min(X)}{r} \right\rfloor, \quad r > 0

where min(X) is the minimum value of the time series and \lfloor \cdot \rfloor is the floor function, which rounds the value inside down to the nearest whole number, i.e. towards negative infinity.

These quantized data samples are then grouped into vectors (called vector groups) using an embedding dimension m (m \le N-1), which determines the length of each vector. Call these vector groups V_j, where 1 \le j \le N-m+1. The vector identifiers \phi_j are defined on these vector groups in the next step, in the following way:

\phi_j = \sum_{i=1}^{m} V_j(i) \, h^{i-1}, \quad 1 \le j \le N-m+1, \quad h = \max(X_q)

Then it is possible to define the number of occurrences of each identifier as

Q(\phi) = \#\{\, j : 1 \le j \le N-m+1, \; [X_q(j), X_q(j+1), \ldots, X_q(j+m-1)] \text{ has identifier } \phi \,\}

Now, based on these counts of each vector identifier, it is possible to construct the two entropies, quantized dynamical entropy and quantized approximation of sample entropy. Since QDE is the more direct of the two, it is described first, in the following section.
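A sketch of the quantization and identifier construction described above might look as follows; the example values of m and r are assumptions, and the base h = max(X_q) follows the definition stated above.

import numpy as np
from collections import Counter

def vector_identifiers(x, m, r):
    """Coarsely quantize a time series and count the vector identifiers Q(phi)."""
    x = np.asarray(x, dtype=float)
    xq = np.floor((x - x.min()) / r).astype(int)     # quantized series X_q
    h = xq.max()                                     # base h = max(X_q), as stated above
    powers = h ** np.arange(m)
    # phi_j = sum_i V_j(i) * h^(i-1) collapses each length-m vector to one number
    phis = [int(np.dot(xq[j:j + m], powers)) for j in range(len(xq) - m + 1)]
    return Counter(phis)                             # occurrences of each identifier

# Illustrative use: a regular signal produces far fewer distinct identifiers
t = np.linspace(0, 20 * np.pi, 1000)
print(len(vector_identifiers(np.sin(t), m=2, r=0.1)))                        # few
rng = np.random.default_rng(2)
print(len(vector_identifiers(rng.normal(size=1000), m=2, r=0.1)))            # many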

2.2 Quantized Dynamical Entropy


The counts of the vector identifiers can be used in conjunction with Shannon's entropy by treating them as a probability distribution over the data samples. A more predictable and regular time series will contain many repetitions of the same vector identifiers, which leads to a low entropy value; on the other hand, an irregular and unpredictable time-domain signal will produce many distinct vector identifiers with fewer repetitions of each identifier value, leading to a higher QDE.

First, it is useful to restate Shannon's entropy formula,

H = -\sum_{i} p_i \log p_i

Then the probability associated with each identifier is given by

p(\phi_j) = \frac{Q(\phi_j)}{N-m+1}

Therefore, the Quantized Dynamical Entropy H(m, r) can be defined as follows:

H(m, r) = -\sum_{j} p(\phi_j) \log_2 p(\phi_j)

The unit of QDE is bits, due to the base-2 logarithm. Since the calculation of QDE depends only on finite parameters, in contrast to the difficult and data-sensitive limit calculations required by entropy measures such as the KS entropy, QDE paves the way to much easier calculation of entropy measures.
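A self-contained QDE sketch is given below. It counts distinct quantized length-m vectors directly, which matches the intent of counting vector identifiers, and the parameter values in the example are assumptions.

import numpy as np
from collections import Counter

def qde(x, m, r):
    """Quantized Dynamical Entropy H(m, r) in bits."""
    x = np.asarray(x, dtype=float)
    xq = np.floor((x - x.min()) / r).astype(int)      # coarse quantization X_q
    # Q(phi): occurrences of each distinct quantized length-m vector
    counts = Counter(tuple(xq[j:j + m]) for j in range(len(xq) - m + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()                                      # p(phi) = Q(phi) / (N - m + 1)
    return -np.sum(p * np.log2(p))                    # base-2 logarithm gives bits

# A regular signal should score lower than an irregular one
t = np.linspace(0, 20 * np.pi, 2000)
rng = np.random.default_rng(3)
print(qde(np.sin(t), m=2, r=0.1))
print(qde(rng.normal(size=2000), m=2, r=0.1))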

2.3 Quantized Approximation of Sample Entropy


Similar to QDE, the counts of vector identifiers are used to approximate sample entropy. Sample entropy is calculated from the number of vectors X_i^m that match another vector X_j^m, where two vectors are considered to match if the Chebyshev norm between them is less than a pre-determined value r. In QASE, two vectors are considered to match each other if their corresponding vector-identifier values are the same.

This approach can be illustrated with the help of a hypothetical time series. Figure 01 shows two hypothetical vectors (one in red circles and the other in orange diamonds) with the embedding dimension m set to 3.


Figure 01. Comparison of Sample Entropy and QASE [7]


In sample entropy, two vectors are matched according to the Chebyshev norm. The two vectors shown above will therefore be considered a match in the sample entropy calculation, because the maximum distance between the vectors occurs at the first point and it is less than r. In QASE, the quantization parameter is chosen as 2r; therefore the two vectors will also be a match, because both vectors will be assigned the same vector identifiers.

It is necessary to note that this one-to-one agreement between SampEn and QASE will only occur for data perfectly centred within the quantization bins. If that is not the case, false negatives and false positives can occur, as illustrated in Figure 02.


Figure 02. False positives and false negatives in QASE [7]


False positives can occur when two data points are at two extremes of the same
quantized bin as shown in Figure 02. Then those two data points will be a match in
QASE (because the distance is less than 2r), but it will be a mismatch in Sample due
to more than r distance between two points. False negative occurs when two points
closer than r (then match in Sample) are happen to be allocated in two bins. Then
those two points will produce different vector identifier values hence mismatch in
QASE. There will be similar number of false negatives and false positives in evenly
distributed data. But for other data distributions QASE is an inaccurate measure
deviated from real SampEn value.
Now, the sample entropy calculation can be restated as follows:

SampEn(m, r, N) = -\ln \frac{A^{m+1}(r)}{B^m(r)}

where

B^m(r) = (N-m)^{-1} \sum_{i=1}^{N-m} B_i^m(r)

A^{m+1}(r) = (N-m)^{-1} \sum_{i=1}^{N-m} A_i^{m+1}(r)

B_i^m(r) = \frac{\text{number of vectors } X_j^m \text{ within } r \text{ of } X_i^m}{N-m-1}

A_i^{m+1}(r) = \frac{\text{number of vectors } X_j^{m+1} \text{ within } r \text{ of } X_i^{m+1}}{N-m-1}
Then, the SampEn vector matches are approximated with the counts of vector identifiers in order to arrive at QASE:

\hat{B}^m(2r) = (N-m)^{-1} \sum_{\phi_m} \frac{Q(\phi_m)\,[Q(\phi_m) - 1]}{N-m-1}

\hat{A}^{m+1}(2r) = (N-m)^{-1} \sum_{\phi_{m+1}} \frac{Q(\phi_{m+1})\,[Q(\phi_{m+1}) - 1]}{N-m-1}

The self-matches of vector identifiers are removed by deducting one from the number of occurrences of each vector identifier. QASE can then be written as

QASE(m, 2r) = -\ln \frac{\hat{A}^{m+1}(2r)}{\hat{B}^m(2r)}
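A self-contained QASE sketch in the same spirit follows; vectors are compared directly by their quantized values (matching the intent of comparing identifier values), the bin size is 2r as described above, and the example parameters are assumptions.

import numpy as np
from collections import Counter

def qase(x, m, r):
    """Quantized Approximation of Sample Entropy, QASE(m, 2r) = -ln(A_hat / B_hat)."""
    x = np.asarray(x, dtype=float)
    xq = np.floor((x - x.min()) / (2 * r)).astype(int)    # quantize with bin size 2r

    def match_sum(length):
        # Q(phi) for each distinct quantized vector, then sum Q*(Q-1):
        # the number of matching pairs with self-matches removed.
        counts = Counter(tuple(xq[j:j + length]) for j in range(len(xq) - length + 1))
        q = np.array(list(counts.values()), dtype=float)
        return np.sum(q * (q - 1))

    # The (N - m) and (N - m - 1) normalization factors cancel in the ratio
    return -np.log(match_sum(m + 1) / match_sum(m))

# Illustrative comparison on a noisy sinusoid
t = np.linspace(0, 20 * np.pi, 2000)
sig = np.sin(t) + 0.1 * np.random.default_rng(4).normal(size=t.size)
print(qase(sig, m=2, r=0.2 * np.std(sig)))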


CHAPTER 3 - ENTROPY FEATURES ANALYSIS AND COMPARISON

Graham's two entropies are analyzed alongside approximate entropy and sample entropy. The effect of data length (convergence of entropy values with the number of data samples), robustness to observational noise, and finally computational efficiency were analyzed. For the analysis, four MATLAB programs were implemented, one each for ApEn, SampEn, QDE and QASE.

3.1 Dependence on Data Length


A dynamic test signal was created by adding Gaussian noise to a sinusoidal time series (Figure 03). The Gaussian noise had a standard deviation of 0.2, and this noisy, irregular time series was used to test the dependence of the four entropies on data length. Figure 04 illustrates the convergence of the entropies.


Figure 03. Time series data

Figure 04. Dependence on Data length


It can be seen that both ApEn and QDE have similar trends and converge at around 1500 data samples. SampEn and QASE show convergence after 500 data points and have better convergence properties than ApEn and QDE. Among the four entropy measures, sample entropy has the best convergence properties. The superiority of SampEn over ApEn with respect to data length was also noted by Richman and Moorman [6].
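The kind of data-length test described in this section can be reproduced with a short script such as the one below, which builds the noisy sinusoid and evaluates an entropy estimate at increasing record lengths. It reuses the sampen() sketch from section 1.5, and the frequency, sampling step, and length grid are illustrative assumptions (the original study used MATLAB).

import numpy as np

# Sinusoid with additive Gaussian noise of 0.2 STD, as described above
rng = np.random.default_rng(5)
n_max = 2000
t = np.arange(n_max) * 0.05                     # assumed sampling step
signal = np.sin(t) + rng.normal(scale=0.2, size=n_max)

# Evaluate the entropy estimate at growing record lengths to watch convergence
for n in (250, 500, 1000, 1500, 2000):
    print(n, round(sampen(signal[:n], m=2, r=0.2), 3))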

3.2 Robustness to Observational Noise


The logistic model, with different levels of added Gaussian noise, was used to simulate the effect of observational noise on the entropy algorithms. The logistic model used was

x_{i+1} = r x_i (1 - x_i)

with the parameter r varied from 3.5 to 4. Four Gaussian noise levels were used: 0.00 (noiseless), 0.02, 0.06, and 0.20 STD. The noiseless model was used as the benchmark against which each entropy measure was analyzed with noisy data. The following figures illustrate the behaviour of each entropy under the noiseless condition and the three noise levels.

Figure 05a. Approximate Entropy with Noise
Figure 05b. Sample Entropy with Noise
Figure 05c. QDE with Noise
Figure 05d. QASE with Noise
From Figures 05a-d it can be seen that all entropy values get higher as the noise level goes up. SampEn and ApEn behave very similarly, and it is impossible to identify any significant difference between them just by observing the graphs; an error calculation between the noisy plots and the noiseless plot might reveal a measurable difference. It is possible to deduce from the graphs that both sample and approximate entropy are largely robust to small levels of noise. In real situations the practical noise level would typically lie between 0 and 0.06 STD, so the 0.20 level is unlikely to be observed in practice; that high level was nevertheless used to check the robustness of the entropies at higher noise levels.

In contrast, QDE and QASE are not as robust as the aforementioned entropies, although at the lower noise levels (0.02 and 0.06 STD) neither deviates much from the original noiseless shape of the graph. It is fair to say that QASE is more robust than QDE: QDE loses its robustness to noise at the 0.2 STD level, while QASE manages to keep the basic shape of the plot.
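The noisy logistic-map data and the parameter sweep behind Figures 05a-d can be sketched as follows; the transient length, the number of iterates, and the reuse of the earlier qase() sketch are assumptions made for illustration.

import numpy as np

def logistic_series(r, n=1000, transient=500, x0=0.4):
    """Iterate x_{i+1} = r * x_i * (1 - x_i), discarding an initial transient."""
    x, out = x0, np.empty(n)
    for i in range(transient + n):
        x = r * x * (1.0 - x)
        if i >= transient:
            out[i - transient] = x
    return out

# Sweep the logistic parameter and add observational Gaussian noise
rng = np.random.default_rng(6)
for noise_std in (0.0, 0.02, 0.06, 0.20):
    entropies = []
    for r_log in np.linspace(3.5, 4.0, 26):
        series = logistic_series(r_log) + rng.normal(scale=noise_std, size=1000)
        entropies.append(qase(series, m=2, r=0.2 * np.std(series)))
    print(noise_std, np.round(entropies[:5], 3))    # first few values of each sweep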

3.3 Computational Efficiency


The four entropies were then tested for their computational efficiency as a function of the number of data samples. Again the same logistic model was used, with the logistic parameter r set at 3.5. The simulations were run on a computer with an Intel Core i5 (2.26 GHz) CPU and 8 GB of RAM. During all four simulations, an effort was made to keep the full computational power dedicated to the simulation, without any bias toward any entropy.
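A rough way to reproduce this kind of timing comparison with the earlier sketches (logistic_series, sampen, and qase) is shown below. The data lengths and the use of Python's time.perf_counter are assumptions, so absolute times will differ from the MATLAB results reported here, even if the relative ordering should be similar.

import time
import numpy as np

def time_once(func, *args):
    """Wall-clock time of a single call to func(*args)."""
    start = time.perf_counter()
    func(*args)
    return time.perf_counter() - start

x_full = logistic_series(3.5, n=2400)
for n in (300, 600, 1200, 2400):
    x = x_full[:n]
    tol = 0.2 * np.std(x)
    print(n,
          "SampEn:", round(time_once(sampen, x, 2, tol), 3), "s",
          "QASE:", round(time_once(qase, x, 2, tol), 3), "s")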

Figure 06. Computational times with Data Length


The major objective behind the development of QDE and QASE is demonstrated markedly in Figure 06. Both of Graham's entropies are far faster (about 200 times faster for 9600 data samples) than sample and approximate entropy. This agrees with the observations made by Graham [7]. More demanding models with larger numbers of data samples would be needed to distinguish the computational efficiencies of QDE and QASE from each other.

CHAPTER 4 - CONCLUSIONS

Both sample entropy and the quantized approximation of sample entropy show better convergence with the number of data samples than approximate entropy and quantized dynamical entropy. Comparing SampEn and QASE, it was observed that SampEn performs slightly better than the latter. Therefore it can be concluded that, in terms of convergence performance, sample entropy performs the best, followed by QASE.

Approximate entropy and sample entropy both show robust characteristics against observational noise. At the maximum noise level among the tests (0.2 STD Gaussian noise), SampEn performs slightly better than ApEn.

QDE and QASE show lower robustness levels compared with the aforementioned entropies, and QDE loses its robustness completely at the highest noise level tested, 0.2 STD. Comparing all the trends visually, SampEn is the most robust to noise.

Graham's two entropies, QDE and QASE, outperform the other two existing entropies by a large margin in computational efficiency. This observation confirms that the main objective of developing these two novel entropies was achieved.

After considering all these facts, SampEn is the best entropy measure if computational time is not a factor. If computational time is very important in certain practical calculations, then QASE, which is an approximation of SampEn, is the best entropy to use.

As a summary, the development of entropy measures for dynamical systems, from Kolmogorov's initial formulation to the development of Graham's entropies, can be expressed as follows. In the early stages, the true complexity, or rate of information generation, was measured through methods that could not produce good results with short data lengths and noisy data samples. After gradual development, entropies such as ApEn and SampEn paved the way to handling short and noisy time series, and these entropy measures are capable of meaningfully approximating the true complexity levels of dynamical systems. Graham's work is an attempt to improve the computational efficiency of these entropies while maintaining their ability to approximate true complexity measures meaningfully.

References

[1] C. E. Shannon, "A mathematical theory of communication," SIGMOBILE Mobile Computing and Communications Review, vol. 5, no. 1, 2001.
[2] Y. G. Sinai, "On the Notion of Entropy of a Dynamical System," Doklady of the Russian Academy of Sciences, vol. 124, pp. 768-771, 1959.
[3] P. Grassberger and I. Procaccia, "Estimation of the Kolmogorov entropy from a chaotic signal," Physical Review A, vol. 28, no. 4, pp. 2591-2593, October 1983.
[4] S. M. Pincus, "Approximate entropy as a measure of system complexity," Proceedings of the National Academy of Sciences of the USA, vol. 88, no. 6, pp. 2297-2301, 1991.
[5] J. P. Eckmann and D. Ruelle, "Ergodic theory of chaos and strange attractors," Reviews of Modern Physics, vol. 57, no. 3, 1985.
[6] J. S. Richman and J. R. Moorman, "Physiological time-series analysis using approximate entropy and sample entropy," American Journal of Physiology: Heart and Circulatory Physiology, vol. 278, no. 6, pp. H2039-H2049, 2000.
[7] G. Leverick, "Entropy Measures in Dynamical Systems and Their Viability in Characterizing Bipedal Walking Gait Dynamics," M.Sc. thesis, University of Manitoba, 2013.

