

Smart Mobility Walker using BCI and Remote Control

DE – 35 (MTS)
HARIS, SHEHROZ,
WALEED,
ZAEEM

COLLEGE OF
ELECTRICAL AND MECHANICAL ENGINEERING NATIONAL
UNIVERSITY OF SCIENCES AND TECHNOLOGY RAWALPINDI
2017

DE – 35 (MTS)
PROJECT REPORT

BCI CONTROLLED ROBOTIC GAIT TRAINER

Submitted to the Department of Mechatronics Engineering in
partial fulfillment of the requirements
for the degree of
Bachelor of Engineering in Mechatronics

2017

Sponsoring DS:
Dr. Mohsin Tiwana
Brig. Dr. Javaid Iqbal

Submitted By:
Haris Sohail
Shehroz Khalid
Waleed Pervaiz
Zaeem Khan

ACKNOWLEDGEMENT

First of all, we are thankful to Allah almighty for our good health and wellbeing which
was essential for successfully achieving our deliverables.

We want to express our sincere thanks to our supervisor, Dr. Mohsin Islam Tiwana, for
his guidance and unending support throughout the course of the project. Without his
help, we would not have been able to achieve success. He provided us research
facilities and direction to work in a systematic way.

We would also like to thank Brig. Dr. Javaid Iqbal, Dean of the faculty, for his words of
encouragement that always motivated us to achieve the unthinkable and persevere
towards greatness.

We also thank Sir Usman Asad, Degree Coordinator, for being a helping hand
whenever we faced issues of any kind. He has been a great support, and his
encouragement drove us to see the project through to the end.

We also thank our parents for their continuous motivation, prayers and moral support.
Lastly, we would like to thank our colleagues who helped us overcome the problems
we faced throughout the project.

ABSTRACT

The commercially available standard walker is useful in helping patients but inefficient
for patients with weak upper limbs. Moreover, the current solutions for people
suffering from Spinal Cord Injury (SCI) are not based on rehabilitation. In this paper, we
propose a smart walker for human gait training. The walker can be controlled through
the user's brain commands via a Brain-Computer Interface (BCI), through an Android
smartphone, or manually through a joystick or buttons. The embedded system used
for all the processing is an ordinary Android phone. After the data is acquired
through the Emotiv EPOC headset, it is processed on the phone, and the extracted
features are then translated into motion commands for the motors through a
microcontroller.

This walker aims to provide an alternative to the currently available solutions,
which fall short when it comes to rehabilitation and recovery. Patients with SCI have
damaged neural pathways and are unable to send motor instructions to their lower
extremities. By using BCI, the walker will aid in rehabilitating the patient and
recovering the damaged neural pathways. The walker was controlled using SSVEP
visual stimuli for real-time operation. This smart walker will give the patient a sense of
independence and can find applications in rehabilitation centers as well
as nursing homes.

Table of Contents
ACKNOWLEDGEMENT ............................................................................................................... ii
ABSTRACT................................................................................................................................. iii
LIST OF FIGURES .......................................................................................................................vii
Chapter 1 - INTRODUCTION ...................................................................................................... 1
1.1 Signal Acquisition ............................................................................................................ 1
1.1.1 Frequency bands ...................................................................................................... 2
1.1.2 Signal acquisition device .......................................................................................... 3
1.2 Signal Preprocessing and Feature Selection ................................................................... 6
1.2.1 Subsampling ............................................................................................................. 6
1.2.2 Frequency filtering ................................................................................................... 6
1.2.3 Channel Scaling ........................................................................................................ 6
1.2.4 Common Spatial pattern .......................................................................................... 6
1.3 Classification ................................................................................................................... 7
Chapter 2 - LITERATURE REVIEW ............................................................................................ 8
2.1 Classifiers ........................................................................................................................ 8
2.1.1 Linear Discriminant Analysis (LDA) .......................................................................... 8
2.1.2 Quadratic Discriminant Analysis (QDA) ................................................................... 8
2.1.3 Logistic Regression (LR) ............................................................................................ 8
2.1.4 Multi-Layer Perceptron (MLP) ................................................................................. 9
2.1.5 Gaussian Mixture Analysis (GMA)............................................................................ 9
2.2 Embedded systems ......................................................................................................... 9
2.2.1 Intel Compute Stick .................................................................................................. 9
2.2.2 Beagle Bone ........................................................................................................... 10
2.2.3 Arduino .................................................................................................................. 10
2.2.4 DS PIC ..................................................................................................................... 10
2.3 Feature Extraction......................................................................................................... 10
2.3.1 Fourier Transform .................................................................................................. 11
2.3.2 Linear Predictive Coefficients ................................................................................ 11
2.3.3 Wavelet Packet Decomposition ............................................................................. 11

2.3.4 Logarithmic Band Power ........................................................................................ 11
2.3.5 Energy .................................................................................................................... 11
2.4 Walkers ......................................................................................................................... 11
2.4.1 Fixed Height Walker ............................................................................................... 12
2.4.2 Variable Height Walker .......................................................................................... 12
2.4.3 Wheeled Walker .................................................................................................... 13
Chapter 3 - HARDWARE .......................................................................................................... 14
3.1 SolidWorks Design and CAD Model .............................................................................. 14
3.2 Drive Mechanism .......................................................................................................... 15
3.3 Tires............................................................................................................................... 16
3.4 Portability...................................................................................................................... 16
3.5 Height Adjustment ........................................................................................................ 16
3.6 Analysis ......................................................................................................................... 16
3.7 Clutching ....................................................................................................................... 18
3.7.1 Types of Clutches ................................................................................................... 18
3.7.2 Clutching in Walker ................................................................................................ 20
3.7.3 SolidWorks of Clutching ......................................................................................... 21
3.7.4 Ansys of Clutching .................................................................................................. 22
Chapter 4 - ELECTRONICS........................................................................................................ 26
4.1 H-BRIDGE ...................................................................................................................... 26
4.2 Buttons .......................................................................................................................... 27
4.3 PCB Board...................................................................................................................... 27
4.4 Battery........................................................................................................................... 28
4.5 Motors........................................................................................................................... 28
4.6 MPU6050 ...................................................................................................................... 28
Chapter 5 - SOFTWARE ........................................................................................................... 29
5.1. Methodology................................................................................................................ 29
5.2 Signal Acquisition .......................................................................................................... 30
5.3 Data Recording.............................................................................................................. 31
5.4 Signal Processing ........................................................................................................... 32
5.4.1 Temporal Filter....................................................................................................... 32
5.4.2 Chebyshev Filter..................................................................................................... 32

5.4.3 Butterworth Filter .................................................................................................. 33
5.4.4 Surface Laplacian ................................................................................................... 34
5.4.5 Common Spatial Filter............................................................................................ 36
5.4.6 Epoching ................................................................................................................. 37
5.4.7 Logarithmic Band Power ........................................................................................ 38
5.5 Classifiers ...................................................................................................................... 38
5.5.1 Linear Discriminant Analysis .................................................................................. 39
5.6 Support Vector Machine ............................................................................................... 43
5.6.1 History .................................................................................................................... 44
5.6.2 Definition ............................................................................................................... 44
5.6.3 Nonlinear Separability............................................................................................ 45
Chapter 6 - ANDROID .............................................................................................................. 48
6.1 Introduction .................................................................................................................. 48
6.2 Architecture .................................................................................................................. 48
6.3 Latest Version ............................................................................................................... 49
6.4 Market Features............................................................................................................ 50
6.4.1 Competitors ........................................................................................................... 50
6.4.2 Market share .......................................................................................................... 50
6.5 Development of Android App ....................................................................................... 51
6.5.1 Activity: .................................................................................................................. 51
6.5.2 View: ...................................................................................................................... 52
6.5.3 XML ........................................................................................................................ 52
6.5.4 Intent...................................................................................................................... 53
6.5.5 Android Manifest ................................................................................................... 53
6.5.6. Objective: .............................................................................................................. 53
6.5.7 The Project: Walk-Aid ............................................................................................ 53
Chapter 7 - RESULTS................................................................................................................ 56
CONCLUSION........................................................................................................................... 61
REFERENCES ............................................................................................................................ 62
ANNEXURE A ........................................................................................................................... 65
ANNEXURE B ........................................................................................................................... 67

LIST OF FIGURES

Figure 1: Electrode placement on the scalp ............................................................................. 2


Figure 2: The EPOC headset by Emotiv (Left). Electrode placement on the head (Right) ........ 3
Figure 3: Control Panel of the research SDK by Emotiv ............................................................ 4
Figure 4: Test Bench of the research SDK by Emotiv showing Raw Values .............................. 5
Figure 5: Plotting of AF3 after application of FFT with HANNING window on Test Bench ...... 5
Figure 6: Commercially available fixed height walker ............................................................ 12
Figure 7: Variable height walker available in the market ....................................................... 13
Figure 8: Wheeled Walker available in the market ................................................................ 13
Figure 9: Model of the Walker designed in SolidWorks ......................................................... 14
Figure 10: Worm Geared motor coupled with the wheel ...................................................... 16
Figure 11: Analysis shown using ANSYS shows the forces acting on the walker .................... 17
Figure 12: The static structural analysis being shown using ANSYS that models equivalent
stress ....................................................................................................................................... 17
Figure 13: The static structural analysis being shown using ANSYS that models equivalent
strain ....................................................................................................................................... 18
Figure 14: Electronic Clutches................................................................................................. 20
Figure 15(a): Mechanical Friction Clutches............................................................................. 20
Figure 15(b): Mechanical Friction Clutches ............................................................................ 21
Figure 16(a): SolidWorks of Clutching..................................................................................... 21
Figure 16(b): SolidWorks of Clutching .................................................................................... 21
Figure 16(c): SolidWorks of Clutching ..................................................................................... 22
Figure 17(a): Stress Analysis of Walker Shaft ......................................................................... 22
Figure 17(b): Stress Analysis of Walker Shaft ......................................................................... 23
Figure 17(c): Deformation Analysis of Walker Shaft ............................................................... 23
Figure 17(d): Deformation Analysis of Walker Shaft .............................................................. 24
Figure 17(e): Deformation Analysis of Walker Shaft .............................................................. 24
Figure 17(f): Deformation Analysis of Walker Shaft ............................................................... 25
Figure 17(g): Deformation Analysis of Walker Shaft .............................................................. 25
Figure 18: Block/flow diagram depicting the working of our system ......................... 26
Figure 19: Proteus design of the Schematic of H-Bridge ........................................................ 27
Figure 20: Button switch on walker ........................................................................................ 27
Figure 21: ARES design of the PCB Layout .............................................................................. 28
Figure 22: Block diagram depicting the classification of the BCI ............................................ 30
Figure 23: The timing scheme followed to record data............................................ 31
Figure 24: The box properties of Temporal Filter ................................................................... 32
Figure 25: Formula used in Chebyshev Filter .......................................................................... 33
Figure 26: Output response after Chebyshev filter ................................................................ 33
Figure 27: Formula used in Butterworth Filter ....................................................................... 33
Figure 28: Output response after Butterworth Filter ............................................................. 34
Figure 29: Comparison of EEG data: with Butterworth filter (Right) and without filter (Left) ......... 34
Figure 30: Box Configuration of the Surface Laplacian ........................................................... 35
Figure 31: Channel selector box configuration ....................................................................... 36
Figure 32: CSP trainer box configuration settings .................................................................. 37
Figure 33: Epoching box configuration settings...................................................................... 38
Figure 34: OpenVibe: Logarithmic band power application using Simple DSP..................... 38
Figure 35: Example graph of the LDA ..................................................................................... 42
Figure 36: Example analysis table of the LDA ......................................................................... 42
Figure 37: Class separation using SVM ................................................................................... 45
Figure 38: Example of separations using SVM ........................................................................ 46
Figure 39: Different versions of android ................................................................................. 48
Figure 40: Classifications of the Android ................................................................................ 49
Figure 41: Popularity graph of Android .................................................................................. 51
Figure 42: Flow diagram of an Android app ........................................................................... 52
Figure 43: Creating a project in Android studio ...................................................................... 54
Figure 44: Main Activity layout ............................................................................................... 54
Figure 45: Motion Activity ...................................................................................................... 55
Figure 46: Mode Selection ...................................................................................................... 55
Figure 47: SolidWorks of Clutching Mechanism ..................................................................... 56
Figure 48: Stress Analysis on Clutching Mechanism ............................................................... 56
Figure 49: Stress Analysis on Clutching Mechanism ............................................................... 57
Figure 50: Clutching Mechanism on Walker ........................................................................... 57
Figure 51: Complete Manufactured Walker ........................................................................... 58
Figure 52(a): Unfiltered Signal on Matlab............................................................................... 59
Figure 52(b): Filtered Signal on Matlab .................................................................................. 59
Figure 53: Android Application ............................................................................................... 60

Chapter 1 - INTRODUCTION

The Brain-Computer Interface (BCI) is still a relatively young and developing area within
the vast field of engineering. More and more work is being carried out to process human
brain signals and extract relevant information from them. It is sometimes also called a
Brain-Machine Interface (BMI), because real-time BCI systems interact with machines
rather than just computers. BCI uses electrodes placed on the scalp to read brain signals
through a computer. The brain involuntarily generates various signals that add noise to
the features we need when reading this data. There are two types of BCI: 1) invasive
and 2) non-invasive. For the purpose of our study we discuss only the non-invasive
type, applying the techniques of electroencephalography (EEG) to record and analyze
the data.

1.1 Signal Acquisition


The first step in any BCI system is the acquisition of data through the electrodes placed
on the user's scalp: raw data is collected through the electrodes and processed later.
The instructions from the human brain are transmitted via neurons, which become
excited by the current flow during synaptic excitation of dendrites. The electrode
sensors are placed on the scalp to measure this electrical activity; since they measure
electrical activity, the signal output is the potential difference between an active
electrode and a reference electrode. Electrode placement on the scalp is extremely
important and follows the 10-20 system [1], which is standardized by the American
Electroencephalographic Society. The reference points are the Nasion, located between
the eyes, and the Inion, located above the neck at the base of the skull. As shown in
Figure 1, the median and transverse planes are drawn between these two points with
intervals of 10% and 20% between electrode positions. A letter denotes each electrode
according to its placement on the scalp.

Figure 1: Electrode placement on the scalp (10-20 system)

Silver chloride (AgCl) is commonly used to make the electrodes [2]. To improve
electrode–scalp contact, an EEG gel, essentially a saline solution, is used. These
electrodes are called wet electrodes because they require a conductive medium to
acquire the signals. Apart from this gel, other conductive pastes or liquids can also be
used to reduce the impedance between the scalp and the electrode. In order
to get an accurate signal, the contact between the electrodes and scalp is extremely
important, and the impedance must be in the range of 1-10 kΩ [3]. Electrodes made
of titanium and steel are dry electrodes and do not require the application of gel [4].

1.1.1 Frequency bands


The signals acquired through the electrodes lie in different frequency bands and can be
classified based on this difference in frequency. In descending order of frequency, these
ranges are Gamma, Beta, Alpha, Theta and Delta.

• Delta waves lie below 4 Hz. These low frequencies are usually recorded when the user
is in deep sleep, but they can easily be mistaken for muscle activity (EMG), which also
produces low-frequency components.
• Theta waves lie in the 4 Hz to 7 Hz band. Some are observed in adults when fully
awake; otherwise they appear during states of drowsiness in adults and children. They
are observed most during cognitive processes [5] or meditative concentration [6][7].
• Alpha waves are attributed to visual processing in the brain and lie in the 8 Hz to
12 Hz band. During a state of rest, i.e. when the eyes are closed or the body is calm
and relaxed, their amplitude tends to rise [8]. Some evidence shows that mental
exertion suppresses Alpha activity [9]. Mu rhythms, which lie in the same frequency
band, can be mistaken for Alpha waves [10][11]; however, the Mu rhythm is associated
with the motor activity of the brain.
• Beta waves lie in the 12-30 Hz band and are observed during weak muscle
contractions. They are associated with motor activity: although symmetrically
distributed during rest or in the absence of movement, they become desynchronized
during muscle activity [12][18].
• Gamma waves lie in the 30 to 100 Hz band and are associated with perception and
specific motor functions; they are observed during maximal muscle contraction [13].

1.1.2 Signal acquisition device


In order to acquire the EEG data from the scalp, the EPOC headset by Emotiv is used.
The EPOC is a wearable device and is fairly easy to use. It is a 16-channel
headset with 14 wet, active electrodes and 2 reference electrodes. The locations of the
active electrodes are given in Figure 2, denoted by letters and numbers according to
their position on the scalp: AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8
and AF4. For mobile applications the data must be transmitted wirelessly, which is
achieved through Bluetooth.

Figure 2: The EPOC headset by Emotiv (Left). Electrode placement on the head (Right) (Emotiv
Testbench)

The electrodes used in the EPOC headset are wet electrodes and are detachable from
the headset for flexibility. The sensors require a conductive medium to improve contact
with the scalp, and a saline solution is used to wet the electrodes.

The raw data is accessible through a research SDK provided by Emotiv, along with its
Test Bench and Control Panel. The Control Panel of the SDK is used for testing the
quality of the signals as well as the contact of the electrodes. Signal quality and
electrode contact are represented by different colors: perfect electrode contact is shown
in green (Figure 3), yellow and orange denote poor contact between scalp and
electrode, and red denotes no contact at all. All of the channels turn black if the two
reference electrodes, DRL and CMS, are making poor contact. Every sensor of the
headset provides a potential difference, its value being electrically subtracted from the
value of the two reference electrodes [45].

Figure 3: Control Panel of the research SDK by Emotiv (Emotiv TestBench)

In order to analyze and plot the raw values of the headset and its 14 channels, the Test
Bench is used. It also gives access to the values of the gyroscope in the headset.

Figure 4: Test Bench of the research SDK by Emotiv showing Raw Values

The Test Bench also helps in visualizing the data from each electrode in the frequency
domain. This is done by applying an FFT with a choice of Hanning, Hann, Blackman,
Hamming or rectangular windowing.
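The Test Bench's frequency-domain view can be approximated offline. A hedged sketch of the same operation, a Hann window followed by an FFT, applied to one channel of raw data (the function name and test signal are illustrative, not part of the Emotiv SDK):

```python
import numpy as np

def windowed_spectrum(channel, fs=128):
    """Return (frequencies, magnitudes) of a Hann-windowed FFT,
    mirroring the Test Bench's frequency-domain display."""
    windowed = channel * np.hanning(len(channel))  # taper to reduce spectral leakage
    mags = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(channel), d=1.0 / fs)
    return freqs, mags

# Example: the spectrum of a 12 Hz tone peaks near 12 Hz.
t = np.arange(0, 4, 1 / 128)
freqs, mags = windowed_spectrum(np.sin(2 * np.pi * 12 * t))
peak_hz = freqs[np.argmax(mags)]
```

The window matters because EEG segments are finite: without tapering, the abrupt edges of each segment smear energy across neighboring frequency bins.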

Figure 5: Plotting of AF3 after application of FFT with HANNING window on Test Bench

1.2 Signal Preprocessing and Feature Selection
For non-invasive signal acquisition, the electrodes are placed on the scalp, which the
signals reach by passing through the skull. Since the signals arriving at the scalp are
weak and poor in quality, they have to be improved by amplification and by eliminating
background noise. The preprocessing steps used to obtain the desired signals are
frequency filtering, subsampling, channel scaling and Common Spatial Pattern.

1.2.1 Subsampling
The data in this system must be analyzed in real time in order to control the walker.
The Emotiv EPOC streams samples at a rate of 128 Hz. By reducing the dimensionality
of the data, we can shorten the processing time, and thus the computational load,
without information loss. To preserve the essential information, the reduced sampling
rate must stay above the Nyquist rate, i.e. at least twice the highest frequency of
interest.
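As an illustration of the Nyquist constraint: if only the 12-20 Hz SSVEP band matters, the 128 Hz stream can be halved to 64 Hz, since 64 Hz is still above twice the 20 Hz upper edge. A sketch using SciPy's `decimate`, which low-pass filters before subsampling to prevent aliasing (the factor and band edge here are illustrative choices, not fixed by the design):

```python
import numpy as np
from scipy.signal import decimate

fs = 128            # Emotiv EPOC sampling rate
highest_hz = 20     # top of the band of interest (SSVEP, illustrative)

factor = 2          # 128 Hz -> 64 Hz, still above the 40 Hz Nyquist requirement
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 15 * t)      # 15 Hz test component inside the band

downsampled = decimate(eeg, factor)   # anti-alias filter + subsample
new_fs = fs / factor
assert new_fs >= 2 * highest_hz       # Nyquist criterion preserved
```

Halving the rate halves the number of samples every later stage must process, which is what buys the real-time margin on a phone-class processor.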
1.2.2 Frequency filtering
To eliminate the 50 Hz power-line noise, a band-pass filter must be used. It also serves
to suppress baseline wander in the electrodes. A band-pass filter, either Chebyshev or
Butterworth, must be used because the frequency of each mental command differs from
the others. For instance, the frequency for Steady-State Visually Evoked Potentials
(SSVEP) is between 12 Hz and 20 Hz, while for motor imagery it lies between 10 Hz
and 17 Hz.
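A 12-20 Hz Butterworth band-pass of the kind described above might be implemented as follows with SciPy; the filter order is our choice, not something the design fixes:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 128
low, high = 12.0, 20.0                  # SSVEP band from the text

# 4th-order Butterworth band-pass; filtfilt runs it forward and
# backward so the filtered signal has zero phase distortion.
b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")

t = np.arange(0, 2, 1 / fs)
raw = np.sin(2 * np.pi * 15 * t) + np.sin(2 * np.pi * 50 * t)  # signal + mains noise
clean = filtfilt(b, a, raw)             # 50 Hz component is strongly attenuated
```

Because the 12-20 Hz passband excludes 50 Hz, the mains interference is removed by the same filter that isolates the SSVEP band; a separate notch filter is not needed in this configuration.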

1.2.3 Channel Scaling


To recover the actual readings from the electrode signals, channel scaling must be
implemented, because increased electrode resistance inflates the voltage readings.
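One common way to realize such scaling is per-channel standardization: subtract each channel's mean and divide by its standard deviation. The exact gain correction the hardware requires may differ, so this is only a sketch:

```python
import numpy as np

def scale_channels(data):
    """Standardize each channel (row) to zero mean and unit variance,
    so differing electrode resistances don't bias later stages."""
    mean = data.mean(axis=1, keepdims=True)
    std = data.std(axis=1, keepdims=True)
    return (data - mean) / std

# Two channels with very different offsets and gains become comparable.
channels = np.array([[10.0, 12.0, 14.0, 12.0],
                     [100.0, 300.0, 500.0, 300.0]])
scaled = scale_channels(channels)
```

After scaling, variance-based features (such as the logarithmic band power used later) reflect brain activity rather than per-electrode gain differences.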

1.2.4 Common Spatial pattern


Fourier analysis is used to separate the various frequency waves since, as mentioned
above, these waves fall into different frequency bands. The Common Spatial Pattern
method then transforms the signal into two classes: one of minimum variance and one
of maximum variance [25].
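A minimal NumPy sketch of the CSP computation, via whitening of the composite covariance followed by diagonalization of one class (the synthetic two-channel data is ours, purely to exercise the code):

```python
import numpy as np

def csp_filters(trials_a, trials_b):
    """Common Spatial Pattern filters for two classes of EEG trials.

    trials_* have shape (n_trials, n_channels, n_samples). The returned
    rows are spatial filters: the first maximizes variance for class A
    while minimizing it for class B (and the last does the opposite).
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    c_a, c_b = mean_cov(trials_a), mean_cov(trials_b)

    # Whiten the composite covariance, then diagonalize class A in the
    # whitened space (the standard CSP derivation).
    evals, evecs = np.linalg.eigh(c_a + c_b)
    whitener = evecs @ np.diag(evals ** -0.5) @ evecs.T
    evals_a, evecs_a = np.linalg.eigh(whitener @ c_a @ whitener.T)
    order = np.argsort(evals_a)[::-1]      # sort by class-A variance
    return evecs_a[:, order].T @ whitener

# Synthetic check: class A is strong on channel 0, class B on channel 1.
rng = np.random.default_rng(0)
trials_a = rng.normal(size=(20, 2, 100)) * np.array([[3.0], [1.0]])
trials_b = rng.normal(size=(20, 2, 100)) * np.array([[1.0], [3.0]])
W = csp_filters(trials_a, trials_b)
va = np.mean([np.var(W[0] @ t) for t in trials_a])  # large for class A
vb = np.mean([np.var(W[0] @ t) for t in trials_b])  # small for class B
```

The variance of each filtered signal then serves directly as a feature: high variance on the first filter indicates class A, high variance on the last indicates class B.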

1.3 Classification
Classification is done after the pre-processing and is the most important step. The brain
follows certain set patterns when it perceives the same command, so classification helps
anticipate the interpretation of those thought patterns. For instance, when the EEG
records a certain neurological signal, the input is compared against a database with the
help of the classifier, and the closest match is computed. This allows brain commands
to be deduced [13][15]. Classification will not always be 100% accurate, and accuracy
differs across algorithms. Most of the classifiers available are based on the following:
Quadratic Discriminant Analysis (QDA), Linear Discriminant Analysis (LDA), Support
Vector Machine (SVM) and Gaussian Mixture Model (GMM).

Chapter 2 - LITERATURE REVIEW

2.1 Classifiers
The data needs to be separated into feature classes, and classifiers are used for this purpose. The following classifiers are commonly used:

2.1.1 Linear Discriminant Analysis (LDA)


Linear Discriminant Analysis is the most popular of these classifiers because of its low complexity and great stability: its performance is not affected by minute variations in the training data. LDA uses hyperplanes to separate the data and, because of its good results, is well suited to real-time BCI systems. Features extracted through this method are a linear combination of the original features [19].
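As an illustration of the idea, the following toy sketch implements Fisher's two-class discriminant in numpy on synthetic two-dimensional features; it is a sketch of the technique, not this project's implementation.

```python
import numpy as np

# Two synthetic feature clouds standing in for, e.g., left- vs right-hand features.
rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))   # class 0 samples
X1 = rng.normal([2.0, 2.0], 0.5, size=(50, 2))   # class 1 samples

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) + np.cov(X1.T)                 # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)                 # discriminant direction Sw^-1 (m1 - m0)
threshold = w @ (m0 + m1) / 2                    # midpoint between projected class means

def predict(x):
    """Project x onto w and compare against the midpoint threshold."""
    return int(w @ x > threshold)
```

Projecting every sample onto the single direction w is exactly the "linear combination of the original features" referred to above.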

2.1.2 Quadratic Discriminant Analysis (QDA)


Quadratic Discriminant Analysis assigns each feature to the class it matches with the highest probability [17]. To achieve this, a Naïve Bayes classifier is employed, which assumes that every feature is independent of the others. The posterior probability of each feature is calculated using Bayes' rule, and the feature is then assigned to its class by the Maximum A Posteriori (MAP) rule. Because of these calculations, this classifier is, to some extent, computationally expensive [26].

2.1.3 Logistic Regression (LR)


This is a discriminative classifier used to estimate the parameters of the posterior distribution function [12]. The posterior distribution P(c | x) is given by the following equation.
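For the binary case, P(c | x) takes the standard logistic form, where the weight vector w and bias b are the parameters to be estimated:

```latex
P(c = 1 \mid x) = \sigma\left(w^{\top}x + b\right) = \frac{1}{1 + e^{-(w^{\top}x + b)}}
```

The parameters w and b are typically fitted by maximum likelihood.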

2.1.4 Multi-Layer Perceptron (MLP)
Another classifier employed is the feed-forward neural network: a multi-layer perceptron with neurons in its hidden layer can approximate any function [12].

2.1.5 Gaussian Mixture Analysis (GMA)


Another classifier that assigns features to their classes is the Gaussian classifier [24]. It compares a known and an unknown set of vectors: the Gaussian model requires calculating the mean vector and covariance matrix, and the unknown vector is assigned to the class of the known vector it is closest to in probability.
Actuation is the next stage, to which the feature is transmitted as soon as the output is received and the command is decoded. An algorithm extracts the intended motor motion from the EEG signals, after which the appropriate logic input is given to the H-bridge to drive the motors, translating into brain-controlled motion of the mobility walker.

2.2 Embedded systems


Applications are developed on platforms known as embedded systems. The most frequently used embedded systems are the following:

2.2.1 Intel Compute Stick


This is a powerful embedded system with a quad-core Intel Atom Z3735F processor running Windows. It has 2 GB of on-board RAM for analyzing and processing brain signals in real time and about 32 GB of storage. It is powered by a 5 V 2 A AC-DC adapter. Other specifications include 802.11bgn Wi-Fi, a USB 2.0 port and Bluetooth 4.0. Its price is around $150.

2.2.2 Beagle Bone


It is an embedded system with an AM335x ARM Cortex-A8 processor clocked at 1 GHz and 512 MB of on-board RAM, running Linux. Other features include a USB port, an SD card slot and Ethernet support [29]. Its market price is around $90.

2.2.3 Arduino
Arduino is a platform with 54 digital I/O pins and 16 analog inputs. It is based on the AVR ATmega2560 clocked at 16 MHz, with 128 KB of flash memory, 8 KB of SRAM and 4 KB of EEPROM. It is programmed in a C++-based language. It is quite cheap, with a market price of about $15.

2.2.4 DS PIC
dsPIC devices come in many variants, but they commonly share a single-chip embedded processor whose high processing speed across various functions makes them well suited to BCI systems. Their price is similar to the Arduino's, i.e. about $15.

2.3 Feature Extraction


Features are unique patterns present in the signals that differ between classes; these patterns enable the classifiers to separate the signals effectively. Selection of a feature extraction method is probably the single most important factor in achieving high recognition performance [22].
Different feature extraction techniques exist, but the most commonly used are the following:

2.3.1 Fourier Transform
The Fourier transform decomposes signals obtained in the time domain into their constituent frequencies. It provides an effective way of recognizing mental thought patterns received through electroencephalography [27].
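As an illustration, the power in a chosen frequency band can be computed from the FFT of an epoch as follows; this is a sketch assuming numpy, and the band edges shown are illustrative.

```python
import numpy as np

FS = 128  # EPOC sampling rate (Hz)

def band_power(epoch, lo, hi, fs=FS):
    """Mean squared spectral magnitude of `epoch` between lo and hi Hz,
    computed with the real FFT (the Fourier-transform step described above)."""
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

t = np.arange(FS) / FS
epoch = np.sin(2 * np.pi * 12 * t)   # one second containing a pure 12 Hz component
```

For this epoch, the 10-17 Hz band power dominates while bands away from 12 Hz stay near zero, which is what makes such band powers usable as features.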

2.3.2 Linear Predictive Coefficients


As the name implies, this procedure uses a mathematical operation to predict future values of the EEG signal as a linear function of past values. The autocorrelation parameters are the operators commonly used, which in plain words amount to the root mean square of the signal inputs [28].

2.3.3 Wavelet Packet Decomposition


This technique expresses the signal as a sum of wavelets, which in turn allows multi-resolution feature extraction. It is a common choice for feature extraction in BCI systems.

2.3.4 Logarithmic Band Power


This technique is used to process data in the time domain. The feature is quite effective for motor imagery commands and is computationally light.

2.3.5 Energy
The brain generates varying energy densities, and shifts in these densities make it possible to detect certain thought patterns.

2.4 Walkers
The walker is the final output for which all of the aforementioned work is done, materializing the thoughts of the brain. Selecting the right walker is important, as walkers range from small to large and simple to complex.

2.4.1 Fixed Height Walker
A simple and common type of walker having no wheels as shown in figure 6. The
height of the walker is fixed which is selected by the user according to his/her
requirements.

Figure 6: Commercially available fixed height walker

2.4.2 Variable Height Walker


This walker has a 'lambda' shape. Unlike the fixed walker, its height can be adjusted via its screws. It likewise has no wheels coupled to its legs, as shown in figure 7.

Figure 7: Variable height walker available in the market

2.4.3 Wheeled Walker


Unlike the aforementioned walkers, this walker has wheels and comes in both 2- and 4-wheel versions. The wheels make it easier to move and reduce friction, which is why it is the top choice for a BCI application.

Figure 8: Wheeled Walker available in the market

Chapter 3 - HARDWARE

3.1 SolidWorks Design and CAD Model


A commercial walker available in the market is converted into a roller walker by attaching two motor-driven wheels, as shown in Figure 9. The figure shows our proposed walker, built by modifying a standard commercial walker. The two front wheels are each driven by a worm gear motor. A rechargeable battery pack supplies power to the motors, and a box attached to the walker houses the entire electronics.
To avoid toppling, the system includes both mechanical and electronic safety mechanisms. The mechanical mechanism is the worm gear drive, which stops the wheels whenever the motor stops running. The electronic mechanism measures the walker's angle: when the angle with respect to the ground becomes large, the motors move in that direction to prevent toppling.

Figure 9: Model of the Walker designed in SolidWorks

3.2 Drive Mechanism
Design requirements are listed below:
 To prevent the walker from toppling, which would be uncomfortable for the patient and might cause injury.
 In the idle state, the motors must halt the wheels so the walker cannot freewheel.
 The walker should help the patient stand, acting as a support.
 It should realign itself whenever it is about to topple.

Worm gear drive is opted for the abovementioned requirements.


A worm gear consists of a shaft with a spiral thread that engages and drives a toothed wheel, changing the plane of movement by 90 degrees. An electric motor applies torque to the worm, which rotates the worm wheel: the screw face pushes on the wheel's teeth, and the wheel in turn pushes against the load.
Reasons for choosing worm gears in this design are:
1. A worm gear can reduce speed and increase torque because its reduction ratio (teeth on the wheel per thread start on the worm) is very high.
2. A worm gear can be self-locking, i.e. incapable of reversing the drive direction. Because of the friction between the worm and the wheel, it is almost impossible for the wheel to drive the worm when force is applied.

The walker has two front wheels with worm gear motors attached. Because of the gears' self-locking, the walker stops in the idle condition, so no additional electrical or mechanical braking system is required; self-locking suffices.
An electronic gyroscope, the MPU6050, detects changes in angle. When the angle from the ground exceeds a specified limit, the motors move in that direction so the walker realigns itself.
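The realignment rule above can be sketched as a simple threshold on the measured pitch. The limit value and command names below are illustrative assumptions for this sketch, not the report's actual firmware values.

```python
# Anti-toppling rule: if the walker's pitch (from the MPU6050) exceeds a
# limit, drive both motors toward the lean so the frame rolls back under
# the user; otherwise the worm gears' self-locking holds the wheels.

TILT_LIMIT_DEG = 10.0  # assumed threshold, for illustration only

def balance_command(pitch_deg):
    """Map a measured pitch angle (degrees) to a motor command."""
    if pitch_deg > TILT_LIMIT_DEG:
        return "forward"   # leaning forward -> roll forward
    if pitch_deg < -TILT_LIMIT_DEG:
        return "reverse"   # leaning backward -> roll back
    return "hold"          # within limits: self-locking keeps the wheels still
```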
3.3 Tires
Since this walker is intended for clean, smooth, flat floors rather than rough surfaces, plastic tires are the practical option. Plastic tires with a diameter of 6 inches were therefore chosen, suitable for all flat, smooth surfaces, as shown in figure 10.

Figure 10: Worm Geared motor coupled with the wheel

3.4 Portability
The walker is designed to fold, making it comfortable to ship and transport. The drive assembly is designed primarily so that it can be installed on a commercial walker.

3.5 Height Adjustment


With ease, comfort and safety in mind, height is an important consideration: when the user stands upright, the elbow should be level with the top of the grip. For this purpose, the walker has eight height adjustment levels.

3.6 Analysis
To perform a static structural analysis of the walker, a CAD model was designed and analyzed in ANSYS. A 400 N load was applied on the walker's handles, assuming an average patient weight of 80 kg.
Figure 11: Analysis shown using ANSYS shows the forces acting on the walker

For the case of zero movement, i.e. the walker at rest, the supports and tires were modeled as fixed geometry. The peak stress on the tire's drive shaft, calculated in ANSYS, is 6.1005e7 Pa; the shaft is made of mild steel with a yield strength of 2.5e8 Pa.
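From these two figures, the implied factor of safety of the shaft can be checked directly:

```python
# Margin implied by the ANSYS result quoted above: yield strength of mild
# steel divided by the peak stress on the drive shaft.
peak_stress = 6.1005e7    # Pa, peak stress from the static analysis
yield_strength = 2.5e8    # Pa, mild steel yield strength
factor_of_safety = yield_strength / peak_stress   # ~4.1, comfortably above 1
```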

Figure 12: The static structural analysis being shown using ANSYS that models equivalent stress

The peak strain value calculated by ANSYS is 8.5e-4.

Figure 13: The static structural analysis being shown using ANSYS that models equivalent strain

3.7 Clutching
A clutch is a machine member used to connect a driving shaft to a driven shaft, so that the driven shaft may be started or stopped at will without stopping the driving shaft. A clutch thus provides an interruptible connection between two rotating shafts and allows a high-inertia load to be started with a small power [20].

3.7.1 Types of Clutches


Clutches are mainly divided into the following types:
3.7.1.1 Friction Clutches
These clutches work on the principle of the friction that exists between two rotating shafts when they come into contact. There are five different types of friction clutches.
3.7.1.1.1 Cone clutch
These clutches are simple in construction and are easy to disengage, however, the
driving and driven shafts must be perfectly coaxial for efficient functioning of the
clutch. This requirement is more critical for cone clutch compared to single plate
friction clutch. A cone clutch consists of two working surfaces, viz., inner and outer
cones.
3.7.1.1.2 Single plate clutch
It is the most common type of clutch, consisting of one clutch plate mounted on the splines of the clutch shaft and a pressure plate. When the engine runs and the flywheel rotates, the pressure plate rotates with it, since it is attached to the flywheel. The friction disc is located between the two.
3.7.1.1.3 Multi-plate clutch
As opposed to a single-plate clutch, a multi-plate clutch uses more than one clutch plate. With more plates the friction surface increases, which increases the clutch's capacity to transmit torque.
3.7.1.1.4 Semi-centrifugal clutch
It uses centrifugal force as well as spring force for keeping it in engaged position. The
springs are designed to transmit the torque at normal speeds while the centrifugal force
assists in torque transmission at higher speeds.
3.7.1.1.5 Centrifugal clutch
A centrifugal clutch is a clutch that uses centrifugal force to connect two concentric
shafts, with the driving shaft nested inside the driven shaft. It engages more at higher
speeds. The input of the clutch is connected to the engine crankshaft while the output
may drive a shaft, chain, or belt.
3.7.1.2 Fluid flywheel
Fluid flywheel clutches transfer energy from one rotor to the other by means of a fluid.
3.7.1.3 Electronic Clutches
These clutches are engaged and disengaged via electric current. We tried these clutches, but they did not provide enough torque, so we opted for mechanical friction clutches.

Figure 14: Electronic Clutches

3.7.2 Clutching in Walker


In this project, we used friction clutching with a coupling mechanism to engage and disengage the torque from the wheels.

Figure 15(a): Mechanical Friction Clutching

We employed three gears. One serves as the coupling gear: a cable mechanism pulls it forward and backward. Another gear is attached to the wheel, which can run free on its bearing. The third gear is on the motor. When the coupling gear is retracted, the wheel is free to move; when it is not retracted, the motor and wheel gears are meshed, and the wheel does not move unless the motor runs. The motor and wheel gears have 30 teeth each, while the small coupling gear has 12 teeth.

Figure 15(b): Mechanical Friction Clutching

3.7.3 SolidWorks of Clutching


The SolidWorks design of the clutching is shown in the figures below.

Figure 16(a): SolidWorks of Clutching


Figure 16(b): SolidWorks of Clutching

Figure 16(c): SolidWorks of Clutching

3.7.4 Ansys of Clutching


Stress analysis of the walker's clutching mechanism was performed in ANSYS.

Figure 17(a): Stress Analysis of the Walker Shaft

Figure 17(b): Stress Analysis of the Walker Shaft

Figure 17(c): Deformation Analysis of the Walker Shaft

Figure 17(d): Deformation Analysis of Walker Shaft

Figure 17(e): Deformation Analysis of Walker Shaft

Figure 17(f): Deformation Analysis of Walker Shaft

Figure 17(g): Deformation Analysis of Walker Shaft

Chapter 4 - ELECTRONICS

To keep our system portable, we use Android as our platform: any Android smartphone can serve as the embedded system by mounting it on the walker. Smartphones on the market have a plethora of features and are more powerful than the available embedded boards, so the required processing power is not a problem. Android smartphones come with all the basic functionality the system needs to connect with the other components of the walker, including Wi-Fi, Bluetooth, USB connectivity and a micro SD card slot. The Android app communicates with the Arduino to drive the motors according to the brain's instructions.

Figure 18: Block/flow diagram depicting the working of our system

The electronics board mounted on the walker is housed in a black box and includes the H-bridge motor driver, power regulators, an HC-05 Bluetooth module connecting the Arduino and smartphone, and a voltage meter that shows the battery level to the user.

4.1 H-BRIDGE
An H-bridge applies voltage across a load in either direction, providing forward and backward motor movement [41]. The H-bridge used to control the motor's direction consists of pairs of PNP and NPN Darlington transistors: to turn the motor in one direction, the diagonally opposite PNP (TIP147) and NPN (TIP142) transistors are switched on. The bases of the Darlington transistors are driven by NPN transistors (3904) in common-emitter (CE) mode, because the required base current is high.

Figure 19: Proteus design of the Schematic of H-Bridge
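The diagonal switching described above can be summarized as a small truth table. The following sketch uses illustrative labels (diagonal A/B) rather than the schematic's actual net names.

```python
# Illustrative truth table for the H-bridge: turning on one diagonal PNP/NPN
# pair drives the motor one way, the other diagonal reverses it, and both
# diagonals off stops the drive. Both diagonals on would short the supply.

def h_bridge_state(command):
    """Return (diagonal_A_on, diagonal_B_on) for a motor command."""
    states = {
        "forward": (True, False),   # one TIP147/TIP142 diagonal conducts
        "reverse": (False, True),   # the opposite diagonal conducts
        "stop":    (False, False),  # both diagonals off
    }
    if command not in states:
        raise ValueError("unknown command")
    a, b = states[command]
    assert not (a and b)            # guard against a shoot-through state
    return a, b
```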

4.2 Buttons
For manual control of the walker, a keypad is mounted on it with four states: forward, left, right and stop. The button feedback is given to the Arduino, which then signals the H-bridge to move the walker accordingly. Manual control can be used by patients with weak upper-limb muscles undergoing rehabilitation.

Figure 20: Button switch on walker

4.3 PCB Board


The electronics board was designed in ARES, the PCB layout tool of the Proteus suite, from which the printed circuit board was produced. The board comprises the H-bridge, the HC-05 Bluetooth module and the voltage regulation circuitry.

Figure 21: ARES design of the PCB Layout

4.4 Battery
Choosing a battery is an important part of the electronics design, and its specification should match the walker as a whole. We chose a 3-cell, 11.1 V, 2800 mAh LiPo battery.

4.5 Motors
For the drive we used 12 V power-window motors, commonly found in cars. They have a maximum stall current of 25 A with a stall torque of 8.82 N·m, a no-load speed of 95 RPM (±15 RPM) and a maximum no-load current of 2 A.

4.6 MPU6050
A single-chip sensor comprising a MEMS gyroscope and accelerometer. Its accuracy comes from 16-bit analog-to-digital conversion hardware on each channel. It is employed in the project to avoid the risk of the walker toppling.

Chapter 5 - SOFTWARE

5.1. Methodology
The data contains a variety of gestures mixed together, and each specific feature must be isolated through a process known as EEG signal translation. The problem lies with physical gestures such as clenching the jaw, flexing the neck muscles or blinking: these generate electrical spikes larger than the originally acquired brainwave data, which causes problems when the software analyzes and processes the signal. For instance, suppose we need motor imagery data via EEG. If the subject blinks or clenches the jaw during extraction, the composition of the data changes and the motor imagery data is corrupted by different signals. To offset this problem, we need to separate these artifacts from the thought process in the frequency domain. Filters solve this: they clean up the raw data and pass on only the frequency band of interest.
As mentioned above, Chebyshev and Butterworth band-pass filters can effectively separate the frequency band. Butterworth is chosen here because it neither amplifies nor attenuates the passband (its response is maximally flat), and it is easy to implement, with a range of 8-24 Hz. The information pertinent to the execution of motor imagery falls in the alpha and beta bands, both of which this Butterworth band-pass filter covers.
Even after frequency-based filtering, the required information is hard to extract accurately because the raw data fluctuates strongly, making separation difficult. Spatial filters are implemented to address this; the surface Laplacian is one such filter, in which the sensors are given particular weightings. To recover the relevant portion of the data, the factors containing the errors common to C3 and C4 are subtracted out. The signal is then carefully analyzed and divided according to the requirement via time-based epoching, which amplifies minute changes and ensures the entire data stream is analyzed. The epoched chunks are then assembled into a single feature vector, which the classifier takes as input. The final step is the classifier itself, whose main function is to sort the signal data into different classes; in our case, ordinary motion is attained with four basic classes. Classification data is gathered by showing the patient a set of arrows and asking him or her to envision the corresponding motion. Classifier training is the system's most vital part: data was taken from ten subjects of different genders and ages. After a preset time interval, a command sequence containing rest periods is generated for the subject, and arrows generated in random directions ensure that the data is collected effectively.

Figure 22: Block diagram depicting the classification of the BCI

5.2 Signal Acquisition


The Emokit library connects to the Emotiv EPOC headset and accesses its raw data through the Research Edition SDK. The raw values are sent over Bluetooth to the Android smartphone, where they are processed. The Emotiv EPOC samples at 128 Hz, i.e. 128 raw values per second.

5.3 Data Recording
To keep our experiments simple, we recorded a two-class motor imagery dataset from 10 subjects of different genders and age groups. Two datasets per subject were recorded in the morning to avoid work- and stress-related contamination of our data. Subjects were asked to relax and stay calm while focusing on the given training task. At the beginning of each recording, a fixation cross kept the subject focused on the stimulus; after a period of two seconds, cues started to appear in the form of right or left arrows. The arrow direction cued the subject to move the corresponding hand for an interval of 5 seconds, keeping the mind focused on the hand motion and ignoring any other thoughts. After these 5 seconds the cue disappeared, followed by a 2-second relaxation period, then the fixation cross and the next cue, and so the process continued. The order of the cues was randomized to reflect real-life use and to avoid contamination of the data through the subject's predictions.

Figure 23: The timing scheme followed to record data

A total of 20 experiments and trials was carried out for each class per subject, resulting in a total of 80 trials of each class per subject. The data was recorded into a CSV file, which can easily be read on Android; this file contains the raw data along with the stimulations of every performed trial.

5.4 Signal Processing
This stage allows the successful classification of the BCI data into target classes using different techniques [32]. In our case of motor imagery, we use it to train the classifiers to recognize various patterns in the data corresponding to motion of the left hand or right hand. Once trained, the classifier automatically sorts new, unseen data into the respective left- or right-hand class.

5.4.1 Temporal Filter


The purpose of this filter is to remove or attenuate certain frequencies that are not of interest [34]; it passes or stops frequencies in our data according to the requirement and use [16]. Temporal filters can exploit the correlation of frames to achieve high noise attenuation [23]. There are four basic types: high-pass, low-pass, band-stop and band-pass. A low-pass filter passes frequencies below a certain threshold, whereas a high-pass filter passes frequencies above that threshold and cuts off those below. As the names suggest, a band-pass filter passes frequencies within a fixed range, whereas a band-stop filter removes that range and passes the frequencies outside it. We implemented two such filters, Butterworth and Chebyshev.

Figure 24: The box properties of Temporal Filter

5.4.2 Chebyshev Filter


The Chebyshev filter has a passband ripple greater than 1.

Figure 25: Formula used in Chebyshev Filter

Figure 26: Output response after Chebyshev filter

5.4.3 Butterworth Filter


This filter was first described in 1930 by the British engineer Stephen Butterworth, in a paper on the theory of filter amplifiers [35]. Its passband ripple equals 1, meaning this filter has a flat passband response [21].

Figure 27: Formula used in Butterworth Filter

Figure 28: Output response after Butterworth Filter

This eliminates the 50/60 Hz powerline noise caused by the AC mains. Acceptable, indeed excellent, results were obtained in the 10-17 Hz range using the Butterworth filter, chosen because its passband ripple equals 1. Since efficient results are obtained between 10 and 17 Hz, the high-cut frequency is set at 17 Hz and the low-cut at 10 Hz.
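As a sketch, the 10-17 Hz Butterworth band-pass described above could be implemented as follows. The filter order is not stated in this report, so the 4th-order design here is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # EPOC sampling rate (Hz)

# 4th-order Butterworth band-pass with the cut-offs quoted above (10-17 Hz).
b, a = butter(4, [10, 17], btype="bandpass", fs=FS)

t = np.arange(2 * FS) / FS
raw = np.sin(2 * np.pi * 12 * t) + np.sin(2 * np.pi * 50 * t)  # signal + mains hum
clean = filtfilt(b, a, raw)  # zero-phase filtering keeps the 12 Hz component aligned
```

After filtering, the 12 Hz component (inside the passband) survives while the 50 Hz mains component is strongly attenuated.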

Figure 29: Comparison of EEG data: Butterworth filter (Right) and (Left) without filter

5.4.4 Surface Laplacian


The data obtained from the scalp via EEG is not clean and effective because it is not taken directly from the brain, so separating it is not easy. It is therefore imperative to eliminate noise: even minor unintentional activities such as blinking introduce considerable noise. The surface Laplacian comes in handy here, as it can effectively eliminate these noises.
In the 10-20 system, the C3 and C4 regions are where the signals are most discriminative, for instance for upper-limb motor imagery. Noise generated during upper-limb motor imagery appears not only at C3 and C4 but also at nearby sites: Cz, F3, P3, T3, F4, P4 and T4 [39]. We handle this by storing the data values from these areas (channels) and weighting them, giving the most discriminative sensor a higher weight than the others. After weighting, the weighted neighbor values are subtracted from the weighted discriminative sensor. One drawback is that some of the data is lost, but this is a good trade-off, since it makes separation much easier.

Figure 30: Box Configuration of the Surface Laplacian

Figure 30 above displays the Surface Laplacian box configuration. The first horizontal bar shows the spatial filter coefficients, i.e. the weight of every sensor. The semicolons between these values indicate that each coefficient is multiplied by its electrode and the products are then summed. This configuration has 2 output channels against 10 input channels: the 10 input channels are multiplied by the coefficients of the first row to give the first output, and the second output is obtained in the same fashion. The Channel Selector box chooses the channels.

Figure 31: Channel selector box configuration

C3, C4, FC3, FC4, C5, C1, C2, C6, CP3 and CP4 are the 10 input channels, as shown in figure 31. The two output channels are 4C3-FC3-C5-C1-CP3 and 4C4-FC4-C2-C6-CP4, obtained by applying the spatial filter coefficients 4;0;-1;0;-1;-1;0;0;-1;0 and 0;4;0;-1;0;0;-1;-1;0;-1.
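The two weighted combinations can be written as a single matrix product. The following sketch applies exactly the coefficients listed above to the stated channel order:

```python
import numpy as np

# Rows are the surface Laplacian coefficients above, acting on the channel
# order C3, C4, FC3, FC4, C5, C1, C2, C6, CP3, CP4.
# Row 1 yields 4*C3 - FC3 - C5 - C1 - CP3; row 2 is the C4 analogue.
W = np.array([
    [4, 0, -1, 0, -1, -1, 0, 0, -1, 0],
    [0, 4, 0, -1, 0, 0, -1, -1, 0, -1],
])

def surface_laplacian(samples):
    """samples: array of shape (10, n_samples) -> filtered (2, n_samples)."""
    return W @ samples
```

Note that each row sums to zero, so any potential common to all ten channels (a shared artifact) cancels exactly.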

5.4.5 Common Spatial Filter


In motor imagery-based Brain Computer Interfaces (BCI), discriminative patterns can be extracted from the electroencephalogram (EEG) using the Common Spatial Pattern (CSP) algorithm [31]. Like the surface Laplacian, it reduces noise, here by reducing the dimensionality of the data. It works by maximizing the variance between the classes, which helps separate the data effectively and with ease: the common spatial filter computes the projection with the highest variance for one class and the lowest variance for the other.

These equations can be solved as a generalized eigenvalue problem, solving for the augmented coefficients.
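A toy sketch of this generalized eigenvalue formulation follows; the covariance matrices used are illustrative stand-ins, not real EEG estimates.

```python
import numpy as np
from scipy.linalg import eigh

# CSP training as a generalized eigenvalue problem: find w maximizing
# w'C1w / w'(C1+C2)w, where C1, C2 are the per-class covariance matrices.
C1 = np.array([[4.0, 0.0], [0.0, 1.0]])   # class 1: high variance on channel 0
C2 = np.array([[1.0, 0.0], [0.0, 4.0]])   # class 2: high variance on channel 1

eigvals, eigvecs = eigh(C1, C1 + C2)      # solves C1 v = lambda (C1 + C2) v
# The eigenvector with the largest eigenvalue gives maximum variance for
# class 1 and minimum for class 2; the smallest gives the opposite.
w_max = eigvecs[:, -1]
w_min = eigvecs[:, 0]
```

These two extreme eigenvectors are the spatial filters kept as CSP features.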

Figure 32: CSP trainer box configuration settings

To separate and classify online data, the filter must first be trained, which is done by training the CSP.

5.4.6 Epoching
The designed code cannot separate the signals directly, because most of the BCI information is compressed and the patterns are hard to identify. To recognize the patterns, the signals are stretched along the time axis [40].

Figure 33: Epoching box configuration settings

The signal is therefore expanded along the time axis: a new one-second window is produced every 1/16 s, so sixteen windows are generated per second. Every window is one second long, and each successive window contains 1/16 s of new information.
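The windowing just described can be sketched as follows (Python/NumPy; the 128 Hz sampling rate is assumed here for illustration only):

```python
import numpy as np

def epoch(signal, fs, win_sec=1.0, step_sec=1.0 / 16):
    """Split a (n_channels, n_samples) signal into overlapping epochs:
    window length 1 s, a new epoch every 1/16 s, as configured in OpenViBE."""
    win = int(win_sec * fs)
    step = int(step_sec * fs)
    starts = range(0, signal.shape[1] - win + 1, step)
    return np.stack([signal[:, s:s + win] for s in starts])

fs = 128                      # assumed sampling rate for this sketch
sig = np.zeros((2, 2 * fs))   # 2 channels, 2 seconds of signal
epochs = epoch(sig, fs)
print(epochs.shape)  # (17, 2, 128)
```

Over two seconds this yields 17 windows: the initial one plus sixteen new windows per additional second, each offset from the last by 1/16 s.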

5.4.7 Logarithmic Band Power


To extract the signal's features, we employ the logarithmic band power, implemented by means of digital signal processing as
u(x) = log(1 + x^2)
The output signal u(x) is computed for each of the sixteen windows produced by the epoching stage.

Figure 34: Open Vibe: Logarithmic band power application using Simple DSP
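A minimal sketch of this feature extraction in Python/NumPy. Averaging the squared signal over each window is our assumption about how the power is aggregated; in the project the u(x) mapping is applied through OpenViBE's Simple DSP box:

```python
import numpy as np

def log_band_power(epochs):
    """Log band-power feature per epoch and channel: log(1 + mean(x^2)),
    following the u(x) = log(1 + x^2) mapping applied to the window power."""
    power = np.mean(epochs ** 2, axis=-1)
    return np.log1p(power)

epochs = np.ones((16, 2, 128))   # 16 one-second windows, 2 channels
features = log_band_power(epochs)
print(features.shape)  # (16, 2)
```

The logarithm compresses the large dynamic range of EEG band power, which makes the features better behaved for a linear classifier.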

5.5 Classifiers
The purpose of the classifier is to distinguish the incoming data and assign separate classes according to the motion features. In motor imagery, the classifier determines the intended motion of the person and assigns the features to either the right-hand class or the left-hand class [33].

5.5.1 Linear Discriminant Analysis


When dimensionality reduction or feature extraction is mentioned, the most popular technique that comes to mind is linear discriminant analysis [38]. This method is a generalization of Fisher's linear discriminant, used in pattern recognition, machine learning and statistics, and was developed by Ronald Fisher in 1936 [36]. The technique finds the linear combination of features that best separates two or more object classes [37].

5.5.1.1 Introduction
This method is fairly simple and mathematically robust, and its results are almost as good as those of more complex techniques. It performs dimensionality reduction: the number of variables is reduced while preserving as much of the class-discriminating information in the data as possible. LDA is applied by following the steps listed below:

 Define the groups
 Define the function for discrimination
 Estimate the discrimination function
 Test the discrimination function
 Apply the function

5.5.1.2 Algorithm
It is a binary classifier and the best results are achieved when separating two classes. The discriminant function as described by Fisher is:
Z = x1B1 + x2B2 + x3B3 + ... + xdBd
where x denotes the feature values and B the coefficient vector. The data is then split into two subsets, one for each class, and the separation score is computed from the subset means:
S(B) = (B^T u1 - B^T u2) / (B^T C B)
where u1 is the mean of the first subset, u2 is the mean of the second subset, C is the covariance matrix and B is the vector of coefficients.

The coefficients B are chosen so as to make this score as high as possible, while keeping the computation simple.

Once the coefficients are found, the effectiveness of the discrimination can be measured by computing the Mahalanobis distance between the two groups, indicated by a delta sign. A distance greater than 4 implies that the group means are more than four standard deviations apart, which indicates very little misclassification.

A new point is then classified by computing its discriminant score and assigning it to class C1 if the score falls on the corresponding side of the decision threshold.
5.5.1.3 Example
As an example, LDA can be used to separate a bank's customer dataset by finding a linear model that distinguishes the two classes, represented by the red and blue dots.

Figure 35: Example graph of the LDA

The class means (average vectors) and covariance matrices are used to compute the probability of each class.
Figure 36: Example analysis table of the LDA

Mahalanobis distance = 2.32

The two classes are separated quite nicely.

5.5.1.4 LDA in Five steps:


1. For each class, calculate the d-dimensional mean vectors.
2. Compute the two scatter matrices (within-class and between-class).
3. Calculate the eigenvectors of the scatter matrices.
4. Build the transformation matrix from the eigenvectors with the highest eigenvalues, placed in descending order.
5. Finally, project the samples onto the new subspace via this transformation matrix.
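The five steps above can be sketched in Python/NumPy. This is an illustrative implementation on synthetic two-class data, separate from the OpenViBE pipeline used in the project:

```python
import numpy as np

def lda_projection(X, y, n_components=1):
    """The five LDA steps: class means, within/between-class scatter,
    eigen-decomposition, and projection onto the discriminant axis."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += Xc.shape[0] * (diff @ diff.T)
    # Eigenvectors of Sw^-1 Sb, largest eigenvalues first.
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1]
    W = vecs[:, order[:n_components]].real
    return X @ W   # samples projected onto the discriminant axis

# Two well-separated 2-D classes.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
Z = lda_projection(X, y)
```

On the projected axis the two class means end up many within-class standard deviations apart, which is precisely the large Mahalanobis distance the section above describes.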

5.5.1.5 Limitations
 LDA does not work well on nonlinear or complex problems.
 LDA struggles when the variability of the objects in the separate classes is significantly dissimilar.
 Strongly non-Gaussian distributions are another limitation.
 The discriminatory information must lie in the means of the data; otherwise LDA will not work.

5.6 Support Vector Machine


Signals that share similar features and characteristics must be broken down by those features and classified accordingly. Different methods have been developed to accomplish this task to the required specification. Our main task is to differentiate the motor cortex signals and classify them into four different classes; the results can be assessed with a confusion matrix. LDA, nu-SVM and SVM (Support Vector Machine) are the classifiers we employed and tested in our system.

The purpose of testing all these classifiers is to find out which works best in terms of accuracy and ease of use. The accuracy is not determined solely by the separation performed after processing; it is also affected by the preprocessing. By altering these parameters, the resulting accuracy can be changed.

5.6.1 History
SVM was first formulated in 1963 by Vladimir N. Vapnik and Alexey Ya. Chervonenkis. The drawback at that time was that it worked well only for linearly separable data and was not successful on complex separability [30]. Complex separability did materialize in 1992, when Vapnik, together with Bernhard E. Boser and Isabelle M. Guyon, was the first to introduce kernel functions, which deal with nonlinear (complex) separability.

5.6.2 Definition
Support Vector Machine (SVM) works by having the input data and then arranging
them according to the alterations it is able to get. Now the subsequent task is
relatively difficult than the first because of the exceptionalities involved which is the
main reason the accuracy gets lowered.

Many classifiers are operated in such a way that they develop borders. These borders
are formed because of the highest separability. SVM is operated such that the inputs
have more space by having the classes to get the maximum area for themselves. The
hyperplane’s width is defined by the support vectors. Support vectors lie at the
borders.

Figure 37: Class separation using SVM

5.6.3 Nonlinear Separability


The main concern with nonlinear separability is that the data cannot be separated linearly, although the concept itself is not hard to grasp. If the data can be mapped to a dimension higher than the original one, the problem can be solved, and this is essentially the only feasible solution. This is where kernel functions become important: when separating data that is not linear but complex, the classes become distinct as we move to higher dimensions. SVM includes kernel functions mainly for this task.

With a kernel function, the two classes are separated by a general (nonlinear) function, and the maximum width of the hyper-plane between the classes is formulated through these complex functions. A few examples where kernel functions are employed to separate classes are shown below.

Figure 38: Example of separations using SVM
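The kernel idea can be illustrated with a toy sketch in Python. The feature map phi(x) = (x, x^2) and the threshold value below are our own choices for illustration, not taken from the project: one-dimensional data that no single threshold can separate becomes linearly separable once lifted to a higher dimension.

```python
import numpy as np

# 1-D data that no single threshold separates: class 0 sits between
# the two halves of class 1.
x = np.array([-3.0, -2.5, -0.5, 0.0, 0.5, 2.5, 3.0])
y = np.array([1, 1, 0, 0, 0, 1, 1])

# Lifting to a higher dimension with phi(x) = (x, x^2) makes the classes
# linearly separable: a single threshold on the x^2 coordinate splits them.
phi = np.column_stack([x, x ** 2])
separable = np.all(phi[y == 0, 1] < 2.0) and np.all(phi[y == 1, 1] > 2.0)
print(separable)  # True
```

A kernel SVM performs this lifting implicitly, computing inner products in the higher-dimensional space without ever constructing phi(x) explicitly.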

5.6.4 Mathematical Modeling


A hyper-plane can be formulated in numerous equivalent ways in terms of the weight vector B and the bias b, as f(x) = B^T x + b. For simplicity and ease of understanding, consider the convention in which
|B^T x + b| = 1
for the training examples closest to the hyper-plane. These examples manifest the boundary of the hyper-plane, and this kind of representation is called the canonical hyper-plane.

Evaluating the distance between the hyper-plane and a point x:
distance = |B^T x + b| / ||B||
In the case of a canonical hyper-plane, the numerator is equivalent to 1 for the closest examples, hence the distance comes out to be 1 / ||B||.

The margin M is two times the distance to the nearest cases:
M = 2 / ||B||
Maximising M is equal to minimising the function L(B) = ||B||^2 / 2 subject to restrictions. These restrictions, or constraints, model the requirement that the hyper-plane classify the training examples correctly:
y_i (B^T x_i + b) >= 1 for all i
where y_i denotes the label of training example x_i.
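The pieces above can be checked numerically. The sketch below uses a hand-chosen canonical hyper-plane on toy data, an assumption for illustration rather than a trained SVM, to verify the constraint and the margin formula M = 2/||B||:

```python
import numpy as np

# Linearly separable toy data: class +1 near x = 2, class -1 near x = -2.
X = np.array([[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])

# A canonical separating hyper-plane B^T x + b = 0, chosen by hand so that
# the closest points satisfy |B^T x + b| = 1 (they are the support vectors).
beta = np.array([0.5, 0.0])
b = 0.0

# Constraint from the formulation: y_i (B^T x_i + b) >= 1 for all i.
margins = y * (X @ beta + b)

# Geometric margin M = 2 / ||B||.
M = 2.0 / np.linalg.norm(beta)
print(M)  # 4.0
```

The training problem the section describes is exactly the search for the beta and b that satisfy all such constraints while making M as large as possible.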

Chapter 6 - ANDROID

6.1 Introduction
Android is the operating system (OS) used in most smartphones and tablets; popular examples are devices from Samsung, HTC and Sony. Android was developed by Google together with the Open Handset Alliance, and almost all touchscreen devices such as phones and tablets use it [42]. In this advanced technological world, Android is also employed in devices beyond touchscreen smartphones and tablets, including televisions, automobiles and wrist watches.
Android was co-founded in Silicon Valley, California, by Andy Rubin, Nick Sears, Rich Miner and Chris White [43][44]. In 2005 Google bought Android for over $50 million.
Google releases the source code of the Android operating system under an open source license. Shown below are the different updates and versions of Android since its inception.

Figure 39: Different versions of android

6.2 Architecture
The Android operating system consists of 5 stages or levels:
1. User applications coded in Java
2. Libraries written in C/C++
3. Android Runtime
4. Java application framework
5. Linux Kernel

Figure 40: Classifications of the Android

6.3 Latest Version


To this date, the sixth version of Android is available on almost all Android devices. The seventh version, Nougat, is not yet available on all devices but soon will be. The primary attention in every update is given to the APIs, improvements in the user interface, added functionality, and so on. For instance, web applications and multi-touch are supported in the newer versions of Android.

6.4 Market Features
6.4.1 Competitors
Apple iOS is the biggest competition that Android faces in the current age; indeed, one of the very reasons Android was developed was to challenge Apple. Apart from Apple, Android faces competition from:
 Blackberry
 Symbian OS
 Windows OS (Mobile)

6.4.2 Market share


Android's share of the smartphone market is 63%, according to market statistics, and it is increasing every year. In late 2009, Android was calculated to hold 2.8% of worldwide smartphone shipments [46]. Its share grew exponentially to 33% by the end of 2010, making it the highest-selling smartphone operating system [47]. Before Android reached the apex, the Symbian platform was at the top of the game, but Android soon overtook it and claimed the number one spot [48]. By mid-2012 Android held a whopping 75% of the market, as stated by the research firm IDC [49]. In terms of usage, Android has surpassed Microsoft Windows to become the most widely used OS [50].

Figure 41: Popularity graph of Android

6.5 Development of Android App


Before kicking off to develop an application in android, some of the key points should
be kept in mind.

6.5.1 Activity:
An activity is a user interface that provides a medium of interaction between the user and the app. For example, Messenger on Android has an activity that lists all of a user's Facebook messages; the instant any message is selected, control passes from one activity to another to show the message. Gmail is another application that views email through a chain of tasks performed by interactions between different activities.
An activity's lifecycle in Android commences with a call to onCreate(), a callback method and the foremost method in the Java file. Subsequent tasks are performed through further callback methods.

Figure 42: Flow diagram of an Android app

6.5.2 View:
A view is an area on the screen through which event handling is done. Various user-interface components, such as buttons and text fields, are created as views. The views are arranged in a text file written in the XML language.

6.5.3 XML
Extensible Markup Language, or XML for short, is a markup language somewhat like HTML, with some added capabilities. Every Android layout file begins with the declaration
<?xml version="1.0" encoding="utf-8"?>
The XML layout of the Walk-Aid Android app is given in section 6.5.7.2; attributes of its elements, for example the text, can be changed there.

6.5.4 Intent
An intent is a messaging object through which one activity talks to another. Every activity that wants to communicate with another component requires a distinct intent object. Sending off a message is done through an intent, and receiving messages, as well as starting other activities, is done the same way.

6.5.5 Android Manifest


The manifest file governs various aspects of the application at launch, such as the splash activity, which activity or action initiates on startup, the app permissions, and the target Android API level.

6.5.6. Objective:
The aim was to develop an app that gives patients free mobility and independence from assistance by controlling the motion of the walker while sitting anywhere in the home.
For Android app development, either the Eclipse IDE or Android Studio can be used. Before Android Studio was released, there was no official IDE for app development, and Eclipse integrated with the Android SDK provided by Google was used instead.
We used Android Studio as our development platform, as it is the official platform provided by Google and has a lot of open source code available which is easy to understand and can be modified.
6.5.7 The Project: Walk-Aid
The name of the android project is Walk-Aid.

6.5.7.1 Creating Project in Android Studio


To create a new project, go to File > New > New Project.

Figure 43: Creating a project in Android studio

A window will pop up. Provide the necessary information and click Next.

Figure 44: Main Activity Layout

After creating the project, we develop the Walk-Aid app.

6.5.7.2 Application Layout


The layout is designed in XML, which is similar to HTML. The layout code is given below:
<EditText
android:textColor="#000000"
android:background="#ffffff"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/editText"
android:layout_alignParentTop="true"
android:layout_alignParentRight="true"
android:layout_alignParentEnd="true"
android:layout_alignParentLeft="true"
android:layout_alignParentStart="true" />

<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Begin"
android:id="@+id/buttonStart"
android:layout_below="@+id/editText"
android:layout_alignParentLeft="true"
android:layout_alignParentStart="true"
android:onClick="onClickStart"/>

In XML, every element is given an ID, which is its unique identifier. Buttons were created for the Forward, Right and Left motion of the walker.

Figure 45: Motion Activity

Figure 46: Mode Selection

Chapter 7 - RESULTS

In this chapter, the results and output of the project are discussed. Our project is a smart mobility walker for people who have weak upper limbs. It has three modes: in the first, the walker is controlled manually through buttons whose casing is mounted on the walker; the second mode is HMI (human machine interface), in which the patient controls the walker through an Android app; and the third mode is the brain-computer interface, which takes brain signals and uses them to control the walker.

The most important mechanism we have employed in our project is the clutching
mechanism. This has been successfully fabricated and implemented in our walker.

Figure 47: SolidWorks of Clutching Mechanism

After designing the clutching mechanism in SolidWorks, we performed structural analysis consisting of stress analysis and deformation analysis.

Figure 48: Stress Analysis on Clutching Mechanism

Figure 49: Deformation Analysis on Clutching Mechanism

After the designing and structural analysis of the clutching mechanism, it was
fabricated and implemented.

Figure 50: Clutching Mechanism on Walker

The final manufactured walker with all the necessary requirements is shown below.

Figure 51: Complete Manufactured Walker

The signals we get from the brain via the headset need to be filtered, and one of the filters we used is the Butterworth filter. The figures below show the unfiltered and filtered signals, produced in MATLAB. Two commands are used: the butter command sets the coefficient values and the filter command performs the filtration.

Figure 52(a): Unfiltered Signal on Matlab

Figure 52(b): Filtered Signal on Matlab
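For reference, the same butter/filter pipeline can be sketched in Python with SciPy. The 8-30 Hz band and 128 Hz sampling rate below are assumptions for illustration, not necessarily the exact values used in our MATLAB script:

```python
import numpy as np
from scipy.signal import butter, lfilter

# Design the coefficients (MATLAB's butter), then filter (MATLAB's filter).
fs = 128.0                                              # assumed sampling rate
b, a = butter(4, [8.0, 30.0], btype="bandpass", fs=fs)  # mu/beta EEG band
t = np.arange(0, 2.0, 1.0 / fs)
raw = np.sin(2 * np.pi * 12 * t) + np.sin(2 * np.pi * 50 * t)  # 12 Hz + 50 Hz
filtered = lfilter(b, a, raw)   # 50 Hz component is strongly attenuated
```

After the filter transient dies out, the 12 Hz component inside the passband dominates the output while the 50 Hz interference is suppressed, mirroring the unfiltered-versus-filtered traces in Figure 52.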

For the second mode, the Android app was developed in Android Studio, starting with the layout (interface) shown below. The app connects to the Bluetooth module so that a patient can control the walker from their phone.

Figure 53: Android Application

CONCLUSION

We have developed a smart walker with three modes, all successfully tested and implemented. Firstly, the mode in which the patient controls the walker through buttons was tested and works successfully. Secondly, the Android app was developed to control the walker through the Bluetooth module. Finally, the last mode controls the walker by taking brain signals via the headset.

The mechanical design is robust and stable after the fabrication and implementation of the clutching mechanism, which gives the user more stability and confidence when walking with the walker. Toppling of the walker has been successfully curbed, and obstacle avoidance techniques have been implemented.

REFERENCES

[1]
Anand, B. K., G. S. Chhina, and Baldev Singh. "Some aspects of
electroencephalographic studies in yogis." Electroencephalography and Clinical
neurophysiology 13.3 (1961): 452-456.
[2]
Jasper, Herbert H. "The ten twenty electrode system of the international
federation." Electroencephalography and Clinical Neurophysiology 10 (1958):
371-375.
[3]
Usakli, Ali Bulent. "Improvement of eeg signal acquisition: An electrical aspect
for state of the art of front end." Computational intelligence and neuroscience
2010 (2010): 12.
[4]
Fonseca, Carlos, et al. "A novel dry active electrode for EEG recording." IEEE
Transactions on Biomedical Engineering 54.1 (2007): 162-165.
[5]
Sinclair, Christopher M., Mason C. Gasper, and Andrew S. Blum. "Basic
electronics in clinical neurophysiology." The Clinical Neurophysiology Primer
(2007): 3-18.
[6]
Aftanas, L. I., and S. A. Golocheikine. "Human anterior and frontal midline theta
and lower alpha reflect emotionally positive state and internalized attention: high-
resolution EEG investigation of meditation." Neuroscience letters 310.1 (2001):
57-60.
[7]
Fernández, Thalía, et al. "EEG activation patterns during the performance of tasks
involving different components of mental calculation." Electroencephalography
and clinical Neurophysiology 94.3 (1995): 175-182.
[8]
Black, A. H. "The operant conditioning of central nervous system electrical
activity." Psychology of Learning and Motivation 6 (1972): 47-95.
[9]
Venables, Louise, and Stephen H. Fairclough. "The influence of performance
feedback on goal-setting and mental effort regulation." Motivation and Emotion
33.1 (2009): 63-74.
[10]
Pineda, Jaime A. "The functional significance of mu rhythms: translating “seeing”
and “hearing” into “doing”." Brain Research Reviews 50.1 (2005): 57-68.
[11]
Brown, Peter, et al. "Cortical correlate of the Piper rhythm in humans." Journal of
neurophysiology 80.6 (1998): 2911-2917.
[12]
Bishop, Christopher M. Pattern recognition and machine learning. Springer, 2006.
[13]
Lotte, Fabien, et al. "A review of classification algorithms for EEG-based brain–
computer interfaces." Journal of neural engineering 4.2 (2007): R1.
[14]
Müller, Klaus-Robert, et al. "Machine learning techniques for brain-computer
interfaces." (2004).
[15]
Kaper, Matthias, et al. "BCI competition 2003-data set IIb: support vector
machines for the P300 speller paradigm." IEEE Transactions on Biomedical
Engineering 51.6 (2004): 1073-1076.

[16]
Lee, Te-Won, Mark Girolami, and Terrence J. Sejnowski. "Independent
component analysis using an extended infomax algorithm for mixed subgaussian
and supergaussian sources." Neural computation 11.2 (1999): 417-441.
[17]
Navarro, Israel, B. Hubais, and F. Sepulveda. "A comparison of time, frequency
and ICA based features and five classifiers for wrist movement classification in
EEG signals." Engineering in Medicine and Biology Society, 2005. IEEE-EMBS
2005. 27th Annual International Conference of the. IEEE, 2005.
[18]
Pregenzer, Martin, and Gert Pfurtscheller. "Frequency component selection for an
EEG-based brain to computer interface." IEEE Transactions on Rehabilitation
Engineering 7.4 (1999): 413-419.
[19]
Ye, Jieping. "Least squares linear discriminant analysis." Proceedings of the 24th
international conference on Machine learning. ACM, 2007.
[20]
Chan, C. C. "The state of the art of electric and hybrid vehicles." Proceedings of
the IEEE 90.2 (2002): 247-275.
[21]
Ali, A. Soltan, Ahmed Gomaa Radwan, and Ahmed M. Soliman. "Fractional order
Butterworth filter: active and passive realizations." IEEE Journal on emerging and
selected topics in circuits and systems 3.3 (2013): 346-354.
[22]
Trier, Øivind Due, Anil K. Jain, and Torfinn Taxt. "Feature extraction methods for
character recognition-a survey." Pattern recognition 29.4 (1996): 641-662.
[23]
Lee, Seong-Won, et al. "Noise-adaptive spatio-temporal filter for real-time noise
removal in low light level images." IEEE Transactions on Consumer Electronics
51.2 (2005): 648-653.
[24]
Reynolds, Douglas A., and Richard C. Rose. "Robust text-independent speaker
identification using Gaussian mixture speaker models." IEEE transactions on
Speech and Audio Processing 3.1 (1995): 72-83.
[25]
Ang, Kai Keng, et al. "Filter bank common spatial pattern (FBCSP) in brain-
computer interface." Neural Networks, 2008. IJCNN 2008.(IEEE World Congress
on Computational Intelligence). IEEE International Joint Conference on. IEEE,
2008.
[26]
Vidaurre, Carmen, et al. "Study of on-line adaptive discriminant analysis for EEG-
based brain computer interfaces." IEEE Transactions on Biomedical Engineering
54.3 (2007): 550-556.
[27]
Wolpaw, Jonathan R., Dennis J. McFarland, and Theresa M. Vaughan. "Brain-
computer interface research at the Wadsworth Center." IEEE Transactions on
Rehabilitation Engineering 8.2 (2000): 222-226.
[28]
Itakura, Fumitada. "Line spectrum representation of linear predictor coefficients of
speech signals." The Journal of the Acoustical Society of America 57.S1 (1975):
S35-S35.
[29]
Coley, Gerald. "Beaglebone black system reference manual." Texas Instruments,
Dallas (2013).
[30]
Furey, Terrence S., et al. "Support vector machine classification and validation of
cancer tissue samples using microarray expression data." Bioinformatics 16.10
(2000): 906-914.
[31]
Ang, Kai Keng, et al. "Filter bank common spatial pattern (FBCSP) in brain-
computer interface." Neural Networks, 2008. IJCNN 2008. (IEEE World Congress
on Computational Intelligence). IEEE International Joint Conference on. IEEE,
2008.
[32]
Orfanidis, Sophocles J. Introduction to signal processing. Prentice-Hall, Inc.,
1995.
[33]
Rodriguez-Bermudez, German, Pedro J. Garcia-Laencina, and Joaquin Roca-
Dorda. "Efficient automatic selection and combination of eeg features in least
squares classifiers for motor imagery brain–computer interfaces." International
journal of neural systems 23.04 (2013): 1350015.
[34]
Dornhege, Guido, et al. "Combined optimization of spatial and temporal filters for
improving brain-computer interfacing." IEEE transactions on biomedical
engineering 53.11 (2006): 2274-2281.
[35]
Aparna, Ch, J. V. R. Murthy, and B. Raveendra Babu. "Energy computation for
BCI using DCT and moving average window for noise smoothening."
International Journal of Computer Science, Engineering and Applications 2.1
(2012): 15.
[36]
Xanthopoulos, Petros, Panos M. Pardalos, and Theodore B. Trafalis. "Linear
discriminant analysis." Robust Data Mining (2013): 27-33.
[37]
Izenman, Alan Julian. "Linear discriminant analysis." Modern multivariate
statistical techniques. Springer New York, 2013. 237-280.
[38]
Ye, Jieping, and Qi Li. "A two-stage linear discriminant analysis via QR-
decomposition." IEEE Transactions on Pattern Analysis and Machine Intelligence
27.6 (2005): 929-941.
[39]
Jayaraman, Vinoth, Sivakumaran Sivalingam, and Sangeetha Munian. "Analysis
of Real Time EEG Signals." (2014).
[40]
Allen, John JB, James A. Coan, and Maria Nazarian. "Issues and assumptions on
the road from raw signals to metrics of frontal EEG asymmetry in emotion."
Biological psychology 67.1 (2004): 183-218.
[41]
Williams, Al. Microcontroller projects using the Basic Stamp. Gilroy, CA: CMP
Books, 2002.
[42]
Developers, Android. "What is android." (2011).
[43]
"Google's Android OS: Past, Present, and Future". PhoneArena. August 18, 2011.
[44]
Elgin, Ben. "Google buys Android for its mobile arsenal." Bloomberg
Businessweek 16 (2005).
[45]
Carrino, Francesco, et al. "A self-paced BCI system to control an electric
wheelchair: Evaluation of a commercial, low-cost EEG device." Biosignals and
Biorobotics Conference (BRC), 2012 ISSNIP. IEEE, 2012.
[46]
Palanivelu, Karthik. "Energy Distribution Through Lifetime Estimation and
Smartphone Usage Patterns." (2014).
[47]
Alto, Palo. "Google's Android becomes the world's leading smart phone
platform." [2011-05-20]. http://www.canalys.com/pr/2011/r2011013.html
(2011).
[48]
Victor, H. "Android steals Symbian's top smartphone OS crown." (2011).
[49]
Llamas, R., K. Restivo, and M. Shirer. "Android Marks Fourth Anniversary Since
Launch with 75.0% Market Share in Third Quarter, According to IDC,(IDC)."
[50]
Muñoz, Miguel Moreno. "Privacidad y procesado automático de datos personales
mediante aplicaciones y bots." Dilemata 24 (2017): 1-23.

ANNEXURE A

void sonar_stop()
{
  read_sonar();
  if (distance < 30)
  {
    Serial.println("Stop bruh");
  }
}

void read_sonar()
{
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  distance = duration / 58.2;
  if (distance >= maximumRange || distance <= minimumRange) {
    Serial.println("-1");
    digitalWrite(LEDPin, HIGH);
  }
  else {
    Serial.println(distance);
    digitalWrite(LEDPin, LOW);
  }
  // Delay 50ms before next reading.
  delay(50);
}

void loop() {
  if (!dmpReady) return;
  while (!mpuInterrupt && fifoCount < packetSize) {
    // waiting
  }
  mpuInterrupt = false;
  mpuIntStatus = mpu.getIntStatus();
  fifoCount = mpu.getFIFOCount();
  if ((mpuIntStatus & 0x10) || fifoCount == 1024) {
    mpu.resetFIFO();
    Serial.println(F("FIFO overflow!"));
  } else if (mpuIntStatus & 0x02) {
    while (fifoCount < packetSize) fifoCount = mpu.getFIFOCount();
    mpu.getFIFOBytes(fifoBuffer, packetSize);
    fifoCount -= packetSize;
#ifdef OUTPUT_READABLE_QUATERNION
    mpu.dmpGetQuaternion(&q, fifoBuffer);
    Serial.print("quat\t");
    Serial.print(q.w);
    Serial.print("\t");
    Serial.print(q.x);
    Serial.print("\t");
    Serial.print(q.y);
    Serial.print("\t");
    Serial.println(q.z);
#endif
#ifdef OUTPUT_READABLE_EULER
    mpu.dmpGetQuaternion(&q, fifoBuffer);
    mpu.dmpGetEuler(euler, &q);
    Serial.print("euler\t");
    Serial.print(euler[0] * 180 / M_PI);
    Serial.print("\t");
    Serial.print(euler[1] * 180 / M_PI);
    Serial.print("\t");
    Serial.println(euler[2] * 180 / M_PI);
#endif
    blinkState = !blinkState;
    digitalWrite(LED_PIN, blinkState);
  }
}

void btooth()
{
  if (Serial3.available())
  {
    rx_byte = Serial3.read();
    Serial3.print(rx_byte);
    Serial3.print("\n");
    if (rx_byte == '1')      { forward(); }
    else if (rx_byte == '2') { right(); }
    else if (rx_byte == '3') { left(); }
    else if (rx_byte == '4') { motor_stop(); }
  }
}

void mode()
{
  if (mode_select == 0)
  {
    digitalWrite(46, HIGH);
    digitalWrite(44, LOW);
    digitalWrite(42, LOW);
    while (mode_select == 0)
    {
      F_state = digitalRead(F);
      L_state = digitalRead(L);
      R_state = digitalRead(R);
      if (F_state == LOW && L_state == HIGH && R_state == HIGH)
      { forward(); }
      else if (F_state == HIGH && L_state == LOW && R_state == HIGH)
      { left(); }
      else if (F_state == HIGH && L_state == HIGH && R_state == LOW)
      { right(); }
      else
      { motor_stop(); }
    } // while
  } // manual mode
  else if (mode_select == 1)
  { btooth(); } // bluetooth mode
}

ANNEXURE B

CSV reader code:

public void onCreate(Bundle savedInstanceState)
{
  super.onCreate(savedInstanceState);
  setContentView(R.layout.main);
  spMainSelectCategory = (Spinner) findViewById(R.id.spMainSelectCategory);
  tvMainSelectedCate = (TextView) findViewById(R.id.tvMainSelectedCate);
  List<String[]> list = new ArrayList<String[]>();
  String next[] = {};
  try {
    InputStreamReader csvStreamReader = new InputStreamReader(
        CSVParsingExampleActivity.this.getAssets().open("OCategory.csv"));
    CSVReader reader = new CSVReader(csvStreamReader);
    for (;;) {
      next = reader.readNext();
      if (next != null) {
        list.add(next);
      } else { break; }
    }
  } catch (IOException e) {
    e.printStackTrace();
  }
  for (int i = 0; i < list.size(); i++) {
    categoryList.add(list.get(i)[0]);
  }
  ArrayAdapter<String> adapter = new ArrayAdapter<String>(
      getApplicationContext(), android.R.layout.simple_spinner_item, categoryList);
  adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
  spMainSelectCategory.setAdapter(adapter);
  spMainSelectCategory.setOnItemSelectedListener(this);
}

@Override
public void onItemSelected(AdapterView<?> arg0, View arg1, int arg2, long arg3) {
  tvMainSelectedCate.setText("You have selected:" + categoryList.get(arg2) + " Category");
}

@Override
public void onNothingSelected(AdapterView<?> arg0) {
}

Graph Showing Activity:

public class graph extends AppCompatActivity {
  LineGraphSeries<DataPoint> series;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_graph);
    double x, y;
    x = 0;
    double a[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    double b[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    GraphView graph = (GraphView) findViewById(R.id.graph1);
    series = new LineGraphSeries<DataPoint>();
    for (int i = 0; i < 11; i++) {
      x = a[i];
      y = b[i];
      series.appendData(new DataPoint(x, y), true, 11);
    }
    graph.addSeries(series);
    GraphView graph1 = (GraphView) findViewById(R.id.graph2);
    series = new LineGraphSeries<DataPoint>();
    for (int i = 0; i < 11; i++) {
      x = a[i];
      y = b[i];
      series.appendData(new DataPoint(x, y), true, 11);
    }
    graph1.addSeries(series);
static double ButterworthFilter(double input)
{
  double dCoefficient1 = 0.0;
  double dCoefficient2 = 0.0;
  double dCoefficient3 = 0.0;
  double dCoefficient4 = 0.0;
  double dCoefficient5 = 0.0;
  double dGain = 0.0;
  switch (SAMPLING_RATE)
  {
    case 300:
      dCoefficient1 = 2.0;
      dCoefficient2 = -0.5698403540;
      dCoefficient3 = 2.5753677309;
      dCoefficient4 = -4.4374523505;
      dCoefficient5 = 3.4318654424;
      dGain = 3.198027802e+01;
      break;
    case 3000:
    default:
      dCoefficient1 = 2.0;
      dCoefficient2 = -0.9438788347;
      dCoefficient3 = 3.8299315572;
      dCoefficient4 = -5.8282241502;
      dCoefficient5 = 3.9421714258;
      dGain = 2.406930558e+03;
      break;
  }
  xv[0] = xv[1];
  xv[1] = xv[2];
  xv[2] = xv[3];
  xv[3] = xv[4];
  xv[4] = (double)(input / dGain);
  yv[0] = yv[1];
  yv[1] = yv[2];
  yv[2] = yv[3];
  yv[3] = yv[4];
  yv[4] = (double)((xv[0] + xv[4]) - (dCoefficient1 * xv[2])
      + (dCoefficient2 * yv[0]) + (dCoefficient3 * yv[1])
      + (dCoefficient4 * yv[2]) + (dCoefficient5 * yv[3]));
  return (yv[4]);
}

public class MainActivity extends Activity {
  private final String DEVICE_NAME = "MyBTBee";
  private final String DEVICE_ADDRESS = "20:13:10:15:33:66";
  private final UUID PORT_UUID =
      UUID.fromString("00001101-0000-1000-8000-00805f9b34fb");
  private BluetoothDevice device;
  private BluetoothSocket socket;
  private OutputStream outputStream;
  private InputStream inputStream;
  Button startButton, sendButton, clearButton, stopButton;
  TextView textView;
  EditText editText;
  boolean deviceConnected = false;
  Thread thread;
  byte buffer[];
  int bufferPosition;
  boolean stopThread;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    startButton = (Button) findViewById(R.id.buttonStart);
    sendButton = (Button) findViewById(R.id.buttonSend);
    clearButton = (Button) findViewById(R.id.buttonClear);
    stopButton = (Button) findViewById(R.id.buttonStop);
    editText = (EditText) findViewById(R.id.editText);
    textView = (TextView) findViewById(R.id.textView);
    setUiEnabled(false);
  }
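The filter above keeps its delay lines in class-level xv/yv arrays and selects coefficients by SAMPLING_RATE. Below is a self-contained sketch of the 300 Hz branch with the state made explicit; the class name and the runConstant helper are illustrative, not part of the original. Because the numerator (xv[0] + xv[4] - 2*xv[2]) is (1 - z^-2)^2, the filter has a double zero at DC, so a constant input decays toward zero:

```java
public class ButterworthSketch {
    // 4th-order IIR filter, the 300 Hz sampling branch of the listing.
    static final double GAIN = 3.198027802e+01;
    static double[] xv = new double[5]; // input delay line
    static double[] yv = new double[5]; // output delay line

    static double step(double input) {
        // shift both delay lines by one sample (memmove-style copy)
        System.arraycopy(xv, 1, xv, 0, 4);
        System.arraycopy(yv, 1, yv, 0, 4);
        xv[4] = input / GAIN;
        yv[4] = (xv[0] + xv[4]) - 2.0 * xv[2]
                - 0.5698403540 * yv[0]
                + 2.5753677309 * yv[1]
                - 4.4374523505 * yv[2]
                + 3.4318654424 * yv[3];
        return yv[4];
    }

    // feed a constant signal for n samples and return the last output
    public static double runConstant(double level, int n) {
        xv = new double[5];
        yv = new double[5];
        double out = 0.0;
        for (int i = 0; i < n; i++) out = step(level);
        return out;
    }

    public static void main(String[] args) {
        // DC is rejected: the output settles near zero for a constant input
        System.out.println(runConstant(1.0, 5000));
    }
}
```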
public void setUiEnabled(boolean bool)
{
    startButton.setEnabled(!bool);
    sendButton.setEnabled(bool);
    stopButton.setEnabled(bool);
    textView.setEnabled(bool);
}

public boolean BTinit()
{
    boolean found = false;
    BluetoothAdapter bluetoothAdapter = BluetoothAdapter.getDefaultAdapter();
    if (bluetoothAdapter == null)
    {
        Toast.makeText(getApplicationContext(), "Device doesnt Support Bluetooth", Toast.LENGTH_SHORT).show();
    }
    if (!bluetoothAdapter.isEnabled())
    {
        Intent enableAdapter = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE);
        startActivityForResult(enableAdapter, 0);
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    Set<BluetoothDevice> bondedDevices = bluetoothAdapter.getBondedDevices();
    if (bondedDevices.isEmpty())
    {
        Toast.makeText(getApplicationContext(), "Please Pair the Device first", Toast.LENGTH_SHORT).show();
    }
    else
    {
        for (BluetoothDevice iterator : bondedDevices)
        {
            if (iterator.getAddress().equals(DEVICE_ADDRESS))
            {
                device = iterator;
                found = true;
                break;
            }
        }
    }
    return found;
}

public boolean BTconnect()
{
    boolean connected = true;
    try {
        socket = device.createRfcommSocketToServiceRecord(PORT_UUID);
        socket.connect();
    } catch (IOException e) {
        e.printStackTrace();
        connected = false;
    }
    if (connected)
    {
        try {
            outputStream = socket.getOutputStream();
        } catch (IOException e) {
            e.printStackTrace();
        }
        try {
            inputStream = socket.getInputStream();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return connected;
}

public void onClickStart(View view) {
    if (BTinit())
    {
        if (BTconnect())
        {
            setUiEnabled(true);
            deviceConnected = true;
            beginListenForData();
            textView.append("\nConnection Opened!\n");
        }
    }
}
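BTinit's pairing loop only needs the bonded addresses and the target MAC, so its logic can be exercised off-device. The sketch below is a plain-Java restatement under that assumption; findIndex is a hypothetical helper, not part of the Android Bluetooth API:

```java
public class DeviceMatchSketch {
    // The pairing loop in BTinit reduced to plain data: scan the bonded
    // addresses for the hard-coded module address and report its position.
    // A return of -1 plays the role of found == false in the original.
    public static int findIndex(String[] bondedAddresses, String target) {
        for (int i = 0; i < bondedAddresses.length; i++) {
            if (bondedAddresses[i].equals(target)) {
                return i; // device found: stop, as the original breaks out
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        String[] bonded = { "00:11:22:33:44:55", "20:13:10:15:33:66" };
        System.out.println(findIndex(bonded, "20:13:10:15:33:66")); // 1
    }
}
```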
void beginListenForData()
{
    final Handler handler = new Handler();
    stopThread = false;
    buffer = new byte[1024];
    thread = new Thread(new Runnable()
    {
        public void run()
        {
            while (!Thread.currentThread().isInterrupted() && !stopThread)
            {
                try
                {
                    int byteCount = inputStream.available();
                    if (byteCount > 0)
                    {
                        byte[] rawBytes = new byte[byteCount];
                        inputStream.read(rawBytes);
                        final String string = new String(rawBytes, "UTF-8");
                        handler.post(new Runnable() {
                            public void run()
                            {
                                textView.append(string);
                            }
                        });
                    }
                }
                catch (IOException ex)
                {
                    stopThread = true;
                }
            }
        }
    });
    thread.start();
}

public void onClickSend(View view) {
    String string = editText.getText().toString();
    string = string.concat("\n"); // concat returns a new String; the result must be kept
    try {
        outputStream.write(string.getBytes());
    } catch (IOException e) {
        e.printStackTrace();
    }
    textView.append("\nSent Data:" + string + "\n");
}

public void onClickStop(View view) throws IOException {
    stopThread = true;
    outputStream.close();
    inputStream.close();
    socket.close();
    setUiEnabled(false);
    deviceConnected = false;
    textView.append("\nConnection Closed!\n");
}

public void onClickClear(View view) {
    textView.setText("");
}
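Note that onClickSend's call string.concat("\n") in the listing discards its result: Java Strings are immutable, so the newline terminator is silently dropped unless the return value is reassigned. The sketch below shows the corrected framing together with the UTF-8 decode that beginListenForData applies on the receive side; frame is a hypothetical helper, and the command text is only an example:

```java
import java.nio.charset.StandardCharsets;

public class SendFrameSketch {
    // Append the newline terminator before encoding the command bytes;
    // String.concat returns a NEW string, so its result must be kept.
    public static byte[] frame(String command) {
        String framed = command.concat("\n");
        return framed.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] out = frame("FORWARD");
        // decode the way beginListenForData does: new String(bytes, "UTF-8")
        System.out.println(new String(out, StandardCharsets.UTF_8).endsWith("\n")); // true
    }
}
```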
Data Acquisition:

public class EngineConnector {
    public EngineInterface delegate;

    public static void setContext(Context context) {
        EngineConnector.context = context;
    }

    public static EngineConnector shareInstance()
    {
        if (engineConnectInstance == null) {
            engineConnectInstance = new EngineConnector();
        }
        return engineConnectInstance;
    }

    public EngineConnector()
    {
        connectEngine();
    }

    private void connectEngine() {
        IEdk.IEE_EngineConnect(EngineConnector.context, "");
        timer = new Timer();
        timerTask();
        timer.schedule(timerTask, 0, 10);
    }

    public void enableMentalcommandActions(IEE_MentalCommandAction_t _MetalcommandAction)
    {
        long MetaCommandActions;
        long[] activeAction = MentalCommandDetection.IEE_MentalCommandGetActiveActions(userId);
        if (activeAction[0] == IEdkErrorCode.EDK_OK.ToInt()) {
            long y = activeAction[1] & (long) _MetalcommandAction.ToInt();
            if (y == 0) {
                MetaCommandActions = activeAction[1] | ((long) _MetalcommandAction.ToInt());
                MentalCommandDetection.IEE_MentalCommandSetActiveActions(userId, MetaCommandActions);
            }
        }
    }

    public void trainningClear(int _MetalcommandAction) {
        MentalCommandDetection.IEE_MentalCommandSetTrainingAction(userId, _MetalcommandAction);
        if (MentalCommandDetection.IEE_MentalCommandSetTrainingControl(userId,
                IEE_MentalCommandTrainingControl_t.MC_ERASE.getType()) == IEdkErrorCode.EDK_OK.ToInt()) {
        }
    }

    public boolean startTrainingMetalcommand(Boolean isTrain, IEE_MentalCommandAction_t MetaCommandAction) {
        if (!isTrain) {
            if (MentalCommandDetection.IEE_MentalCommandSetTrainingAction(userId,
                    MetaCommandAction.ToInt()) == IEdkErrorCode.EDK_OK.ToInt()) {
                if (MentalCommandDetection.IEE_MentalCommandSetTrainingControl(userId,
                        IEE_MentalCommandTrainingControl_t.MC_START.getType()) == IEdkErrorCode.EDK_OK.ToInt()) {
                    return true;
                }
            }
        } else {
            if (MentalCommandDetection.IEE_MentalCommandSetTrainingControl(userId,
                    IEE_MentalCommandTrainingControl_t.MC_RESET.getType()) == IEdkErrorCode.EDK_OK.ToInt()) {
                return false;
            }
        }
        return false;
    }

    public boolean checkTrained(int action) {
        long[] result = MentalCommandDetection.IEE_MentalCommandGetTrainedSignatureActions(userId);
        if (result[0] == IEdkErrorCode.EDK_OK.ToInt()) {
            long y = result[1] & action;
            return (y == action);
        }
        return false;
    }
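enableMentalcommandActions and checkTrained treat the set of mental-command actions as a bit mask: an action is activated by OR-ing its flag in only when it is not already set, and an action counts as trained when all of its bits survive an AND against the trained-signature mask. The sketch below isolates that bit logic; the PUSH/PULL flag values are illustrative, not the real IEE_MentalCommandAction_t codes:

```java
public class ActionMaskSketch {
    // Illustrative action flags (NOT the actual Emotiv SDK constants).
    public static final long PUSH = 0x0002L;
    public static final long PULL = 0x0004L;

    // OR the action in only when its bit is not already set,
    // mirroring the y == 0 check in enableMentalcommandActions.
    public static long enable(long activeMask, long action) {
        if ((activeMask & action) == 0) {
            activeMask |= action;
        }
        return activeMask;
    }

    // An action counts as trained when all of its bits survive the AND,
    // mirroring the (result[1] & action) == action test in checkTrained.
    public static boolean isTrained(long trainedMask, long action) {
        return (trainedMask & action) == action;
    }

    public static void main(String[] args) {
        long mask = enable(0L, PUSH);
        mask = enable(mask, PULL);
        System.out.println(isTrained(mask, PUSH));   // true
        System.out.println(isTrained(mask, 0x0008L)); // false
    }
}
```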
    public void setTrainControl(int type) {
        if (MentalCommandDetection.IEE_MentalCommandSetTrainingControl(userId, type)
                == IEdkErrorCode.EDK_OK.ToInt()) {
        }
    }

    public void timerTask()
    {
        if (timerTask != null)
            return;
        timerTask = new TimerTask() {
            @Override
            public void run() {
                /* Connect device with Insight headset */
                int numberDevice = IEdk.IEE_GetInsightDeviceCount();
                if (numberDevice != 0) {
                    if (!isConnected)
                        IEdk.IEE_ConnectInsightDevice(0);
                }
                numberDevice = IEdk.IEE_GetEpocPlusDeviceCount();
                if (numberDevice != 0) {
                    if (!isConnected)
                        IEdk.IEE_ConnectEpocPlusDevice(0, false);
                }
                state = IEdk.IEE_EngineGetNextEvent();
                if (state == IEdkErrorCode.EDK_OK.ToInt()) {
                    int eventType = IEdk.IEE_EmoEngineEventGetType();
                    switch (eventType) {
                        case TYPE_USER_ADD:
                            Log.e("connect", "User Added");
                            isConnected = true;
                            userId = IEdk.IEE_EmoEngineEventGetUserId();
                            hander.sendEmptyMessage(HANDLER_USER_ADD);
                            break;
                        case TYPE_USER_REMOVE:
                            Log.e("disconnect", "User Removed");
                            isConnected = false;
                            userId = -1;
                            hander.sendEmptyMessage(HANDLER_USER_REMOVE);
                            break;
                        case TYPE_EMOSTATE_UPDATE:
                            if (!isConnected)
                                break;
                            IEdk.IEE_EmoEngineEventGetEmoState();
                            hander.sendMessage(hander.obtainMessage(HANDLER_ACTION_CURRENT));
                            break;
                        case TYPE_METACOMMAND_EVENT:
                            int type = MentalCommandDetection.IEE_MentalCommandEventGetType();
                            if (type == IEE_MentalCommandEvent_t.IEE_MentalCommandTrainingStarted.getType()) {
                                Log.e("MentalCommand", "training started");
                                hander.sendEmptyMessage(HANDLER_TRAIN_STARTED);
                            } else if (type == IEE_MentalCommandEvent_t.IEE_MentalCommandTrainingSucceeded.getType()) {
                                Log.e("MentalCommand", "training Succeeded");
                                hander.sendEmptyMessage(HANDLER_TRAIN_SUCCEED);
                            } else if (type == IEE_MentalCommandEvent_t.IEE_MentalCommandTrainingCompleted.getType()) {
                                Log.e("MentalCommand", "training Completed");
                                hander.sendEmptyMessage(HANDLER_TRAIN_COMPLETED);
                            } else if (type == IEE_MentalCommandEvent_t.IEE_MentalCommandTrainingDataErased.getType()) {
                                Log.e("MentalCommand", "training erased");
                                hander.sendEmptyMessage(HANDLER_TRAIN_ERASED);
                            } else if (type == IEE_MentalCommandEvent_t.IEE_MentalCommandTrainingFailed.getType()) {
                                Log.e("MentalCommand", "training failed");
                                hander.sendEmptyMessage(HANDLER_TRAIN_FAILED);
                            } else if (type == IEE_MentalCommandEvent_t.IEE_MentalCommandTrainingRejected.getType()) {
                                Log.e("MentalCommand", "training rejected");
                                hander.sendEmptyMessage(HANDLER_TRAIN_REJECTED);
                            } else if (type == IEE_MentalCommandEvent_t.IEE_MentalCommandTrainingReset.getType()) {
                                Log.e("MentalCommand", "training Reset");
                                hander.sendEmptyMessage(HANDLER_TRAINED_RESET);
                            }
                            break;
                        default:
                            break;
                    }
                }
            }
        };
    }

    Handler hander = new Handler() {
        public void handleMessage(Message msg) {
            switch (msg.what) {
                case HANDLER_USER_ADD:
                    if (delegate != null)
                        delegate.userAdd(userId);
                    break;
                case HANDLER_USER_REMOVE:
                    if (delegate != null)
                        delegate.userRemoved();
                    break;
                case HANDLER_ACTION_CURRENT:
                    if (delegate != null)
                        delegate.currentAction(IEmoStateDLL.IS_MentalCommandGetCurrentAction(),
                                IEmoStateDLL.IS_MentalCommandGetCurrentActionPower());
                    break;
                case HANDLER_TRAIN_STARTED:
                    if (delegate != null)
                        delegate.trainStarted();
                    break;
                case HANDLER_TRAIN_SUCCEED:
                    if (delegate != null)
                        delegate.trainSucceed();
                    break;
                case HANDLER_TRAIN_FAILED:
                    if (delegate != null)
                        delegate.trainFailed();
                    break;
                case HANDLER_TRAIN_COMPLETED:
                    if (delegate != null)
                        delegate.trainCompleted();
                    break;
                case HANDLER_TRAIN_ERASED:
                    if (delegate != null)
                        delegate.trainErased();
                    break;
                case HANDLER_TRAIN_REJECTED:
                    if (delegate != null)
                        delegate.trainRejected();
                    break;
                case HANDLER_TRAINED_RESET:
                    if (delegate != null)
                        delegate.trainReset();
                    break;
                default:
                    break;
            }
        }
    };
}