
University of Rhode Island

DigitalCommons@URI

Open Access Master's Theses

2023

CONVOLUTIONAL NEURAL NETWORKS FOR CLASSIFICATION OF tEEG AND EEG
Thao Nguyen Ngoc Pham
University of Rhode Island, nguyen_pham@uri.edu

Follow this and additional works at: https://digitalcommons.uri.edu/theses

Recommended Citation
Pham, Thao Nguyen Ngoc, "CONVOLUTIONAL NEURAL NETWORKS FOR CLASSIFICATION OF tEEG AND EEG" (2023). Open Access Master's Theses. Paper 2340.
https://digitalcommons.uri.edu/theses/2340

This Thesis is brought to you for free and open access by DigitalCommons@URI. It has been accepted for inclusion
in Open Access Master's Theses by an authorized administrator of DigitalCommons@URI. For more information,
please contact digitalcommons-group@uri.edu.
CONVOLUTIONAL NEURAL NETWORKS FOR CLASSIFICATION OF tEEG AND EEG

BY

THAO NGUYEN NGOC PHAM

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE

REQUIREMENTS FOR THE DEGREE OF

MASTER OF SCIENCE

IN

ELECTRICAL ENGINEERING

UNIVERSITY OF RHODE ISLAND


MASTER OF SCIENCE THESIS

OF
THAO NGUYEN NGOC PHAM

APPROVED:

Thesis Committee:

Major Professor Kaushallya Adhikari

Walter G. Besio

Vanessa Harwood

Brenton DeBoef
DEAN OF THE GRADUATE SCHOOL

UNIVERSITY OF RHODE ISLAND


2023
ABSTRACT

Electroencephalogram (EEG) is a commonly used non-invasive method that acquires the voltage of the brain during neural activity by using conventional disc electrodes. However, there are some fundamental limitations associated with EEG, such as poor spatial resolution and signal-to-noise ratio (SNR), inaccurate data due to overlapping information, and artifact contamination. Also, software filtering solutions for EEG have the potential of distorting the data, making the results unreliable. The tripolar electroencephalogram (tEEG) records brain signals by using tripolar concentric ring electrodes (TCREs). Based on the surface Laplacian algorithm, signals recorded from TCREs overcome the limitations of EEG. tEEG has better SNR and spatial resolution, the ability to acquire high-frequency activity (up to 425 Hz so far), and the ability to reject muscle artifacts (electromyography (EMG) rejection). This study aimed to determine whether tEEG gives better performance in language mapping with "language dominant hemisphere classification" and in motor cortex mapping with "finger movements classification". Our analysis shows that tEEG yields higher average classification accuracy than EEG using spectrum data, even with simple signal processing procedures.


ACKNOWLEDGMENTS

Firstly, I would like to express my deepest gratitude to my thesis advisor,

Dr. Kaushallya Adhikari, for her invaluable guidance and support throughout my

pursuit of a degree. Dr. Adhikari's dedication and expertise are the main reasons

why I switched from the non-thesis option to the thesis one. She has always

been available to provide me with insightful feedback, which has helped me to

refine my research skills and develop a deeper understanding of the subject

matter. Secondly, I would like to extend my sincere thanks to Dr. Walter Besio, who has been one of my favorite professors since my undergraduate years at URI. He

generously provided me with access to his lab facilities and helped with

collecting high-quality data for this research project. Special thanks to my

parents, Truc Pham and Suong Nguyen, whose unwavering love, care, and

support have been a source of strength and motivation during my most

challenging times. I would also like to take this opportunity to express my

appreciation to my sister, Hieu Pham, who selflessly sacrificed her own dream

for me to be here. I owe all my success to my family, and I cannot thank them

enough for their unconditional love and support. Additionally, I would like to

acknowledge the staff at URI’s International Center and my friends, whose

support and assistance have made my transition to a new country and culture

much smoother and easier. Lastly, I am thankful to Dr. Kaushallya Adhikari, Dr.

Walter Besio, Dr. Vanessa Harwood, and Dr. Alisa Baron for being part of my

thesis committee.

PREFACE

The thesis is written in manuscript format and contains two manuscripts. The first manuscript was accepted by the 2022 44th Annual International Conference of the IEEE Engineering in Medicine and Biology Society and is now published in IEEE Xplore. This manuscript focused on proving the feasibility of tEEG in classifying the language dominant hemisphere in a language mapping experiment. The second manuscript was submitted to the IEEE World AI IoT Congress 2023. This manuscript focuses on proving the feasibility of tEEG in classifying finger movements in a motor imagery finger movement experiment.

TABLE OF CONTENTS

ABSTRACT ......................................................................................................ii

ACKNOWLEDGMENTS .................................................................................. iii

PREFACE ........................................................................................................iv

TABLE OF CONTENTS................................................................................... v

LIST OF TABLES ........................................................................................... vii

LIST OF FIGURES......................................................................................... viii

MANUSCRIPT 1 .............................................................................................. 1

Language Mapping using tEEG and EEG Data with Convolutional ......................................... 1
Neural Networks ..................................................................................................................... 1
1.1 Abstract............................................................................................................................. 2
1.2 Introduction ...................................................................................................................... 2
1.3 Methodology .................................................................................................................... 6
1.3.1 Data Acquisition ......................................................................................................... 7
1.3.2 Signal Processing...................................................................................................... 8
1.4 Result .......................................................................................................................... 12
1.5 Conclusion................................................................................................................... 13
ACKNOWLEDGEMENT........................................................................................................... 13
List of References .................................................................................................................. 13

MANUSCRIPT 2 ............................................................................................ 18

Deep Learning-Based Classification of Finger Movements using tEEG and EEG Signals ...... 18
2.1 Abstract........................................................................................................................... 19
2.2 Introduction .................................................................................................................... 19
2.3 Methodology .................................................................................................................. 22
2.3.1 tEEG Innovation ....................................................................................................... 22
2.3.2 Data Acquisition ....................................................................................................... 24
2.3.3 Signal Processing...................................................................................................... 27
2.3.4 Deep Learning .......................................................................................................... 32
2.4 Result .............................................................................................................................. 37
2.5 Conclusion....................................................................................................................... 39
List of References ................................................................................................................ 39

LIST OF TABLES

Table 1.1. CNN Results.................................................................................. 12

Table 2.1 Training Accuracy Result for MI Index Finger................................. 39

LIST OF FIGURES

Figure 1.1. (Left) disc electrode, (Right) TCRE ................................................ 4

Figure 1.2. tEEG Interface................................................................................ 5

Figure 1.3. TCRE location (4 sets of 5 arrays) ................................................. 8

Figure 2.1a. Left: Disc Electrode. Right: TCRE ............................................. 24

Figure 2.1b. tEEG Interface............................................................................ 24

Figure 2.2. Electrode Montage ....................................................................... 25

Figure 2.3. Timeline for Trials in MI Experiment ............................................. 27

Figure 2.4. Time-domain Channel 1 Data Corresponding to Physical Movement

....................................................................................................................... 29

Figure 2.5. Time-domain Channel 1 Data Corresponding to Imaginary

Movement ...................................................................................................... 30

Figure 2.6. Frequency-domain Channel 1 Data Corresponding to Physical

Movement ...................................................................................................... 31

Figure 2.7. Frequency-domain Channel 1 Data Corresponding to Imaginary

Movement ...................................................................................................... 32

Figure 2.8 CNN Architecture .......................................................................... 33

Figure 2.9. Training Classification Accuracy of Channels C3 and FC2 .......... 38

MANUSCRIPT 1

Language Mapping using tEEG and EEG Data with Convolutional

Neural Networks

Kaushallya Adhikari1, Member, IEEE, Thao Pham1, Joanne Hall2, Alexander

Rotenberg2 and Walter G. Besio1, Senior Member, IEEE

*Research supported by Northeast Big Data Innovation Hub 2021

1 University of Rhode Island, Kingston, RI, USA

2 Boston Children’s Hospital, MA, USA

This manuscript is published in IEEE Xplore for the 2022 44th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC).

1.1 Abstract

Tripolar electroencephalography (tEEG) has been found to have

significantly better signal-to-noise ratio, spatial resolution, mutual information,

and high-frequency content compared to EEG. This paper analyzes the tEEG signals

acquired simultaneously with the EEG signals and compares their ability to map

language to left and right hemispheres using convolutional neural networks

(CNNs). The results show that while the time-domain features of tEEG and EEG

signals lead to comparable functional mapping, the frequency domain features

are significantly different. The left and right hemisphere classification

performances using tEEG are equivalent in time and frequency domains.

However, frequency-domain classification results for EEG are less accurate.

Clinical Relevance - This technique could quickly, and noninvasively, guide

clinicians about language dominance when preparing for resective surgery.

1.2 Introduction

Many patients, children in particular, cannot tolerate awake craniotomy and

intraoperative language mapping, and this need drives the search for optimal

preoperative noninvasive functional language mapping. Mapping of eloquent

brain areas is critical for preservation of function after resective brain surgery,

as is necessary for many patients with epilepsy, brain tumors, brain vascular

malformations, and related disorders. Among the essential considerations in

formulating a neurosurgical plan is establishing the language hemispheric

dominance, particularly if the planned resection is in the left hemisphere which

is often, but not universally, dominant for language [1], [2]. We have been

developing tripolar electroencephalography (tEEG) with significantly higher

spatial resolution, muscle artifact rejection, and high-frequency features,

compared to conventional EEG. To prove the feasibility of tEEG for functional

language mapping, our goal is to test: whether correct hemispheric dominance

can be measured by scalp tEEG, noninvasively and passively. To accomplish

this, we developed a high-spatial resolution tEEG system and implemented a

deep learning methodology, convolutional neural networks (CNNs), to

incorporate the expanded signal features provided by tEEG. In the future, after

proving feasibility on healthy participants, we will deploy the full head high

density scalp tEEG mapping technology to localize function within a quadrant,

as would be needed for surgical planning when there is a possibility that the

planned resection overlaps with a functional language area [3].

Noninvasive mapping of cortical function, especially for the vulnerable patient population who cannot tolerate awake craniotomy, fMRI, or TMS, is an important unmet need [4]. A quick, high-fidelity, and noninvasive mapping technique is desired to address this need for preoperative functional mapping of cortical function. An improved EEG may be a reasonable solution. Also, the signal

processing technique used to decode the measurements needs to be fast,

reliable, capable of handling large datasets, and automated. These

requirements lead us to deep learning techniques. The use of CNNs may make the signal processing more efficient, for example by operating directly on raw signals, lessening the time to results [5], [6].

Figure 1.1. (Left) disc electrode, (Right) TCRE

EEG has the advantage of high temporal resolution, but its low spatial

resolution and artifact contamination hinder its effectiveness for the purpose of

functional cortical localization. Software-based filtering solutions exist for

“cleaning” the signal [7], but these require digital transformation and processing

of data and may not faithfully model the biological signal. Therefore, researchers

and clinicians cannot be certain that the processed data accurately reflect the

physiology.

tEEG reduces artifacts and enhances the interpretability of EEG through

two inventions: (1) a transformative electrode configuration - the tripolar

concentric ring electrode (TCRE); and (2) a proprietary tEEG Interface.

Conventional EEG uses the disc electrode which has a single element (Fig. 1,

left). The TCRE sensor consists of three concentric rings: outer, middle and

center (Fig. 1, right). Each TCRE provides two output signals: (1) a high fidelity,
differential Laplacian signal by combining the differences between outer ring

and middle ring; and (2) a conventional EEG signal from the outer ring alone [8].

The distinctive TCRE design enables high-fidelity EEG recording that is an

appreciable improvement over conventional disc electrodes. When taking

bipolar differences from the closely spaced elements of the TCRE sensors,

noise that is common to each element is automatically attenuated.

Figure 1.2. tEEG Interface

The TCRE is directionally independent to global sources and highly focused

on local activity due to its concentric configuration, which attenuates distant

radial signals and artifacts by 100 dB one radius from the electrode [9] – thus

common artifacts such as muscle and ECG are attenuated in the recording [10].

The tEEG Interface (Fig. 2) configures the electrode elements and routes and preamplifies the signals. The differential signals from the TCRE rings are on the order

of a few hundred nanovolts, one-to-two orders of magnitude smaller than EEG.

Recording nanovolt-range signals is technically challenging. A complete tEEG

system consists of the TCRE, tEEG Interface, and amplifier-digitizer.

1.3 Methodology

Although traditional signal processing algorithms have been able to

perform functional mapping using ECoG signals, CNNs have been

revolutionizing processing and decoding of brain signals. Deep learning

techniques are more effective in uncovering hidden patterns and extracting

features from large datasets. They have been used with EEG for several lines

of study including: cognitive load classification [11], P300 detection with an

application to brain-computer interfaces [12], driver's cognitive performance [13],

recognition of rhythm stimuli [14], prediction of memory performance

(remembered or forgotten) [15], classification of motor imagery signals [16],

simultaneous capturing of spectral, temporal, and spatial information for

automatic seizure detection [17], automated screening of depression [18],

seizure prediction [19], classification of one-trial EEG in a rapid serial visual

presentation (RSVP) task [20], and seizure detection in an ambulatory setting

[21].

A CNN is a multilayer perceptron that consists of an input layer, an output

layer, and many hidden layers between the input and output layers. All weights

in all layers are learned through training and a CNN learns to extract its own

features automatically [22]. The hidden layers perform three important

operations: convolution, activation, and pooling. The three operations are

repeated over various layers, where each layer learns to identify distinct

features from the data. After feature learning, a CNN architecture performs

classification.

1.3.1 Data Acquisition

We had healthy right-handed and left-handed participants perform

overt/covert picture naming for functional cortical mapping, similar to the

methods in [4], [23]. First, we recorded 5 min of baseline data. Then participants were

requested to name, randomly, aloud and silently, a series of line drawings of

common inanimate and animate objects displayed on an electronic monitor [4].

The pictures were shown for 3,500 ms each, with a 2,500 ms inter-stimulus

interval, repeatedly for 10 min (100 stimulations, 50 each overt and covert) [4].

Covert naming, without vocalization, was used to delineate between cognitive

and motor aspects of expressive language. Four sets of five 10 mm TCRE

arrays were placed on the participant's scalp at left anterior, right anterior, left

temporoposterior, and right temporoposterior, as illustrated in Fig. 3. The

anterior arrays are approximately over Broca's area associated with verbal

expression. The temporoposterior arrays are approximately over Wernicke's

area associated with verbal understanding. Further, for a comparison study

between tEEG and traditional EEG, we recorded from the outer ring of the

TCREs concurrently. We randomly performed the tEEG or EEG experiments

and analyzed the signals for localization and lateralization using both tEEG and

EEG. There were 20 tEEG and 20 EEG channels, and two channels were used to

acquire eye movement and blink artifact information. The acquired data were
amplified, digitized at 1000 Hz, and hardware band-pass filtered at 1–500 Hz.

Data acquisition took approximately 10 minutes. Audio was also recorded for specific onset and offset times of overt speech.

Figure 1.3. TCRE locations (4 sets of 5 arrays; 1 - Broca's area, 2 - Wernicke's area)

1.3.2 Signal Processing

The primary data analysis technique we employed was CNN. We fed the

raw tEEG data into different CNN algorithms. We used maximum pooling layers

and ReLU (rectified linear unit) activation layers. We used

three different input formulations: (a) raw tEEG time-series, where the 2-dimensional (2D) input data array has different channels along one dimension and different time instances along the other dimension; (b) energy in time-series, where the 2D input data array has different channels along one dimension and different energy values along the other dimension; and (c) spectrograms. All input formulations were stored in image format, and the two-dimensional convolution operation was implemented using the Deep Learning

Toolbox in MATLAB.

The first operation performed on the input array (spectrogram or 2D tEEG

data) is convolution with K different filters of size F × F [22]. The number of

feature maps or the depth of the resulting convolution and activation layers is

equal to K. The second operation is pooling or subsampling, which reduces the

width and length of the feature map but retains the depth. The third operation

performed is a second convolution, which is then followed by a second

activation and a second pooling. With the successive convolution, activation,

and pooling, a bipyramidal effect is obtained because at each convolutional or

pooling layer, the depth is increased while the length and width are reduced,

compared with the corresponding previous layer [22]. There were two possible

output classes corresponding to the left and right hemispheres.

The spectrogram input to a CNN was of dimensions W × W = 128 × 128

[22]. For CNNs with raw tEEG data input, the input had size W × L = 20 × 128

corresponding to 20 tEEG channels and 128 data samples. The same

configuration was implemented for the EEG. The filter dimensions for the first

convolutional layer were F × F [22]. We considered different values of F: 3, 10, 15, 20. The stride with which we slide the filter was set to S = 1. The width and height of the output of the convolutional layer was (W − F + 2P)/S + 1, where P is the number of zero-padding rows or columns on one side of the data matrix.

For a stride of 1, the convolutional layer output always has dimensions equal to

W − F + 2P + 1. In order for the input and output of the convolution layer to have an equal width, W, and an equal height, W, we set the zero-padding parameter to P = (F − 1)/2. This obviated the need to compute the height and

width of the output of the convolutional layer and ensured that the dimensions

are valid for the next layer. The task of downsampling was delegated completely

to pooling layers. The output of the first convolutional layer was of size

W × W × K. This output was sent through the activation function, which added

non-linearities to the architecture. The activation function output was the input

to the first pooling layer. Its purpose was to reduce the spatial size. It worked

independently on each depth slice of dimensions W × W. We use the pooling

layer filter of size 2 × 2 with a stride of 2 along both width and height dimensions

in all pooling layers. This pooling operation effectively discarded 75% of its input

for every depth slice. The pooling layer did not change the depth dimension.
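
As a quick numeric check of these size formulas (a sketch using the values quoted above: W = 128, F = 3, S = 1):

% Worked example of the output-size formula (W - F + 2P)/S + 1.
W = 128; F = 3; S = 1;
P = (F - 1)/2;                   % zero-padding that preserves width: P = 1
convOut = (W - F + 2*P)/S + 1    % = 128, equal to the input width W
poolOut = convOut/2              % = 64 after one 2 x 2, stride-2 pooling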

For all architectures that we considered, after the final pooling layer, we

applied flattening as this is a necessary step before classification [22]. The

flattened neurons were sent to a fully connected layer. Every neuron of the fully

connected layer was connected to all other neurons at its input. The fully

connected layer performed the first part of classification and yielded an output

with C elements, where C represents the number of classes. This vector was fed

to a softmax layer. The softmax function processed the input vector and mapped the vector into a probability distribution. Each element of the output of the

softmax layer was in the range (0,1) and corresponded to the probability of a

class. If the ith element of the input vector is given by y_i, then the output of the softmax layer for that element is softmax(y_i) = e^{y_i} / Σ_{j=1}^{C} e^{y_j}. Hence, the ith

element of the output of the softmax layer is the probability of the ith class. The

class with the highest probability is the overall output of the network.
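
For example, with C = 2 and arbitrary scores:

% Softmax over two example scores; the outputs sum to 1.
y = [2.0; 0.5];                  % hypothetical fully connected layer outputs
p = exp(y) ./ sum(exp(y))        % p = [0.8176; 0.1824]; class 1 is chosen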

We labeled the right-handed patients' data as left-hemisphere dominant and the left-handed patients' data as right-hemisphere dominant. During the training

phase with N labeled datasets (𝑥𝑛 , 𝑣𝑛 ), 𝑛 = 1,2, … , 𝑁, we estimated the values

of the free parameters, which are filter weights and biases and are collectively

represented by the variable θ. We initialized all free parameters with random

values, computed the output of each layer, and found the output probabilities

for each class. We trained the network using stochastic gradient descent with

momentum (SGDM) [25]. Due to the compositional form of the neural network

model, the gradient descent was easily evaluated using the chain rule of

differentiation. The back propagation algorithm had the locality constraint, which

meant that the computation performed by each neuron in the network was

influenced only by the neurons that were physically connected to it. Such a

locality constraint is preferred because it allows a graceful degradation in

performance due to hardware errors and forms the foundation for a fault-tolerant

network. Also, the locality constraint permits the use of parallel architectures,

leading to efficient and fast computations [22].
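
To make the pipeline concrete, the architecture and training setup described above can be written in the MATLAB Deep Learning Toolbox as in the following sketch. The text does not specify the number of convolution blocks or the filter count K, so the two-block stack and K = 16 below are illustrative assumptions, not the authors' exact configuration.

% Minimal sketch (illustrative assumptions noted above) of the spectrogram CNN.
F = 3; K = 16;                                     % F from {3, 10, 15, 20}; K assumed
layers = [
    imageInputLayer([128 128 1])                   % W x W spectrogram input
    convolution2dLayer(F, K, 'Padding', (F-1)/2)   % P = (F-1)/2 preserves W
    reluLayer                                      % activation: ReLU
    maxPooling2dLayer(2, 'Stride', 2)              % 2 x 2 pooling halves W
    convolution2dLayer(F, 2*K, 'Padding', (F-1)/2) % second conv: greater depth
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(2)                         % C = 2: left/right hemisphere
    softmaxLayer                                   % class probabilities
    classificationLayer];
options = trainingOptions('sgdm');                 % SGDM training, as in the text
% net = trainNetwork(imds, layers, options);       % imds: labeled image datastore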


1.4 Result

Table 1.1 summarizes the CNN results for a network with 5 layers and 20 × 20

filters. We grouped right-handed and left-handed patients' data into two

separate categories. We used a randomly selected 80% of the data for training

and the remaining data for validation. We repeated the process 1000 times and

computed the average of the results. Using raw time-series data, tEEG was able

to classify the data into left and right hemispheres correctly 81% of the time and

the variance over 1000 trials was found to be 0.01. When data energy and

spectrograms were used as the inputs, the classification accuracy improved to

93.8% and 96.5%, respectively. A similar analysis for EEG showed an accuracy

of 90.2%, 89.3%, and 57.9%, respectively, for raw signal values, energy values,

and spectrograms. While classification using raw signal and energy yielded high accuracy for both tEEG and EEG, the spectrogram input resulted in worse accuracy for EEG. Our results indicate that while all input formulations are approximately equivalent for tEEG, raw signal and energy are more favorable for EEG.

Table 1.1. CNN Results (Accuracy %, Variance)

Data Type    Raw signal    Energy        Spectrogram
tEEG         81, 0.01      93.8, 0.001   96.5, 0.012
EEG          90.2, 0.01    89.3, 0.123   57.9, 0.07

1.5 Conclusion

Both tEEG and EEG were able to classify the language dominant

hemisphere. The tEEG had the highest classification accuracy. We are not certain which

hemisphere is dominant in these participants. In general, the right-handed

people exhibit left hemisphere language dominance and vice versa for left-

handed people.

ACKNOWLEDGEMENT

The data collection involving human subjects was approved by the

Institutional Review Board with assistance from Boston Children's Hospital.

List of References

[1] J. R. Binder, S. J. Swanson, T. A. Hammeke and D. Sabsevitz, "A

comparison of five fMRI protocols for mapping speech comprehension

systems," Epilepsia, vol. 49, no. 12, pp. 1980-1997, 2008.

[2] S. Knecht, B. Dräger, M. Deppe, L. Bobe, H. Lohmann, A. Flöel, E. B.

Ringelstein and H. Henningsen, "Handedness and hemispheric language

dominance in healthy humans," Brain, vol. 123, no. 12, pp. 2512-2518, 2000.

[3] H. O. Lüders, I. Najm, D. Nair, P. Widdess-Walsh and W. Bingman, "The

epileptogenic zone: general principles.," Epileptic disorders : international

epilepsy journal with videotape, vol. 8 Suppl 2, pp. S1-9, 8 2006.

[4] R. Arya and J. A. Wilson, "Real-time mapping of natural speech in

children with drug-resistant epilepsy," in Brain-computer interface research: a

state-of-the-art summary, 1, Ed., New York: Springer International Publishing,

2015, pp. 9-17.

[5] Z. Wang, W. Yan and T. Oates, "Time series classification from scratch with deep neural networks: A strong baseline," in International Joint Conference on Neural Networks, pp. 1578-1585, 2017.

[6] B. Zhao, H. Lu, S. Chen, J. Liu and D. Wu, "Convolutional neural

networks for time series classification," Journal of Systems Engineering and

Electronics, vol. 28, no. 1, pp. 162-169, 2017.

[7] I. Daly, N. Nicolaou, S. J. Nasuto and K. Warwick, "Automated artifact

removal from the electroencephalogram: a comparative study," Clin EEG

Neurosci., vol. 44, pp. 291-306, 2013.

[8] O. Makeyev, Y. Boudria, Z. Zhu, T. Lennon and W. Besio, "Emulating

conventional disc electrode with the outer ring of the tripolar concentric ring

electrode in phantom and human electroencephalogram data," in IEEE Signal

Processing in Medicine and Biology Symposium, NYC, 2013.

[9] W. Besio, K. Koka, R. Aakula and W. Dai, "Tri-polar concentric ring electrode development for Laplacian electroencephalography," IEEE Trans BME, vol. 53, no. 5, pp. 926-933, 2006.

[10] H. Cao, W. Besio, S. Jones and A. Medvedev, "Improved separability of

dipole sources by tripolar versus conventional disk electrodes: a modeling study

using independent component analysis," in Proceedings of the 31st IEEE EMBS

Annual International Conference, Minneapolis, MN, 2009.

[11] P. Bashivan, I. Rish, M. Yeasin and N. Codella, "Learning

Representations from EEG with Deep Recurrent-Convolutional Neural

Networks," arXiv, 2016.

[12] H. Cecotti and A. Graser, "Convolutional neural networks for P300

detection with application to brain-computer interfaces," IEEE Trans Pattern

Anal Mach Intell, vol. 33, pp. 433-445, 2011.

[13] M. Hajinoroozi, Z. Mao, T.-P. Jung, C.-T. Lin and Y. Huang, "EEG based

prediction of driver’s cognitive performance by deep convolutional neural

network," Signal Process Image Communication, vol. 47, pp. 549-555, 2016.

[14] S. Stober, D. J. Cameron and J. A. Grahn, "Using Convolutional Neural

Networks to Recognize Rhythm Stimuli from Electroencephalography

Recordings," in International Conference on Neural Information Processing

Systems, 2014.

[15] X. Sun, C. Qian, Z. Chen, Z. Wu, B. Luo and G. Pan, "Remembered or

forgotten?–An EEG-based computational prediction approach," in PLoS ONE

11:e0167497., 2016.

[16] Y. R. Tabar and U. Halici, "A novel deep learning approach for classification of EEG motor imagery signals," J Neural Eng, vol. 14, 2017.

[17] P. Thodoroff, J. Pineau and A. Lim, "Learning Robust Features using

Deep Learning for Automatic Seizure Detection," in JMLR Workshop and

Conference Proceedings, volume 56., 2016.

[18] U. R. Acharya, S. L. Oh, Y. Hagiwara, J. H. Tan, H. Adeli and S. D. P,

"Automated EEG-based screening of depression using deep convolutional

neural network," Computer Methods and Programs in Biomedicine, vol. 161, pp.

103-113, 2018.

[19] J. Liang, R. Lu, C. Zhang and F. Wang, "Predicting Seizures from

Electroencephalography Recordings: A Knowledge Transfer Strategy," 2016.

[20] R. Manor and A. B. Geva, "Convolutional neural network for multi-

category rapid serial visual presentation BCI," Front Comput Neurosci, vol. 9,

2015.

[21] A. Page, C. Shea and T. Mohsenin, "Wearable seizure detection using

convolutional neural networks with transfer learning," in IEEE International

Symposium on Circuits and Systems (ISCAS), 2016.

[22] S. Haykin, Neural Networks and Learning Machines, Pearson.

[23] M. Korostenskaja, A. J. Wilson, D. F. Rose, P. Brunner, G. Schalk and et

al, "Real-time functional mapping with electrocorticography in pediatric epilepsy:

comparison with fMRI and ESM findings," Clin EEG Neurosci, vol. 45, no. 3, pp. 205-211, 2014.

[24] K. J. Miller, T. J. Abel, A. O. Hebb and J. g. Ojemann, "Rapid online

language mapping with electrocorticography," J Neurosurg Pediatr, vol. 7, no.

5, pp. 482-490, 2011.

[25] T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical

Learning: Data Mining, Inference, and Prediction, Springer, 2008.

MANUSCRIPT 2

Deep Learning-Based Classification of Finger Movements using tEEG

and EEG Signals

Thao Pham1, Kaushallya Adhikari1, and Walter G. Besio1

1 Electrical, Computer and Biomedical Engineering, University of Rhode Island, Kingston, RI, USA

This manuscript is currently submitted to the IEEE World AI IoT Congress 2023.

2.1 Abstract

Mapping electroencephalography (EEG) signals obtained during hand and

finger movements to left or right hands has applications in many fields including

motor imagery brain-computer interface (MI-BCI) systems. MI-BCI systems are an innovative technology to assist patients with severe motor impairments; however, novice users can face the "BCI-illiteracy" issue and fail to control the

system. A step towards assisting such patients is to correctly classify their EEG

signals collected while the patients imagine or perform hand/finger movements.

This work considers using tripolar electroencephalography (tEEG) instead of traditional EEG because tEEG offers a higher signal-to-noise ratio (SNR), better spatial resolution, better artifact rejection, and the ability to capture high-frequency features, compared to conventional EEG. In this paper, we

compare the performance of tEEG and EEG signals in classifying motor

imagery (MI) of right and left index fingers by using a deep learning algorithm:

convolutional neural network (CNN). The results show that in 5 out of 7 subjects,

CNN is able to perform classification with high accuracy, with the highest

accuracy being 96.4% for tEEG data.

Keywords—EEG, tEEG, Deep Learning, CNN

2.2 Introduction

Motor imagery brain-computer interface (MI-BCI) is a promising communication and control system that provides a pathway between the brain and the external environment. The system captures the brain

activity patterns associated with imaginary movement and converts them into

commands to control assistive devices. The technology brings new prospects

which allow individuals with severe motor impairments (stroke, ALS, SCI, etc.)

to control the external devices by their brain signals. Researchers and scientists

have been extensively investigating to improve the performance of such

systems. With MI-BCI systems, scientists have successfully detected different

hand movements [1]–[3], movements of different fingers of the same hand [4], [5],

wrist movements of different hands [6], and upper limb movements [7], [8].

Different bio-signals have been used as the input to control the MI-BCI systems

such as electrocorticogram (ECoG) [5], functional near-infrared spectroscopy

(fNIRS) [2], electromyography (EMG) [8], but the most commonly used input

type is EEG signals. EEG signals are prevalent as the input for the MI-BCI

systems because EEG signals have high temporal resolution and relatively

low cost. However, the EEG signals have some drawbacks. Recorded EEG

signals are non-stationary and suffer from low SNR and poor spatial resolution

[9]. Also, EEG signals often suffer from artifact contamination which affects

functional cortical localization. There are software filtering solutions [10] that are

designed to improve the signal quality, but the outputs of these filters can

include distortion. Therefore, we propose using tripolar EEG (tEEG). tEEG

signals are known to have high SNR and spatial resolution. They are also able

to capture high frequency activities that are not observed in traditional EEG.

Also, tEEG signals have a significantly higher ability to reject muscle artifacts

[11]–[13].

MI-BCI systems traditionally use common spatial patterns (CSP) as the primary signal processing method to extract features, and linear discriminant analysis (LDA) or support vector machines (SVM) as the primary machine learning algorithms

for classification [3], [14], [15]. However, it has been shown that fifteen to thirty

percent of MI-BCI users have the “BCI-illiteracy” issue, which means that they

cannot reach the acceptable threshold accuracy of 70%, even after extensive

training [1]. Thus, it is necessary to improve the accuracy to ensure that the MI-

BCI users can perform assigned tasks precisely. Artificial neural network based

algorithms and especially, convolutional neural networks (CNNs) have shown

tremendous ability to perform classification in biomedical fields such as breast

cancer diagnoses, tissue extraction, tumor detection and classification [16]. In

neuroscience, EEG and CNNs have been used to detect P300 with BCI [17],

detect abnormalities such as arrhythmia [18] and seizure [19]. In our previous

work [20], we demonstrated the ability of CNN in determining language

dominant hemisphere by using tEEG signal. Specifically with MI-BCI,

researchers have shown CNN’s ability in significantly increasing classification

accuracy by 15.32% to 18.38% compared with traditional machine learning

approaches and helping BCI-illiteracy users improve their performance [3], [21].

In this paper, we propose promising solutions to improve the training

accuracy with a focus on improving the input signal quality in the MI cognitive
tasks. To prove the feasibility of the tEEG in MI, this study aims to exploit the

advantages of tEEG signals in classifying MI of index fingers. To accomplish

this, we developed a high-spatial-resolution tEEG system to record both EEG and tEEG data, and we implemented a CNN to incorporate and process the expanded signal features provided by tEEG. In the future, after proving feasibility,

we would like to expand the system to a full MI-BCI model, in which participants

can first train the BCI system and then perform the assistive applications.

The rest of the paper is organized as follows. Section II describes the

methodology, including tEEG innovation, data acquisition, and signal

processing. Section III presents our results, and Section IV provides

conclusions.

2.3 Methodology

To compare the performance between tEEG and EEG in MI classification,

data signals were collected from healthy subjects while they were performing

MI tasks. Then, the recorded data of both signal types went through the same signal processing and were fed into the CNN algorithm. This section describes the tEEG innovation, data acquisition, signal processing procedures, and the deep learning CNN.

2.3.1 tEEG Innovation

Similar to the conventional EEG, tEEG measures and records the electrical

brain activity. However, a conventional EEG consists of a disc electrode as

depicted in Fig. 1a whereas tEEG consists of a tripolar concentric ring electrode

(TCRE) instead of the conventional disc electrode. The TCRE has three

concentric rings: center, middle, and outer. The TCRE technology is based on

the application of surface Laplacian algorithm. It includes three concentric rings

in a bullseye configuration as illustrated in Fig. 1a. Each ring in the TCRE

configuration operates as a single EEG electrode. One difference signal is

obtained by subtracting the center disc signal from the middle ring signal. A

second difference signal is obtained by subtracting the center disc signal from

the outer ring signal. The final tEEG signal is the differential Laplacian signal

that combines the two differences [22]. Because of the concentric structure, the amplitudes of distant radial signals and artifacts are approximately equal at each ring element, so they largely cancel in the differences. Consequently, the amplitudes of radial signals and artifacts in tEEG can be attenuated by 100 dB [23]. Therefore, artifacts resulting from sources such as muscle activity and ECG are diminished in tEEG.

Another reason that tEEG outperforms EEG is the tEEG interface, which is

illustrated in Fig. 1b. The purpose of the tEEG interface is to configure the TCRE

rings, then direct and amplify the measured signals. The differential signals that

are received from the TCRE rings are less than 1 microvolt. Therefore, the

ability to record from the TCRE is a technical innovation. A complete tEEG

system includes both the TCRE leads and the tEEG interfaces.

In this work, we placed EEG electrodes near tEEG electrodes so as to obtain

both EEG and tEEG signals for comparison.
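
The combination just described can be sketched as follows. The three ring potentials are denoted v_center, v_middle, and v_outer; the 16:1 weighting below is the nine-point-method weighting reported in the TCRE literature and is an assumption here, not a weighting stated in this thesis.

% Sketch of the tEEG surface-Laplacian combination (assumed 16:1 weights).
% v_center, v_middle, v_outer: potentials from the three TCRE elements.
d1 = v_middle - v_center;        % first difference signal
d2 = v_outer  - v_center;        % second difference signal
tEEG = 16*d1 - d2;               % differential Laplacian estimate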



Figure 2.1a. Left: Disc Electrode. Right: TCRE

Figure 2.1b. tEEG Interface

2.3.2 Data Acquisition

We recruited 7 healthy subjects for tEEG and EEG data acquisition. There

were 4 female and 3 male participants. All participants were novices in BCI and

MI tasks. Two of the participants were left-handed and five participants were

right-handed. The signals were recorded at the Neural Rehabilitation

Engineering Laboratory at University of Rhode Island. Brain signals were

recorded using a Brain Amp amplifier (Brain Products, Germany), a tInterface

20 (CREmedical, USA), 8 TCRE leads (CREmedical) and 8 traditional EEG

electrodes. Electrodes were placed according to the 10-20 system, focusing on

primary motor cortex and medial premotor cortex as illustrated in Fig. 2. The

primary motor cortex includes the labels C3, C1, Cz, C2, and C4 in Fig. 2 while

the medial premotor cortex includes FC2, FCz, and FC1 in the figure. Before

placing electrodes on participants’ scalp, conductive paste was applied to

ensure that the impedance of the electrodes was under 10 kOhm for all subjects.

The sampling frequency was 2500 samples/second.

Figure 2.2. Electrode Montage

(Primary motor cortex in red and medial premotor cortex in blue)

During each recording session, subjects were shown a finger on a screen

at a specified time. They were requested to either physically move their

corresponding finger or imagine moving their corresponding finger. The subjects

were shown a total of 500 fingers on the screen. Each of the 10 fingers was

shown exactly 50 times. There were 250 physical movements and 250

imaginary movements. The order of the physical and imaginary movements was

randomized using the function randi in MATLAB. In the first phase of data

acquisition, 1 minute of subject’s baseline data were recorded while the subject

had both eyes closed. Then, 1 minute of baseline data were collected while the

subject had both eyes open. After baseline data recording, subjects were

directed to follow the instructions on a monitor. Four types of instructions were

shown to the subjects:

1) Physical/imaginary: This instruction indicated whether the subjects were

to make physical or imaginary movements for the following 10 finger

movements.

2) Get ready: This instruction showed a picture of all 10 fingers with a red

arrow pointing to only one of the 10 fingers. The subjects were instructed to get

ready to move their corresponding finger. The finger order was decided by

randomly shuffling a pool of 10 fingers using the randperm function in MATLAB (a reconstruction is sketched below).

3) Action: This instruction consisted of a green screen. After noticing this

instruction, the subjects were expected to move the finger that was determined

during the “Get ready” instruction. The subjects either physically moved the

finger or imagined moving the finger after instruction.

4) Rest: This instruction consisted of a black screen. The subjects were

expected to relax their fingers after noticing this instruction.

The total recording session was split into 50 cycles or trials. In each trial,

the first 5 seconds were used for the physical/imaginary instruction. Each “Get ready”, “Action”, and “Rest” instruction was 2 seconds long, as illustrated in

Fig. 3. During each session, 6 breaks were included to maintain participants’

concentration.
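
The cue schedule above can be reconstructed roughly as follows. This is a hypothetical sketch: the thesis states only that randi and randperm were used, so the balanced 25/25 split via randperm is an assumption.

% Hypothetical reconstruction of the cue randomization in MATLAB.
nCycles = 50;                               % 50 cycles of 10 finger cues each
cycleType = [zeros(25,1); ones(25,1)];      % 0 = physical, 1 = imaginary
cycleType = cycleType(randperm(nCycles));   % random, balanced cycle order
fingerOrder = zeros(nCycles, 10);
for c = 1:nCycles
    fingerOrder(c, :) = randperm(10);       % shuffle the pool of 10 fingers
end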

Figure 2.3. Timeline for Trials in MI Experiment

2.3.3 Signal Processing

All signals were passed through a digital notch filter with a stop band of 58-

62 Hz to remove the 60 Hz noise from power lines. All signals were also passed

through a digital 0.5–110 Hz bandpass filter to remove DC offset and high-

frequency noise [11].
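
A minimal sketch of this preprocessing chain is shown below, assuming zero-phase IIR filters from the MATLAB Signal Processing Toolbox; the filter family and order are not stated in the thesis, so the designs here are assumptions. The last line anticipates the 1.2 s epoch extraction described in the next paragraph.

% Sketch of the preprocessing chain (assumed IIR designs and orders).
fs = 2500;                                     % sampling rate, samples/s
notch = designfilt('bandstopiir', 'FilterOrder', 4, ...
    'HalfPowerFrequency1', 58, 'HalfPowerFrequency2', 62, 'SampleRate', fs);
bp = designfilt('bandpassiir', 'FilterOrder', 4, ...
    'HalfPowerFrequency1', 0.5, 'HalfPowerFrequency2', 110, 'SampleRate', fs);
x = filtfilt(notch, x);                        % x: one raw channel; remove 58-62 Hz
x = filtfilt(bp, x);                           % 0.5-110 Hz bandpass
epoch = x(round(2*fs)+1 : round(3.2*fs));      % 1.2 s MI epoch (t = 2 to 3.2 s)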

A 1.2 s epoch of data corresponding to MI of the index fingers was extracted

from each trial, starting at t = 2 s and ending at t = 3.2 s, to avoid the visual response to the cue. Figures 4, 5, 6, and 7 illustrate the recorded tEEG signals in

time and frequency domain. Although there were 8 tEEG channels, we only

illustrate channel 1 data which correspond to C3 region in Fig. 2. The left

columns in Fig. 4, 5, 6, and 7 correspond to the tEEG data collected during left

finger movements while the right columns correspond to the tEEG data

collected during the right finger movements. The first, second, third, fourth, and

fifth rows correspond to little finger, ring finger, middle finger, index finger, and
thumb, respectively. Fig. 4 and Fig. 5 plot the signals in time domain while Fig.

6 and Fig. 7 plot the signals in frequency domain. The solid black lines in Fig. 4,

5, 6, and 7 are average values in each case. Fig. 4 and Fig. 6 correspond to the

physical movement data while Fig. 5 and Fig. 7 correspond to the imaginary

movement data. There are 22 epochs in the imaginary case and 27 epochs in

the physical case. Manual classification of signals into left hand movement and

right hand movement by observing the signals in either time domain or

frequency domain is challenging. We resort to using an artificial neural network

for this classification.

CNNs have been widely implemented in classifying images into different

classes. To analyze the tEEG and EEG data using a CNN, we converted our

data to images. We applied the CNN classification algorithm to only imaginary

data. In this work, we only used C3 (channel 1, C3 tEEG and channel 2, C3

EEG) and FC2 (channel 15, FC2 tEEG and channel 16, FC2 EEG) signals. To

prepare each channel data for CNN, we evaluated the spectrograms of all

extracted data epochs. These imaginary action spectrograms were denoted

as Pi−spec , where i ranged from 1 to 22. We also computed the spectrograms of

the rest periods which corresponded to the time segment from t=0 s to t=1.2 s.

The rest period periodograms were averaged and the average was denoted as

Prest−avg. The action and average rest periodograms were evaluated for both

right and left index fingers for each of the 16 channels. We then computed the final

data matrix as
Pi = Pi−spec − Prest−avg (1)

which is the modified imaginary action periodogram. These periodograms were

converted into images and fed into a CNN.
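
For one channel and one epoch, Eq. (1) can be sketched as below. The thesis does not state its spectrogram parameters (its images were 129 × 7), so the 256-sample window, 50% overlap, and 256-point FFT here are assumptions; they yield 129 frequency bins.

% Sketch of Eq. (1) for one channel (assumed spectrogram parameters).
fs = 2500;                       % sampling rate, samples/s
% epoch, restEpoch: 1.2 s action and rest segments of one channel
[~, ~, ~, P_spec] = spectrogram(epoch,     256, 128, 256, fs);  % MI epoch PSD
[~, ~, ~, P_rest] = spectrogram(restEpoch, 256, 128, 256, fs);  % one rest epoch
P_rest_avg = P_rest;             % in practice, average over all rest epochs
P_i = P_spec - P_rest_avg;       % Eq. (1): baseline-corrected periodogram
img = mat2gray(P_i);             % rescale to a grayscale image for the CNN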

Figure 2.4. Time-domain Channel 1 Data Corresponding to Physical Movement.

(The left and right columns represent the data associated with left and right

fingers, respectively. The rows from top to bottom correspond to little finger, ring

finger, middle finger, index finger, and thumb)

Figure 2.5. Time-domain Channel 1 Data Corresponding to Imaginary

Movement. (The left and right columns represent the data associated with left

and right fingers, respectively. The rows from top to bottom correspond to little

finger, ring finger, middle finger, index finger, and thumb)

Figure 2.6. Frequency-domain Channel 1 Data Corresponding to Physical

Movement. (The left and right columns represent the data associated with left

and right fingers, respectively. The rows from top to bottom correspond to little

finger, ring finger, middle finger, index finger, and thumb)

Figure 2.7. Frequency-domain Channel 1 Data Corresponding to Imaginary

Movement. (The left and right columns represent the data associated with left

and right fingers, respectively. The rows from top to bottom correspond to little

finger, ring finger, middle finger, index finger, and thumb)

2.3.4 Deep Learning

A CNN is an artificial neural network which can have many layers for deep learning from data. It has the ability to detect hidden patterns in input data such as medical images and audio signals. A CNN has one input layer, customizable hidden layers, and one output layer. The hidden layers in a CNN can be further categorized into two types:

1. feature learning layers

2. classification layers.

These layers are shown in Fig. 8. This CNN architecture was previously used

in [20], [24]. When the number of hidden layers is more than 2, a CNN is called

a deep network. The feature learning layers in our CNN have three types of

layers. They are:

1. convolutional layer

2. activation layer

3. pooling layer.

Figure 2.8 CNN Architecture

The convolutional layers in a CNN consist of weights and biases. The

weights and biases are assigned initial values. These values are then learned

and updated during the training phase using labeled data. The prime operation

involved in a CNN algorithm is convolution which is given as

S(i, j) = (K ∗ I)(i, j) = Σ_m Σ_n I(i − m, j − n) K(m, n)     (2) [25]

where I(i, j) is the input to a convolution function, K(i, j) is the filter impulse

response, and S(i, j) is the convolution output. The operation is performed by a

multidimensional filter which is a linear time-invariant system. The elements of

the matrix K are also called filter weights. These filter weights are the values

learned during the training phase. The matrix K is also referred to as the filter

kernel. In this work, the CNN was implemented using the Deep Learning Toolbox in MATLAB. All input data matrices, i.e., the spectrogram matrices I(i, j), were stored in the 129 × 7 grayscale image format and fed into the CNN. For feature learning

layers, our network architecture used and repeated the three types of layers.

The layers in sequence were:

1. Convolutional Layer 1

2. Activation Layer 1

3. Pooling Layer 1

4. Convolutional Layer 2

5. Activation Layer 2

6. Pooling Layer 2

7. Convolutional Layer 3

8. Activation Layer 3

9. Pooling Layer 3

Convolution layers, characterized by their filter sizes and counts, were used to highlight different data features with the corresponding filter kernels. In Convolutional Layer 1, the network first performed the convolution between the input layer I and the first kernel K of size m × n. We used 50 filters of size 50 × 2 in the first layer. Convolutional Layer 1 converted the size of the neurons from 129 × 7 to 129 × 7 × 50.

The output of Convolutional Layer 1 was applied to Activation Layer 1. We

used the rectified linear unit (ReLU) as the activation function in this

work. The purpose of the activation function is to add non-linearity in the

algorithm. For any variable x, the ReLU function is computed as R(x) =

max(0, x). Activation Layer 1 does not change the size of the neurons. The first

ReLU activation layer was followed by Pooling Layer 1. Max, sum, and average

pooling functions are commonly employed in a CNN. We used max pooling

functions in this work. Max pooling layers reduce the number of neurons by the

maximum operation. We used the pooling function to reduce each 2 × 2 matrix

of neurons to 1 neuron. As shown in Fig. 8, the pooling layer works

independently on each depth slice. Thus, the output of Pooling Layer 1 was of

size 64×4×50. The pooling layer was followed by Convolutional Layer 2, which

consisted of 64 filters. The output neurons were of size 64 × 4 × 64. The second

convolutional layer was then followed by Activation Layer 2 and Pooling Layer

2. The output neurons of Pooling Layer 2 were of size 32 × 2 × 64. The second

max pooling layer was followed by Convolutional Layer 3 consisting of 128 filters,

which converted the size of the neurons to 32 × 2 × 128. Convolutional Layer 3

was followed by Activation Layer 3, and Pooling Layer 3. The outputs of the

third pooling layer were of size 16 × 1 × 128. These neurons were fed to the

classification layers. As the neurons passed through the hidden layers, their length and width were reduced while the depth was increased, so the network progressively acquired a bipyramidal structure.

The classification layer consisted of a flattening layer, a fully connected

layer, and a softmax layer as illustrated in Fig. 8. The flattening layer converted

the output of the final pooling layer to a vector. The vector was then sent to the

fully connected layer to combine all the extracted features that were learned

from the preceding layers to identify the larger patterns. We classified the data

into 2 classes, which were right index and left index MI. Softmax normalized the

output from the fully connected layer and evaluated the probabilities of the two

classes for each input data set. Each output element of the softmax layer

corresponds to probability of a class, which is computed as

softmax(y_i) = e^{y_i} / Σ_{j=1}^{C} e^{y_j}     (3)

where y_i is the ith element at the input of the softmax layer and C is the number

of classes. The class with the highest probability in the output of the softmax

layer is deemed the correct class [26].
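
The layer stack above maps directly onto a MATLAB Deep Learning Toolbox definition; a minimal sketch follows. The first filter size (50 × 2) and the filter counts (50, 64, 128) are taken from the text; the 'same' padding and the reuse of the 50 × 2 size in the deeper convolutions are assumptions.

% Minimal sketch of the 129 x 7 spectrogram-image CNN described above.
layers = [
    imageInputLayer([129 7 1])                         % grayscale spectrogram
    convolution2dLayer([50 2], 50, 'Padding', 'same')  % Conv 1: 50 filters, 50 x 2
    reluLayer                                          % Activation 1: ReLU
    maxPooling2dLayer(2, 'Stride', 2)                  % Pool 1: 2 x 2 max pooling
    convolution2dLayer([50 2], 64, 'Padding', 'same')  % Conv 2: 64 filters (size assumed)
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer([50 2], 128, 'Padding', 'same') % Conv 3: 128 filters (size assumed)
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(2)                             % 2 classes: left/right index MI
    softmaxLayer
    classificationLayer];
% analyzeNetwork(layers) reports the per-layer activation sizes.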

2.4 Result

Fig. 9 shows the average training accuracies of finger classification by

employing the CNN algorithm. We labeled the two classification classes as right and left index finger MI. For each training trial, we used a randomly selected 50% of the data for training and the remaining data for validation. The CNN algorithm was able to classify right and left index movements in 5 out of 7 novice subjects.

Two subjects failed to reach 70% accuracy, which may be attributed to corrupted data. The classification was performed using signals captured

by electrodes located at C3 and FC2. For each set of data, the training process

was repeated 50 times and the average of the training accuracies was

computed. For all valid subjects, the training accuracies from tEEG were higher than those from EEG. The highest accuracies achieved with tEEG and EEG signals were 96.94% and 94.94%, respectively.

Figure 2.9. Training Classification Accuracy of Channels C3 and FC2. (Solid lines represent tEEG data and dotted lines represent EEG data; red indicates the C3 electrode while purple indicates the FC2 electrode.)

Table 2.1 shows the mean training accuracies of all 5 valid subjects using tEEG and EEG signals as inputs. The classification accuracies reached an average of 86.69% with standard deviation SD = 7.30 for tEEG signals and 81.77% with SD = 6.90 for EEG signals. A paired-sample t-test with a significance level of alpha = 0.05 was also used to compare the mean accuracies of tEEG and EEG. The purpose of the test was to determine whether there was statistical evidence that the average tEEG accuracy and the average EEG accuracy were different [27]. The test resulted in a p-value of 0.004046. Since the p-value is less than 0.05, we can conclude that the training accuracy obtained with tEEG signals was statistically significantly greater than that obtained with EEG signals.
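
This comparison corresponds to a paired-sample t-test of the matched accuracy vectors, e.g. (a sketch; acc_tEEG and acc_EEG are assumed N x 1 vectors of the paired accuracies):

% Paired-sample t-test on matched accuracy vectors (assumed variables).
[h, p] = ttest(acc_tEEG, acc_EEG, 'Alpha', 0.05);
% h = 1 rejects equal means at the 0.05 level; p is the reported p-value.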

Table 2.1. Training Accuracy Results for MI Index Finger

Signal type    N     Mean (%)    SD
tEEG           10    86.69       7.30
EEG            10    81.77       6.90

2.5 Conclusion

tEEG and EEG data were acquired from healthy participants. The

spectrograms of the data epochs corresponding to imaginary index finger

movements were computed. The spectrograms for the data epochs acquired

during the rest phase were also computed and averaged across many trials.

The average rest phase spectrograms were subtracted from the imaginary

movement spectrograms and used as CNN inputs. The CNN with 3 convolutional layers, 3 ReLU activation layers, and 3 max pooling layers was able to classify the tEEG and EEG signals into left and right fingers. The average

accuracy obtained with tEEG was significantly higher than the average accuracy

with EEG.

List of References

[1] N. Robinson, K. P. Thomas, and A. P. Vinod, “Neurophysiological

predictors and spectro-spatial discriminative features for enhancing SMR-BCI,”

J Neural Eng, vol. 15, no. 6, p. 066032, Oct. 2018, doi: 10.1088/1741-

2552/AAE597.

[2] S. M. Hosni, S. B. Borgheai, J. McLinden, and Y. Shahriari, “An fNIRS-

Based Motor Imagery BCI for ALS: A Subject-Specific Data-Driven Approach,”

IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 28,

no. 12, pp. 3063–3073, Dec. 2020, doi: 10.1109/TNSRE.2020.3038717.

[3] N. Tibrewal, N. Leeuwis, and M. Alimardani, “Classification of motor

imagery EEG using deep learning increases performance in inefficient BCI

users,” PLoS One, vol. 17, no. 7, Jul. 2022, doi:

10.1371/JOURNAL.PONE.0268880.

[4] R. Alazrai, H. Alwanni, and M. I. Daoud, “EEG-based BCI system for

decoding finger movements within the same hand,” Neurosci Lett, vol. 698, pp.

113–120, Apr. 2019, doi: 10.1016/J.NEULET.2018.12.045.

[5] R. Chin-Hao Chen et al., “Decoding ECoG signal into 3D hand translation

using deep learning,” J Neural Eng, vol. 19, no. 2, p. 026023, Mar. 2022, doi:

10.1088/1741-2552/AC5D69.

[6] L. Yao et al., “A Stimulus-Independent Hybrid BCI Based on Motor

Imagery and Somatosensory Attentional Orientation,” IEEE Transactions on

Neural Systems and Rehabilitation Engineering, vol. 25, no. 9, pp. 1674–1682,

Sep. 2017, doi: 10.1109/TNSRE.2017.2684084.

[7] R. Foong et al., “Assessment of the Efficacy of EEG-Based MI-BCI with

Visual Feedback and EEG Correlates of Mental Fatigue for Upper-Limb Stroke

Rehabilitation,” IEEE Trans Biomed Eng, vol. 67, no. 3, pp. 786–795, Mar. 2020,

doi: 10.1109/TBME.2019.2921198.

[8] R. Leeb, H. Sagha, R. Chavarriaga, and J. D. R. Millán, “Multimodal

fusion of muscle and brain signals for a Hybrid-BCI,” 2010 Annual International

Conference of the IEEE Engineering in Medicine and Biology Society, EMBC’10,

pp. 4343–4346, 2010, doi: 10.1109/IEMBS.2010.5626233.

[9] C. Toole, “Source Localization of Tripolar

Electroencephalography,” Open Access Dissertations, Jan. 2018, doi:

10.23860/diss-toole-christopher-2018.

[10] I. Daly, N. Nicolaou, S. J. Nasuto, and K. Warwick, “Automated artifact

removal from the electroencephalogram: a comparative study,” Clin EEG

Neurosci, vol. 44, no. 4, pp. 291–306, Oct. 2013, doi:

10.1177/1550059413476485.

[11] W. G. Besio et al., “High-Frequency Oscillations Recorded on the Scalp

of Patients With Epilepsy Using Tripolar Concentric Ring Electrodes,” IEEE J

Transl Eng Health Med, vol. 2, 2014, doi: 10.1109/JTEHM.2014.2332994.

[12] K. Koka and W. G. Besio, “Improvement of spatial selectivity and

decrease of mutual information of tri-polar concentric ring electrodes,” J

Neurosci Methods, vol. 165, no. 2, pp. 216–222, Sep. 2007, doi:

10.1016/J.JNEUMETH.2007.06.007.

[13] X. Liu and W. Besio, “Comparisons of the Spatial Resolution between

Disc and Tripolar Concentric Ring Electrodes”, Accessed: Feb. 14, 2022.

[Online]. Available:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.682.1705

[14] A. J. Izenman, “Linear Discriminant Analysis,” pp. 237–280, 2013, doi:

10.1007/978-0-387-78189-1_8.

[15] M. H. Moradi, “A novel metaheuristic optimization method for robust

spatial filter designation and classification of speech imagery tasks in EEG

Brain-Computer Interface,” Artificial Intelligence-Based Brain-Computer

Interface, pp. 237–259, Jan. 2022, doi: 10.1016/B978-0-323-91197-9.00009-6.

[16] S. S. Nisha and M. N. Meeral, “Applications of deep learning in

biomedical engineering,” Handbook of Deep Learning in Biomedical

Engineering: Techniques and Applications, pp. 245–270, Jan. 2021, doi:

10.1016/B978-0-12-823014-5.00008-9.

[17] H. Cecotti and A. Gräser, “Convolutional neural networks for P300

detection with application to brain-computer interfaces,” IEEE Trans Pattern

Anal Mach Intell, vol. 33, no. 3, pp. 433–445, 2011, doi:

10.1109/TPAMI.2010.125.

[18] M. M. al Rahhal, Y. Bazi, M. al Zuair, E. Othman, and B. BenJdira,

“Convolutional Neural Networks for Electrocardiogram Classification,” J Med

Biol Eng, vol. 38, no. 6, pp. 1014–1025, Dec. 2018, doi: 10.1007/S40846-018-

0389-7/TABLES/5.

[19] P. Thodoroff, J. Pineau, and A. Lim, “Learning Robust Features using

Deep Learning for Automatic Seizure Detection,” 2016.

[20] K. Adhikari, T. Pham, J. Hall, A. Rotenberg, and W. G. Besio, “Language

Mapping using tEEG and EEG Data with Convolutional Neural Networks,”

Proceedings of the Annual International Conference of the IEEE Engineering in

Medicine and Biology Society, EMBS, vol. 2022-July, pp. 4060–4063, 2022, doi:

10.1109/EMBC48229.2022.9871190.

[21] J. R. Stieger, S. A. Engel, D. Suma, and B. He, “Benefits of deep learning

classification of continuous noninvasive brain–computer interface control,” J

Neural Eng, vol. 18, no. 4, Jun. 2021, doi: 10.1088/1741-2552/AC0584.

[22] W. Besio, R. Aakula, K. Koka, and W. Dai, “Development of a Tri-polar

concentric ring electrode for acquiring accurate laplacian body surface

potentials,” Ann Biomed Eng, vol. 34, no. 3, pp. 426–435, Feb. 2006, doi:

10.1007/S10439-005-9054-8/FIGURES/10.

[23] W. G. Besio, K. Koka, R. Aakula, and W. Dai, “Tri-polar concentric ring

electrode development for laplacian electroencephalography,” IEEE Trans

Biomed Eng, vol. 53, no. 5, pp. 926–933, May 2006, doi:

10.1109/TBME.2005.863887.

[24] K. Adhikari, “Shift-Invariant Structure-Imposed Convolutional Neural

Networks for Direction of Arrival Estimation,” 2022 IEEE World AI IoT Congress,

AIIoT 2022, pp. 179–186, 2022, doi: 10.1109/AIIOT54504.2022.9817278.

[25] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press,

2016. Accessed: Feb. 11, 2023. [Online]. Available:

http://www.deeplearningbook.org

[26] “Classification output layer - MATLAB classificationLayer.”

https://www.mathworks.com/help/deeplearning/ref/classificationlayer.html

(accessed Feb. 11, 2023).

[27] L. Ott and M. Longnecker, “An introduction to statistical methods & data

analysis,” p. 11.

