arXiv:1908.02252v2 [cs.LG] 31 Oct 2019

Abstract—Classifying limb movements using brain activity is an important task in Brain-Computer Interfaces (BCI) that has been successfully used in multiple application domains, ranging [...] detailed analysis of brain activity through the different stages of stimuli perception and hand movement, and demonstrate that EEG information flow through the sensor pairs is in correspondence with the known and expected neurological function of the brain.

[...] analyzing EEG signals [5]. Additionally, such technologies allow patients with disabilities to control the movements of artificial limbs or exoskeletons. In particular, in order to [...]

II. RELATED WORK

The majority of the related work on hand movement classification has focused on intra-subject validation. Employing this approach often stems from the fact that distinguishing between L/R hand movements can be a challenging task due to the highly subject-dependent nature of brain activities in the visual and motor cortex. Several conventional machine learning methods have been employed using this approach; for instance, in [10], an average accuracy of 64.02% was reported using a Common Spatial Patterns (CSP) approach for 10 subjects. In [11], an average accuracy of 88.69% was reported using QDA, while a rough set-based classifier was used in [12] and [13], reporting average accuracies of 60% and 68%, respectively. However, the aforementioned traditional classifiers often cannot model the non-linearities observed in high-dimensional multi-channel EEG and feature-sets extracted from the data. This especially becomes an issue when attempting to model cross-subject relationships within the dataset.

Training a single model capable of learning to classify hand movements from EEG and generalizing the learned input-output relationships to the entire dataset (cross-subject) is quite challenging. The results presented in [14] showed that the average accuracy dropped from 87% to chance level (50%) when utilizing a cross-subject scheme instead of an intra-subject one, using the proposed Maximum Discernibility Algorithm (MDA). Moreover, a large-scale synchronization analysis using Phase Locking Value (PLV) in [15] defined a criterion that could successfully distinguish between L/R hand movement and obtained an average accuracy of 78.95%. Other than conventional machine learning classifiers and statistical analysis, deep learning techniques have been employed for this task. For instance, in one of the few deep learning solutions, a set of time and frequency domain features were extracted and fed to an Artificial Neural Network (ANN) [16], achieving an accuracy of 68%.

In addition to the task of hand movement classification, the task of imagery classification focuses on mental activity when imagining left and right hand movements as opposed to physically performing them. In this context, several classical machine learning methods such as Logistic Regression [17], k-Nearest Neighbor (kNN) [18], and Naïve Bayes (NB) [18] have been used with intra-subject validation. In addition to classical machine learning methods, several deep learning solutions have also been utilized. For example, in [19], [20], a Convolutional Neural Network (CNN) was used. An LSTM-based model was proposed in [21], outperforming the state-of-the-art solutions including methods based on other deep networks. In cross-subject validation, a very high accuracy was achieved using a parallel or cascade combination of CNN and LSTM networks [22]. It should be mentioned that imagery EEG classification naturally uses a dataset separate from the movement dataset, and hence, a direct comparison between the two tasks is not valid. However, a review of the methods used for imagery classification can provide further inspiration for new solutions for movement classification.

Table I summarizes the main works in the area and the characteristics of the proposed solutions in terms of method and classification tasks. The table also includes information about the datasets and validation protocols and schemes considered by these methods for performance assessment. The solutions are sorted according to their publication date. For comparison, the characteristics of our method proposed in this paper are also included in Table I. The table points to two clear areas in the literature that merit additional investigation and research:
• First, most of the works have performed intra-subject validation, while the more challenging task of cross-subject validation (which is also more indicative of generalization) has been largely disregarded.
• Second, most of the prior works have utilized classical machine learning models, indicating room for improvement using more advanced deep learning techniques.

III. PROPOSED METHOD

In this section, we present the proposed method, including the pre-processing steps, feature extraction, and the deep learning solution. In the rest of this paper, the notation used is as follows: 'a' represents a scalar, 'a' a vector, and 'A' a matrix.

A. Pre-processing

Data pre-processing was performed to reduce the negative effects of signal artifacts, including cross-talk, noise, and power-line interference. In this context, out of the 64 EEG channels available in the EEG test material, the 10 central sensors were discarded due to their non-symmetric nature [23]. The utilized sensor pairs were selected based on the topology described in [23]. Then, two filters were applied to the new 27 differential EEG channels: a notch filter removed 50 Hz power-line interference, and a band-pass filter was applied to allow the 0.5-70 Hz frequency range to pass through, thus minimizing artifacts such as noise often present outside this range [24]. Normalization of EEG amplitude was then carried out as the last step, using min-max normalization to minimize the differences in EEG amplitudes across different subjects.

B. Time and Frequency Domain Feature Extraction

Following pre-processing, a number of time and frequency domain features were extracted to be used as inputs for the proposed method. EEG is known as a non-stationary time-series signal where nonlinear features are often used for representation and classification tasks. Feature extraction was performed on 2-second segments for each trial. The effect of different segment sizes on the performance of the hand movement classification task will be studied in Section V. Time and frequency domain features were subsequently extracted from each time-step. Time-domain features included: i) mean,
TABLE I
The Main Characteristics of the Related Work
1) M-EEG denotes physical left/right hand movements, while IM-EEG denotes imagery left/right hand movements in the PhysioNet EEG Motor Movement/Imagery Dataset; 2) BCI Comp denotes the imagery dataset used in the BCI Competition; 3) Comp-IV.2a and Comp-IV.2b denote Dataset 2a and Dataset 2b of the BCI Competition IV, respectively; 4) Comp-II.III denotes Dataset III in the BCI Competition II.
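The pre-processing chain of Section III-A (a 50 Hz notch filter, a 0.5-70 Hz band-pass filter, and min-max normalization) can be sketched as below. The 160 Hz sampling rate, the Butterworth filter order, and the notch Q factor are illustrative assumptions, not values stated in the paper:

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 160.0  # sampling rate in Hz (assumed; not specified in this excerpt)

def preprocess(eeg):
    """eeg: (n_channels, n_samples) array of differential EEG channels."""
    # Notch filter to remove 50 Hz power-line interference.
    b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=FS)
    eeg = filtfilt(b_n, a_n, eeg, axis=-1)
    # Band-pass filter keeping the 0.5-70 Hz range.
    b_bp, a_bp = butter(N=4, Wn=[0.5, 70.0], btype="bandpass", fs=FS)
    eeg = filtfilt(b_bp, a_bp, eeg, axis=-1)
    # Min-max normalization to reduce amplitude differences across subjects.
    lo, hi = eeg.min(), eeg.max()
    return (eeg - lo) / (hi - lo)
```

Zero-phase filtering (`filtfilt`) is used here so the filters do not shift the EEG in time; the paper does not state which filter implementation was used.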
Fig. 3. An overview of the EEG data, the movement segments (the 2-second-long LSTM sequence consists of 7 time steps with 50% overlap between adjacent windows), and the sliding window used during training/classification is illustrated.

B. LSTM Hyper-Parameters Setting

A number of hyper-parameters for the network were explored and tuned to achieve the best results for our proposed model. Notably, these hyper-parameters include: recurrent depth, batch size, number of training epochs, LSTM hidden layer size, dropout rates applied after the input layer and the following three stacked LSTM layers (D0, D1, D2, D3), and the weight matrix L2 regularization coefficient of each LSTM layer. Additionally, some hyper-parameters were tuned for the stochastic Adam optimizer [32], such as the learning rate (lr) and the exponential decay rates for the first and second moment estimates (β1 and β2). The optimum values for these parameters are presented in Table III. A different set of parameters was assigned for each validation scheme (cross-subject and intra-subject) to maximize performance. A binary cross-entropy loss function L = -[y log(p) + (1 - y) log(1 - p)] was employed for training.

C. Validation Protocols and Benchmarking

To rigorously evaluate the performance of our proposed solution, we utilized both intra-subject and cross-subject validation schemes. Both schemes used 10-fold cross-validation, where no overlap existed between the training and testing segments at each fold, as previous studies with such overlaps have been shown to yield very high accuracies, as expected [22]. True Positive (TP), False Negative (FN), False Positive (FP), and True Negative (TN) counts were used to calculate the performance:

Accuracy = (TP + TN) / (TP + TN + FP + FN).    (12)

1) Solutions for Cross-Subject Scheme: Solutions from related works include PLV [15] and ANN [16], which have been discussed in Section II. As studies performing cross-subject validation on this challenging dataset were not very common, we implemented a number of other techniques, including the classical machine learning and deep learning solutions that have been successfully implemented in the imagery task (as mentioned in the related work), to better compare with our proposed solution. The conventional benchmarks include SVM [18], Naïve Bayes [18], decision tree [33], logistic regression [17], and random forest [34]. The SVM used an 8th-degree polynomial kernel and the random forest used 30 estimators up to a depth of 2. These parameters were tuned empirically in order to achieve the best results. The benchmarking solutions included deep learning methods as well. First, we used a 3-layer 2D-CNN, accepting a 2D matrix of 297 features as inputs. The network had a kernel size of 3×3, and feature maps of 32, 64, and 128, respectively, for the first, second, and third convolutional layers. A VGG-16 CNN was also used for benchmarking. This model was pre-trained on ImageNet [35] and fine-tuned for our application. This model was presented with inputs in the form of 3D matrices, obtained by re-sizing the feature matrices to 180×150×1 using linear interpolation. The output of the VGG-16 convolution layers was followed by 2 fully connected layers that used ReLU activation and a final output layer with sigmoid activation for estimating the class probabilities. Lastly, an LSTM network without the attention mechanism was also used for benchmarking. In this model, similar hyper-parameters as our proposed attention-based method were used (see Table III).

2) Solutions for Intra-Subject Scheme: As discussed earlier, the main goal of this work is to tackle the more challenging task of cross-subject generalization. However, to further test our method and compare to the state-of-the-art, we compared
TABLE IV
Effect of Segment Size

Size (seconds):  0.25       0.5        0.75      1.0       1.25      1.5       1.75      2.0
Accuracy ± SD:   82.18±2.1  80.23±1.8  79.1±1.5  80.3±1.9  80.8±1.7  81.1±0.9  81.2±2.0  83.2±1.2
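Fig. 3 describes each 2-second segment as an LSTM sequence of 7 time steps with 50% overlap between adjacent windows. A minimal sketch of that windowing, assuming a 160 Hz sampling rate (the rate is not stated in this excerpt):

```python
import numpy as np

def sliding_windows(segment, win_len, hop):
    """Split a 1-D segment into overlapping windows of win_len samples, stepped by hop."""
    n = 1 + (len(segment) - win_len) // hop
    return np.stack([segment[i * hop : i * hop + win_len] for i in range(n)])

# Example: a 2-second segment at an assumed 160 Hz sampling rate.
fs = 160
segment = np.arange(2 * fs)                                   # 320 samples
win = fs // 2                                                 # 0.5 s windows ...
steps = sliding_windows(segment, win_len=win, hop=win // 2)   # ... with 50% overlap
# steps.shape -> (7, 80): the 7 LSTM time steps shown in Fig. 3
```

With a hop of half the window length, a 2 s segment yields exactly (2.0 - 0.5) / 0.25 + 1 = 7 windows, matching the figure.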
TABLE V. Comparison of Different Methods Using the Cross-Subject Scheme
TABLE VI. Comparison of Different Methods Using the Intra-Subject Scheme
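The binary cross-entropy loss and the Adam first/second moment estimates used for training (Section on LSTM hyper-parameters) can be written out in NumPy as below; the hyper-parameter values shown are common defaults, not the tuned values of Table III:

```python
import numpy as np

def bce(y, p, eps=1e-12):
    """Binary cross-entropy L = -[y log(p) + (1 - y) log(1 - p)]."""
    p = np.clip(p, eps, 1.0 - eps)  # guard against log(0)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are the first and second moment estimates,
    beta1 and beta2 their exponential decay rates, t the (1-based) step count."""
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** t)   # bias correction
    v_hat = v / (1.0 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

In practice the optimizer is provided by the deep learning framework; the sketch only makes the role of lr, β1, and β2 concrete.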
[Fig. 4: ROC curves (true positive rate vs. false positive rate). Legend: LSTM+Attention (area = 0.908), LSTM (area = 0.866), 2D-CNN (area = 0.635).]
Fig. 4. The ROC curves and corresponding AUCs are presented for our proposed LSTM network and the top-3 benchmarking solutions.

[Fig. 5: (A) box-plots of the rank-1 to rank-3 features (F7-F8 Skewness, F7-F8 Mean, F7-F8 Area) for the R and L classes; (B) feature importance scores for the features grouped by channels.]
Fig. 5. An overview of the extracted features is presented. (A) shows the box-plots of the top three features. (B) shows the feature importance scores measured by RF in addition to the significant features between the two classes, measured by a non-parametric t-test.
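Fig. 5(B) combines random-forest importance scores with a non-parametric significance test between the two classes. A sketch of the significance-screening step, using a Mann-Whitney U test as one plausible choice of non-parametric test (the data below are synthetic, not the paper's features):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def rank_features(X_left, X_right):
    """Rank features by Mann-Whitney U p-value between the two movement classes.

    X_left, X_right: (n_trials, n_features) feature matrices for L and R trials.
    Returns (indices sorted from most to least discriminative, p-values).
    """
    pvals = np.array([
        mannwhitneyu(X_left[:, j], X_right[:, j], alternative="two-sided").pvalue
        for j in range(X_left.shape[1])
    ])
    return np.argsort(pvals), pvals

# Synthetic example: feature 0 differs between classes, feature 1 does not.
rng = np.random.default_rng(0)
L = np.column_stack([rng.normal(2.0, 1.0, 50), rng.normal(0.0, 1.0, 50)])
R = np.column_stack([rng.normal(0.0, 1.0, 50), rng.normal(0.0, 1.0, 50)])
order, pvals = rank_features(L, R)
```

The random-forest importance ranking would be computed separately (e.g. from a fitted ensemble's per-feature impurity decreases); only the statistical screening is sketched here.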
AF8), parietal-occipital (PO3-PO4 and PO7-PO8), as well as occipital (O1-O2) lobes displayed the strongest features. This phenomenon is consistent with previous studies reporting that the visual cortex plays an important role in receiving and processing visual stimuli [39], [40]. The visual cortex occupies approximately 20% of the cerebral cortex and is located in the occipital, parietal-posterior, and temporal lobes [41]. Moreover, the information flow in the parietal-occipital and occipital regions was also consistent with the activities reported in [41].

The P300 wave is a type of Event-Related Potential (ERP) that is believed to be dominant in decision-making [42], and usually occurs within 250 ms to 500 ms of the onset of the visual stimulus [43]. Similar to the P300 wave, Simple Reaction Time (SRT) represents the delay between visual stimulus and response, during which the usage of the sensor pairs at t = 0.25s and t = 0.50s shows brain activity. This is consistent with [44], which reports an average and standard deviation of 231 ± 27 ms for SRT. During SRT, the previously highly informative sensor pairs in the parietal-occipital and occipital lobes gradually disappeared, and at t = 0.25s sensor pairs in the frontal-temporal (FT7-FT8) and temporal-parietal (TP7-TP8) lobes became more prevalent. This phenomenon is consistent with previous studies stating that after the receipt of a visual stimulus in the visual cortex, visual information is transferred through two disparate streams, notably the ventral stream and the dorsal stream. The ventral stream eventually reaches the temporal cortex, commonly known for image recognition [45]. The visual stimulus is therefore processed to make the association between the experiment instructions and performing the L/R hand movement with the help of the relevant memory.

Moreover, at t = 0.25s, sensor pairs in the central-parietal (CP3-CP4 and CP5-CP6) lobes, as well as all the sensor pairs in the parietal lobe (P1-P2, P3-P4 and P5-P6), became more informative compared to t = 0s. This activity is also consistent with the phenomenon reported in previous studies stating that the dorsal stream eventually reaches the parietal cortex, which contains action-relevant information [45]. This is also supported by previous studies claiming that the parietal lobe contributes predominantly to visual imagery and episodic memory [46].

At t = 0.5s, the active effect of sensor pairs in the parietal lobe diminished, which represents the completion of information flow in the dorsal stream. Consequently, sensor pairs in the central-parietal lobe (CP1-CP2, CP3-CP4 and CP5-CP6) became even more informative. This observed pattern is also consistent with previous related work claiming that the parietal lobe also contributes to hand and upper limb control and eye movements with visual information [47].

At t = 0.75s, sensor pairs in the temporal (T7-T8 and T9-T10), frontal (F7-F8) and anterior-frontal (AF7-AF8) lobes reached their highest degree of use, thus having the highest discriminability for L/R hand movements. This phenomenon is consistent with the fact, explained by previous studies, that the frontal lobe contains the pre-motor cortex and primary motor cortex (M1), where the pre-motor cortex first concatenates information from the parietal and frontal lobes, which is then delivered to M1. M1 is believed to be a generator of movement-specific commands [48]. Therefore, the classification rate reached its highest peak because of the completed information streams in the occipital-temporal and occipital-parietal-frontal pathways [45]. Then, due to the high usage of sensor pairs in M1, the classification rate remained relatively stable until the movement ended [48].

Lastly, from t = 0.75s to t = 2.0s, the sensor pairs in the temporal and frontal lobes maintained the highest degree of usage, while those in the parietal lobe were much less frequently used. The other sensor pairs in the occipital lobe barely contributed to the classification task.
VI. SUMMARY AND FUTURE WORK

A novel solution for the classification of L/R hand movements from EEG signals was proposed. First, the negative effects of signal artifacts were reduced, improving the quality of the data. In the next step, a wide range of time and frequency domain features were extracted and used as inputs to an attention-based LSTM network. After studying the optimal LSTM hyper-parameter settings, extensive experiments were conducted with the EEG Movement Database over a large number of subjects (103). The performance evaluation with both intra-subject and cross-subject validation schemes showed very effective results with a high generalization capability, demonstrating the superiority of the proposed approach when compared to other benchmarking models and previous state-of-the-art methods. The robust performance achieved in this paper suggests that the proposed approach can be used in future research for a broad range of EEG-related classification tasks. Finally, a detailed analysis of the EEG information flow through the sensors over time is presented, reflecting the brain activity throughout the experiment.

Future work will focus on models capable of early detection or prediction of hand movements rather than classification using deep neural networks. Moreover, a possible limitation of our work is the use of hand-crafted features. As a result, in future work, feature extraction using CNNs will be explored, which may lead to a simpler and more robust solution. Lastly, to tackle the challenges in cross-subject classification, we will employ domain adaptation techniques such as Wasserstein Generative Adversarial Network Domain Adaptation [49] in order to distinguish and minimize the differences among subjects with adversarial training.

ACKNOWLEDGMENT

The Titan XP GPU used for this research was donated by the NVIDIA Corporation.

REFERENCES

[1] I. Käthner, S. C. Wriessnegger, G. R. Müller-Putz, A. Kübler, and S. Halder, "Effects of mental workload and fatigue on the p300, alpha and theta band power during operation of an ERP (p300) brain-computer interface," Biological Psychology, vol. 102, pp. 118-129, 2014.
[2] S. N. Abdulkader, A. Atia, and M.-S. M. Mostafa, "Brain computer interfacing: Applications and challenges," Egyptian Informatics Journal, vol. 16, no. 2, pp. 213-230, 2015.
[3] J. J. Daly and J. R. Wolpaw, "Brain-computer interfaces in neurological rehabilitation," The Lancet Neurology, vol. 7, no. 11, pp. 1032-1043, 2008.
[4] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767-791, 2002.
[5] M. Takahashi, K. Takeda, Y. Otaka, R. Osu, T. Hanakawa, M. Gouko, and K. Ito, "Event related desynchronization-modulated functional electrical stimulation system for stroke rehabilitation: a feasibility study," Journal of NeuroEngineering and Rehabilitation, vol. 9, no. 1, p. 56, 2012.
[6] L. R. Hochberg, M. D. Serruya, G. M. Friehs, J. A. Mukand, M. Saleh, A. H. Caplan, A. Branner, D. Chen, R. D. Penn, and J. P. Donoghue, "Neuronal ensemble control of prosthetic devices by a human with tetraplegia," Nature, vol. 442, no. 7099, p. 164, 2006.
[7] G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, H. Ramoser, A. Schlogl, B. Obermaier, and M. Pregenzer, "Current trends in Graz brain-computer interface (BCI) research," IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 2, pp. 216-219, 2000.
[8] J. Cantillo-Negrete, J. Gutierrez-Martinez, R. I. Carino-Escobar, P. Carrillo-Mora, and D. Elias-Vinas, "An approach to improve the performance of subject-independent BCIs-based on motor imagery allocating subjects by gender," Biomedical Engineering Online, vol. 13, no. 1, p. 158, 2014.
[9] F. Lotte, C. Guan, and K. K. Ang, "Comparison of designs towards a subject-independent brain-computer interface based on motor imagery," in 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4543-4546.
[10] H. Wang and D. Xu, "Comprehensive common spatial patterns with temporal structure information of EEG data: minimizing nontask related EEG component," IEEE Transactions on Biomedical Engineering, vol. 59, no. 9, pp. 2496-2505, 2012.
[11] O. D. Eva and A. M. Lazar, "Comparison of classifiers and statistical analysis for EEG signals used in brain computer interface motor task paradigm," International Journal of Advanced Research in Artificial Intelligence, vol. 4, no. 1, pp. 8-12, 2015.
[12] P. Szczuko, "Rough set-based classification of EEG signals related to real and imagery motion," in IEEE Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2016, pp. 34-39.
[13] P. Szczuko, M. Lech, and A. Czyżewski, "Comparison of classification methods for EEG signals of real and imaginary motion," in Advances in Feature Selection for Data and Pattern Recognition. Springer, 2018, pp. 227-239.
[14] P. Szczuko, "Real and imaginary motion classification based on rough set analysis of EEG signals for multimedia applications," Multimedia Tools and Applications, vol. 76, no. 24, pp. 25697-25711, 2017.
[15] A. Loboda, A. Margineanu, G. Rotariu, and A. M. Lazar, "Discrimination of EEG-based motor imagery tasks by means of a simple phase information method," International Journal of Advanced Research in Artificial Intelligence, vol. 3, no. 10, 2014.
[16] N. T. M. Huong, H. Q. Linh, and L. Q. Khai, "Classification of left/right hand movement EEG signals using event related potentials and advanced features," in International Conference on the Development of Biomedical Engineering in Vietnam. Springer, 2017, pp. 209-215.
[17] R. Tomioka, K. Aihara, and K.-R. Müller, "Logistic regression for single trial EEG classification," in Advances in Neural Information Processing Systems, 2007, pp. 1377-1384.
[18] S. Bhattacharyya, A. Khasnobish, A. Konar, D. Tibarewala, and A. K. Nagar, "Performance analysis of left/right hand movement classification from EEG signal by intelligent algorithms," in IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), 2011, pp. 1-8.
[19] Z. Tang, C. Li, and S. Sun, "Single-trial EEG classification of motor imagery using deep convolutional neural networks," Optik - International Journal for Light and Electron Optics, vol. 130, pp. 11-18, 2017.
[20] Y. R. Tabar and U. Halici, "A novel deep learning approach for classification of EEG motor imagery signals," Journal of Neural Engineering, vol. 14, no. 1, p. 016003, 2016.
[21] P. Wang, A. Jiang, X. Liu, J. Shang, and L. Zhang, "LSTM-based EEG classification in motor imagery tasks," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 11, pp. 2086-2095, 2018.
[22] D. Zhang, L. Yao, X. Zhang, S. Wang, W. Chen, and R. Boots, "EEG-based intention recognition from spatio-temporal representations via cascade and parallel convolutional recurrent neural networks," arXiv preprint arXiv:1708.06578, 2017.
[23] Y.-P. Lin, C.-H. Wang, T.-P. Jung, T.-L. Wu, S.-K. Jeng, J.-R. Duann, and J.-H. Chen, "EEG-based emotion recognition in music listening," IEEE Transactions on Biomedical Engineering, vol. 57, no. 7, pp. 1798-1806, 2010.
[24] M. H. Alomari, A. Samaha, and K. AlKamha, "Automated classification of L/R hand movement EEG signals using advanced feature extraction and machine learning," arXiv preprint arXiv:1312.2877, 2013.
[25] K. M. Tsiouris, V. C. Pezoulas, M. Zervakis, S. Konitsiotis, D. D. Koutsouris, and D. I. Fotiadis, "A long short-term memory deep learning network for the prediction of epileptic seizures using EEG signals," Computers in Biology and Medicine, vol. 99, pp. 24-37, 2018.
[26] P. Zhou, W. Shi, J. Tian, Z. Qi, B. Li, H. Hao, and B. Xu, "Attention-based bidirectional long short-term memory networks for relation classification," in Proceedings of the Association for Computational Linguistics, vol. 2, 2016, pp. 207-212.
[27] Y. Wang, M. Huang, L. Zhao et al., "Attention-based LSTM for aspect-level sentiment classification," in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2016, pp. 606-615.
[28] Z. C. Lipton, J. Berkowitz, and C. Elkan, "A critical review of recurrent neural networks for sequence learning," arXiv preprint arXiv:1506.00019, 2015.
[29] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber, "LSTM: A search space odyssey," IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2222-2232, 2017.
[30] G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, and J. R. Wolpaw, "BCI2000: a general-purpose brain-computer interface (BCI) system," IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1034-1043, 2004.
[31] A. L. Goldberger et al., "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals," Circulation, vol. 101, no. 23, pp. e215-e220, 2000.
[32] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[33] O. Aydemir and T. Kayikcioglu, "Decision tree structure based classification of EEG signals recorded during two dimensional cursor movement imagery," Journal of Neuroscience Methods, vol. 229, pp. 68-75, 2014.
[34] M. Bentlemsan, E.-T. Zemouri, D. Bouchaffra, B. Yahya-Zoubir, and K. Ferroudji, "Random forest and filter bank common spatial patterns for EEG-based motor imagery classification," in 2014 5th International Conference on Intelligent Systems, Modelling and Simulation. IEEE, 2014, pp. 235-238.
[35] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 248-255.
[36] L. Breiman, "Random forests," Machine Learning, vol. 45, no. 1, pp. 5-32, 2001.
[37] G. Forman, "An extensive empirical study of feature selection metrics for text classification," Journal of Machine Learning Research, vol. 3, no. Mar, pp. 1289-1305, 2003.
[38] F. Lotte, A. Van Langhenhove, F. Lamarche, T. Ernest, Y. Renard, B. Arnaldi, and A. Lécuyer, "Exploring large virtual environments by thoughts using a brain-computer interface based on motor imagery and high-level commands," Presence: Teleoperators and Virtual Environments, vol. 19, no. 1, pp. 54-70, 2010.
[39] S. Zeki, J. Watson, C. Lueck, K. J. Friston, C. Kennard, and R. Frackowiak, "A direct demonstration of functional specialization in human visual cortex," Journal of Neuroscience, vol. 11, no. 3, pp. 641-649, 1991.
[40] A. D. Milner and M. A. Goodale, "Two visual systems re-viewed," Neuropsychologia, vol. 46, no. 3, pp. 774-785, 2008.
[41] B. A. Wandell, S. O. Dumoulin, and A. A. Brewer, "Visual field maps in human cortex," Neuron, vol. 56, no. 2, pp. 366-383, 2007.
[42] E. Donchin, K. M. Spencer, and R. Wijesinghe, "The mental prosthesis: assessing the speed of a P300-based brain-computer interface," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 8, no. 2, pp. 174-179, 2000.
[43] S. Yang and F. Deravi, "On the usability of electroencephalographic signals for biometric recognition: A survey," IEEE Transactions on Human-Machine Systems, vol. 47, no. 6, pp. 958-969, 2017.
[44] D. L. Woods, J. M. Wyma, E. W. Yund, T. J. Herron, and B. Reed, "Factors influencing the latency of simple reaction time," Frontiers in Human Neuroscience, vol. 9, p. 131, 2015.
[45] M. A. Goodale and A. D. Milner, "Separate visual pathways for perception and action," Trends in Neurosciences, vol. 15, no. 1, pp. 20-25, 1992.
[46] T. Pflugshaupt, M. Nösberger, K. Gutbrod, K. P. Weber, M. Linnebank, and P. Brugger, "Bottom-up visual integration in the medial parietal lobe," Cerebral Cortex, vol. 26, no. 3, pp. 943-949, 2014.
[47] L. Fogassi and G. Luppino, "Motor functions of the parietal lobe," Current Opinion in Neurobiology, vol. 15, no. 6, pp. 626-631, 2005.
[48] R. P. Dum and P. L. Strick, "Motor areas in the frontal lobe of the primate," Physiology & Behavior, vol. 77, no. 4-5, pp. 677-682, 2002.
[49] Y. Luo, S.-Y. Zhang, W.-L. Zheng, and B.-L. Lu, "WGAN domain adaptation for EEG-based emotion recognition," in International Conference on Neural Information Processing. Springer, 2018, pp. 275-286.