Advances in Intelligent Systems and Computing 880
Kohei Arai
Rahul Bhatia
Supriya Kapoor Editors
Proceedings of the Future Technologies Conference (FTC) 2018
Volume 1
Advances in Intelligent Systems and Computing
Volume 880
Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl
The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of intelligent systems and intelligent computing. Virtually all disciplines, such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, and life science, are covered. The list of topics spans all the areas of modern intelligent systems and computing, such as: computational intelligence; soft computing, including neural networks, fuzzy systems, evolutionary computing, and the fusion of these paradigms; social intelligence; ambient intelligence; computational neuroscience; artificial life; virtual worlds and society; cognitive science and systems; perception and vision; DNA- and immune-based systems; self-organizing and adaptive systems; e-learning and teaching; human-centered and human-centric computing; recommender systems; intelligent control; robotics and mechatronics, including human-machine teaming; knowledge-based paradigms; learning paradigms; machine ethics; intelligent data analysis; knowledge management; intelligent agents; intelligent decision making and support; intelligent network security; trust management; interactive entertainment; and Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings
of important conferences, symposia and congresses. They cover significant recent developments in the
field, both of a foundational and applicable character. An important characteristic feature of the series is
the short publication time and world-wide distribution. This permits a rapid and broad dissemination of
research results.
Advisory Board
Chairman
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
e-mail: nikhil@isical.ac.in
Members
Rafael Bello Perez, Universidad Central “Marta Abreu” de Las Villas, Santa Clara, Cuba
e-mail: rbellop@uclv.edu.cu
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
e-mail: escorchado@usal.es
Hani Hagras, University of Essex, Colchester, UK
e-mail: hani@essex.ac.uk
László T. Kóczy, Széchenyi István University, Győr, Hungary
e-mail: koczy@sze.hu
Vladik Kreinovich, University of Texas at El Paso, El Paso, USA
e-mail: vladik@utep.edu
Chin-Teng Lin, National Chiao Tung University, Hsinchu, Taiwan
e-mail: ctlin@mail.nctu.edu.tw
Jie Lu, University of Technology, Sydney, Australia
e-mail: Jie.Lu@uts.edu.au
Patricia Melin, Tijuana Institute of Technology, Tijuana, Mexico
e-mail: epmelin@hafsamx.org
Nadia Nedjah, State University of Rio de Janeiro, Rio de Janeiro, Brazil
e-mail: nadia@eng.uerj.br
Ngoc Thanh Nguyen, Wroclaw University of Technology, Wroclaw, Poland
e-mail: Ngoc-Thanh.Nguyen@pwr.edu.pl
Jun Wang, The Chinese University of Hong Kong, Shatin, Hong Kong
e-mail: jwang@mae.cuhk.edu.hk
Editors

Kohei Arai
Saga University
Saga, Japan

Supriya Kapoor
The Science and Information (SAI) Organization
Bradford, UK

Rahul Bhatia
The Science and Information (SAI) Organization
Bradford, West Yorkshire, UK
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Editor’s Preface
Future Technologies Conference (FTC) 2018 was held on November 13–14, 2018, in Vancouver at the Marriott Pinnacle Downtown Hotel, with sweeping views of the coastal mountains, Coal Harbour, and Vancouver’s city skyline. The city of Vancouver is considered one of the most beautiful cities in the world.
With great privilege, we present the Proceedings of FTC 2018 to readers in two volumes. We hope that you will find them useful, exciting, and inspiring. FTC 2018 aims to paint a bright picture and a charming landscape for future technologies by providing a platform to present the best of current systems research and practice, emphasizing innovation and quantified experience. The ever-changing scope and rapid development of future technologies create new problems and questions, resulting in a real need to share brilliant ideas and raise awareness of this important research field.
Researchers, academics, and technologists from leading universities, research firms, government agencies, and companies from 50+ countries presented the latest research at the forefront of technology and computing. After the double-blind review process, we selected 173 full papers, including six poster papers, for publication.
We would like to express our gratitude and appreciation to all of the reviewers who helped us maintain the high quality of the manuscripts included in these conference proceedings. We would also like to extend our thanks to the members of the organizing team for their hard work. We are tremendously grateful for the contributions and support received from authors, participants, keynote speakers, program committee members, session chairs, organizing committee members, steering committee members, and others in their various roles. Their valuable support, suggestions, dedicated commitment, and hard work have made FTC 2018 a success. Finally, we would like to thank the conference’s sponsors and partners: Western Digital, IBM Research, and Nature Electronics.
We believe this event will help further disseminate new ideas and inspire more international collaborations.
We hope that all the participants of FTC 2018 had a wonderful and fruitful time
at the conference and that our overseas guests enjoyed their sojourn in Vancouver!
Kind Regards,
Kohei Arai
1 Introduction
Brain-Computer Interfaces (BCIs) [3,7,19] are commonly used in the development of systems that can improve the quality of life of people who have a physical constraint (visual, auditory, or motor) that limits their capabilities. A BCI system should thus minimize the subject’s disability by assisting in tasks that the subject cannot perform alone. An example of this is [10], a system in which a subject with a speech impairment focuses on an array of letters on a monitor; through the visual stimuli generated, the BCI system can classify which letter the subject is looking at and display it.
A BCI system can also aid the decision-making of healthy subjects. Some situations can be considered risky, for example, braking a vehicle while driving when you see a red traffic light or a car headlight flashing ahead. In such situations, a BCI system can assist the driver if the decision he takes is not the correct one. With this premise, we are developing work that investigates the Steady-State Visually Evoked Potential (SSVEP) paradigm [13–15], used to determine which flickering target a subject is focused on, which can be recognized with electroencephalography (EEG) equipment. For the BCI system to make the right decision, the different events must be presented at different flicker frequencies.

© Springer Nature Switzerland AG 2019
K. Arai et al. (Eds.): FTC 2018, AISC 880, pp. 1–18, 2019.
https://doi.org/10.1007/978-3-030-02686-8_1
R. Hübner et al.
To conduct this research, simulations reproducing techniques that use SSVEP were built, because when this concept of decision-making is applied to the real world, such situations cannot be reproduced in the same way with the traditional SSVEP paradigm: real bright targets do not have a flicker frequency that can be classified by the BCI system, besides putting the lives of the experiment subjects at risk. In this context, the objective of this paper is to present an empirical study of the techniques used for processing SSVEP signals, aiming at the development of an SSVEP-BCI system to aid decision-making in situations close to the real world. For this, we have built a prototype of traffic lights with Light-Emitting Diodes (LEDs) to create decision-making situations.
To fulfill this objective, a set of experiments based on the SSVEP paradigm was reproduced using a public database, with the intention of evaluating the programming methods. We also built databases of acquired EEG signals to be evaluated with a prototype using LED-based traffic lights, which generate the visual evocation necessary for experimentation. Finally, we investigated different SSVEP signal stimulation strategies, making the constructed prototype traffic lights behave closer to reality, without the visualization of the traditional flicker frequencies of the SSVEP paradigm.
This paper is organized as follows. Section 2 presents a brief grounding in the SSVEP paradigm. Section 3 presents related work. Section 4 presents experiments with a public database and with the constructed prototype, using the traditional SSVEP model. Section 5 presents directions for a BCI system that evaluates decision-making at traffic lights, using the SSVEP paradigm with non-flickering targets. Finally, Sect. 6 presents the conclusion.
2 SSVEP-BCI Background
BCI paradigms determine what the subject must do, and how, to produce certain known patterns that can be interpreted by a BCI system. The subject must generally undergo equipment calibration and training before the experiment. The configuration of the physical environment, the positioning of the electrodes, and the software setup are directly associated with the paradigm used. The paradigms currently used in BCI systems are selective attention and motor imagery [18]. In this paper we focus on selective attention.
Towards in SSVEP-BCI Systems for Assistance in Decision-Making
A BCI experiment based on the SSVEP paradigm is related to how the stimuli
are presented to the subject and how the signals obtained through the EEG
equipment are processed. We present the processing steps of the SSVEP signal.
Feature Extraction. This step searches for the features that best describe the expected properties of the input signal. Such characteristics can be obtained using: the signal waveform analyzed in the time domain; the components of the subject’s frequencies in the frequency domain; the power density spectrum; time-frequency analysis (e.g., the Short-Time Fourier Transform, STFT); autoregressive models; etc. [11].
In SSVEP-BCI systems, feature-extraction methods are based on the spectral information present in the EEG signal. For a given set of evoked frequencies, the Power Spectral Density (PSD) calculation can extract from the signal the information of interest to be classified. The main methods used for SSVEP frequency-density analysis are the filter bank, the spectrogram, the Welch method [2], and the multitaper method [16]. In this work, the multitaper method was used, as implemented in the MNE-Python tool¹.

¹ http://martinos.org/mne.
3 Related Works
The main works that contributed to the development of this paper are presented
below.
In Development of an SSVEP-based BCI spelling system adopting a QWERTY-style LED keyboard [12], a speller system was developed in the QWERTY layout using 30 LEDs, one per keyboard key, flickering at different frequencies. This method allows the individual to select a character without the multiple steps needed in traditional BCI speller systems. It was possible to obtain a fine frequency resolution, recognizing, for example, flickering stimuli separated by as little as 0.1 Hz. The experiments were performed with ten healthy subjects, five of whom participated in an offline experiment and five in an online one. 68 English words were used for the evaluations. In the offline results, accuracies of 76.67% and 72.33% were obtained for viewing angles of 40 and 30 degrees, respectively. The online results were better because the best angle and the best combination of electrodes (Oz and O2 in the 10–20 system) were used, with accuracy depending on the amount of time participants took to recognize each character: 5 s (84.69%), 6 s (86.17%), and 7 s (89.53%). From this work it was possible to obtain important information about the distance and positioning angle of the LEDs for a better result, as well as the best electrode positions.
In A novel stimulation method for multi-class SSVEP-BCI using intermodulation frequencies [4], a method was developed using different intermodulation frequencies for SSVEP-BCIs with targets all flickering at the same frequency of 15 Hz, a setup that allows a greater number of targets. The authors encoded nine target objects on an LCD screen, with square shapes arranged in a 3 × 3 matrix. The modulation frequency for each target was generated by a color characteristic (C), alternating frames between green, red, and gray; a luminance characteristic (L), alternating frames with a difference of 20 cd/m²; and a mixture of the two (CL), forming three approaches. As a result, the average accuracy in the online assessment of the three approaches was 85%, with the mixture of the two (CL) achieving the highest, 96.41%. This work presents alternatives within the SSVEP paradigm that make it possible to recognize different targets flickering at the same frequency.
In the work Towards an optimization of stimulus parameters for brain-computer interfaces based on steady state visual evoked potentials [5], the influence of several characteristics of the visual stimulus on the SSVEP signal is presented. Five characteristics of the targets were evaluated: size, distance, color, shape, and the presence of a fixation point in the middle of each flickering object. The distance between the stimulation targets and the presence or absence of the fixation point had no significant effect on the results, whereas the color and size of the flickering target played an important role in the SSVEP response. Experiments were performed with 5 subjects, and four stimuli were presented on the monitor screen with different flickering frequencies. A group of LEDs was added adjacent to each object shown on the screen, responsible for randomly generating the imposed luminance. The spectral responses were largest for white, followed by the yellow, red, green, and blue colors.
4 Preliminary Experiments
This section presents two experimental sets that are the basis for our investiga-
tion. The two sets are divided as follows:
1. Development of codes for the evaluation of a public SSVEP-BCI database;
and
2. Construction of a prototype using traffic lights with LEDs as flickering targets.
Initially, we demonstrate the results of the code produced as part of this work to evaluate a public database. After this evaluation, a second experimental set was performed to evaluate a database produced by us, using a prototype of traffic lights built with LEDs, in which the LEDs produce traditional SSVEP stimuli based on flickering-target frequencies. By analyzing these results, in addition to investigating new methods linked to SSVEP-BCI systems, it will be possible to develop a new BCI system for decision-making with non-flickering targets using the same physical components as the second experimental set. The proposal resulting from this research is in Sect. 5.
The MNE-Python tool [9] was used in all experiments; it comprises a set of libraries written in the Python programming language for the purpose of analyzing EEG and MEG data. The Scikit-learn² library was also used, for routines based on computational intelligence, also written in Python.
² http://scikit-learn.org.
Description of the Public AVI SSVEP Database³. The database contains EEG data measured from healthy subjects who were exposed to flickering targets to obtain SSVEP responses. Data were recorded using three electrodes (Oz, Fpz, and Pz) positioned according to the 10–20 system. The data obtained from the Oz electrode are the only ones recorded in the database; the Fpz electrode was used as reference and the Pz electrode as ground. A BenQ XL2420T LCD monitor with a 120 Hz refresh rate was used for stimulus generation. The EEG equipment used was the g.USBamp, which has a sampling rate of 512 Hz and gold-plated electrodes moistened with electrolytic gel. During the experiment, subjects had to concentrate on targets of 2.89 cm² on the monitor screen, seated at a distance of 60 cm from it.
Two types of experiments were performed to compose this database. The first was performed with a single target (ST) to verify the existence of the VEP signal. Four subjects participated, each in a single session, focusing on a single target for thirty seconds, four times. The frequencies chosen in each trial were random, but they were the same for every subject. The second experiment was performed with multiple targets (MT), comprising seven targets at different frequencies. Five subjects participated in two sessions each, focusing on multiple targets for sixteen seconds, ten times. In each trial the subject focused on one of the indicated flickering targets; the indicated sequence was also random but the same for the five subjects.
Loading and Data Preparation. Code was developed for the ST analysis because it involves a single target; in our main research scenario, traffic lights, only one light is lit at a time. The MT data were also analyzed because they show greater sample variation, making it possible to build and evaluate a greater combination of strategies.
In the ST data, each subject performed only one session with four trials, but since there are twenty-seven trials in each session, the training and test data could be divided into different proportions within the same session: 33% of the samples (9 samples) were used for training, while 67% of the samples (18 samples) were used for testing. In the MT data, the training and test data of the classifier were divided into different sessions, because few samples are available, with ten trials per session, but each subject performed two sessions. In this way, the second session of each subject, with ten samples, was used for training the classifier, and the first session of the same subject for testing.
³ http://www.setzner.com/avi-ssvep-dataset/.
Fig. 1. General flow of execution of the experiments presenting the algorithms used in
each step.
Results. In the analysis of the results with the ST data, three combinations of training and test data were used, since each subject performed the same experimental sequence three times. Thus, the first session was used to train the classification model and the second and third for testing, with the other two possible combinations used to test the remaining possibilities.
The best frequency range for feature extraction was obtained with a standard deviation equal to 0.3 (found by exhaustive search); that is, if feature extraction was performed around a frequency of 6 Hz, the frequency range was from 5.7 to 6.3 Hz.
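The band-based feature extraction described above can be sketched as follows (a minimal NumPy-only illustration; the 0.3 Hz half-width is taken from the text, while the target frequencies, helper names, and toy spectrum are assumptions):

```python
import numpy as np

def band_power(psd, freqs, center, width=0.3):
    """Sum the PSD inside [center - width, center + width]."""
    mask = (freqs >= center - width) & (freqs <= center + width)
    return psd[mask].sum()

def classify_ssvep(psd, freqs, targets, width=0.3):
    """Pick the target frequency whose band carries the most power."""
    powers = [band_power(psd, freqs, f, width) for f in targets]
    return targets[int(np.argmax(powers))]

# Toy spectrum: a peak at 6 Hz on a flat noise floor.
freqs = np.linspace(5.0, 50.0, 901)       # 0.05 Hz grid
psd = np.ones_like(freqs)
psd[np.abs(freqs - 6.0) < 0.1] += 50.0    # evoked component

print(classify_ssvep(psd, freqs, targets=[6.0, 6.5, 7.0, 7.5]))  # → 6.0
```

In the real pipeline the `psd`/`freqs` pair would come from the multitaper PSD of an EEG trial rather than a toy spectrum.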
Figure 2a presents the bar plot with the results of the experiment with the ST data. The best result was with subject 4, for whom the accuracy over the three sessions was 100%. The worst result was with subject 3 using the first session as a test, for which an accuracy of 14% was obtained. The overall mean accuracy across all subjects was 70.75%.
Fig. 2. Results of the experiment with the ST data from the AVI database.
The PSD charts were analyzed to investigate the low results presented by subject 3. In the first session, the target evoked a signal of 6.0 Hz, but the PSD is higher around 12.0 Hz. This result compromises both the training of the classifier and the use of these data for testing, resulting in low accuracy.
Figure 2b presents the PSD of the first session performed by subject 4, who obtained the highest accuracy (100%). It can be observed that the PSD is highest around the evoked frequency, while the rest of the frequencies have low values. Such data yield good classifier training and also result in good accuracy when used for testing.
In the MT experiment, the second session of each subject was considered the better choice for classifier training. The best frequency range for feature extraction was again obtained with a standard deviation equal to 0.3.
Figure 3a shows the bar plot with the results of the MT experiment. Most of the results were better using the second session, with the exception of subject 2. The best results were with subjects 4 and 5, for whom the accuracy was 100% in both cases when training with the second session. The worst result was with subject 3 using either the first or the second session for training, for which accuracies of 50% and 60%, respectively, were obtained. The overall mean accuracy across all subjects was 84%.
The PSD plots were analyzed to investigate the low results presented by subject 3. Figure 3b presents the PSD of the first session performed by this subject: a signal of 9.3 Hz was evoked, but the PSD is larger around 6.5 Hz.
The tests performed with the experimental database of [24] demonstrated that it is possible to use the code developed in our work to evaluate an SSVEP-BCI system.
Fig. 3. Results of the experiment with the MT data from the AVI database.
In this experimental stage, we began constructing our own database for the evaluation of the prototype using traffic lights with flickering LEDs, as well as testing the functioning of the EEG equipment used.
Experimental Procedures. To simulate the traffic light with LEDs flickering at set frequencies, code was developed for the microcontroller that allows the frequency of each LED to be specified. In a conventional SSVEP-BCI experiment it is desirable for multiple targets to flicker at different frequencies, so Eq. 1 was applied in the Arduino code. The interval I is obtained by dividing one by the desired frequency f, dividing by 2 to account for the half cycle of the LED on/off, multiplying by 1000 to express the time in milliseconds, and finally subtracting d, the delay of the code loop running on the hardware. This delay was measured using an LDR light sensor connected to an Arduino: the sensor was pointed at the LED lit at different frequencies, and the sensor readings were sent to the computer for analysis in a graph as a function of time. The delay was found to vary from 1 to 2 ms, so its average (1.5 ms) was assigned to d.

I = (1/f) / 2 × 1000 − d        (1)
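Equation 1 can be checked with a short script (a sketch; the 1.5 ms loop delay is the average value measured in the text, and the function name is ours):

```python
LOOP_DELAY_MS = 1.5  # measured average delay of the Arduino loop, in ms

def half_period_ms(freq_hz, delay_ms=LOOP_DELAY_MS):
    """Interval I of Eq. 1: on/off half-cycle in ms, minus the loop delay."""
    return (1.0 / freq_hz) / 2.0 * 1000.0 - delay_ms

# Half-cycle intervals for the three traffic-light LEDs.
for color, f in [("red", 8.0), ("yellow", 10.0), ("green", 12.0)]:
    print(f"{color}: {half_period_ms(f):.2f} ms")
```

For the 8 Hz red LED, for instance, this gives 62.5 − 1.5 = 61.0 ms per half cycle.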
The following frequencies were configured for each LED: red = 8 Hz, yellow = 10 Hz, and green = 12 Hz. Frequencies that are not multiples of one another were chosen, which prevents overlap phenomena in the spectrogram, where the signal magnitude becomes high around multiples of the evoked frequency.
Figure 5 shows a flowchart of the experiment, detailing the software and hardware used, as well as the communication model between them.
⁴ https://www.arduino.cc/.
⁵ http://openbci.com.
⁶ https://github.com/OpenBCI/Ultracortex/tree/master/Mark 3.
The EEG signal is obtained from the OpenBCI board with the OpenBCI GUI v2⁷ software. This software sends the acquired signal through the Lab Streaming Layer⁸ (LSL) interface, as a stream, to code written in Python that receives the EEG signal and writes it to a FIF file (the MNE tool’s file format), along with the markers received through the microcontroller’s serial port. These markers are time indications denoting the moment each light in the traffic light was lit.
This stage of the experiments was performed with only one subject, since the objective was to test the correct functioning of the EEG equipment and to verify whether the prototype is sufficient to evoke a good SSVEP signal. The following protocol was adopted for conducting the sessions:
⁷ https://github.com/OpenBCI/OpenBCI GUI.
⁸ https://github.com/sccn/labstreaminglayer.
Results. Code was developed with some modifications relative to that used in experimental set 1. In this experiment we added a Common Average Reference (CAR) spatial filter, taking as reference the channels Oz, O2, PO4, and PO7, as these were the channels with the highest VEP response, in addition to FIR filters (Hamming window) with cut-off frequencies of 5 Hz and 50 Hz and a notch filter at the frequencies of 60 Hz and 120 Hz.
The data used for classification were divided into training and test portions of 30% and 70%, respectively, in a cross-validation scheme: the initial 30% (the first six trials) were used for training and the remainder for testing, then the second through seventh trials for training, and so on, until fifteen different combinations were completed.
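The sliding split described above can be reproduced with a small helper (a sketch; twenty trials per session follows from the 30/70 split with six training trials and fifteen combinations):

```python
def sliding_splits(n_trials=20, n_train=6):
    """Yield (train, test) index lists: a sliding block of n_train
    consecutive trials for training, all remaining trials for testing."""
    for start in range(n_trials - n_train + 1):
        train = list(range(start, start + n_train))
        test = [i for i in range(n_trials) if i not in train]
        yield train, test

splits = list(sliding_splits())
print(len(splits))          # 15 combinations, as in the text
print(splits[0][0])         # first training block: trials 0..5
```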
14 R. Hübner et al.
The best frequency range for feature extraction was obtained with a standard deviation equal to 1.0. This value was found by exhaustive search, using the 30% of initial trials reserved for training the classifier (an SVM).
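The classification step, band-power features fed to an SVM, can be sketched with Scikit-learn; the features below are synthetic stand-ins for real PSD band powers, and the class structure (one dominant band per attended LED) is an assumption for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-in for PSD band-power features: one row per trial,
# one column per candidate frequency band (8, 10 and 12 Hz).
n_trials, n_bands = 60, 3
y = rng.integers(0, n_bands, n_trials)          # which LED was attended
X = rng.normal(1.0, 0.2, (n_trials, n_bands))
X[np.arange(n_trials), y] += 2.0                # attended band has more power

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:18], y[:18])                          # ~30% of trials for training
acc = clf.score(X[18:], y[18:])
print(acc)
```

With such cleanly separated features the SVM scores near 100%; on real EEG the overlap between bands is what drives the accuracies reported below.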
Figure 7 shows the graph with the results of experiment 2 using cross-validation. The best result was with the ninth data split used for training the classifier, for which the accuracy was 100%. The worst results were with the eighth and fourteenth data splits used for classifier training, for which an accuracy of 78% was obtained in both cases. The overall mean accuracy over the whole cross-validation was 86%.
Figure 8 shows the PSDs of the session performed with stimuli at the frequencies of 8, 10, and 12 Hz, which obtained the highest accuracy (100%). It can be observed that the PSD is highest around each evoked frequency, while the rest of the frequencies have low values. Such data yield good classifier training and also result in good accuracy when used for testing.