
Neuro-scientific Studies of Musical Emotions

Emotion identification has recently been considered a key element in advanced human-computer interaction. In the seventeenth century, Descartes considered emotion to mediate between stimulus and response. Though self-report measures form the cornerstone of music-emotion research, other physiological measures have recently gained importance in identifying high arousal levels. High arousal is related to respiratory activity (Nyklíček et al., 1997) and perspiratory activity (Rickard, 2004), while differences in emotional valence may be differentiated by the activation of various facial muscles (Witvliet & Vrana, 2007), including the startle reflex (Roy et al., 2008). Stemmler (2004) reports that anger and fear, despite being matched in terms of valence and arousal, could be differentiated by a combination of cardiovascular and respiratory measures. Nakahara et al. (2009) found that expressive conditions of piano playing produced significantly higher levels of heart rate (HR) and of the low-frequency component of heart rate variability (HRV), as well as a lower level of its high-frequency component. Khalfa et al. (2002) used the Skin Conductance Response (SCR) as a measure of musical emotion and found that SCRs can be evoked and modulated by musical emotional arousal, but are not sensitive to emotional valence. Facial behaviours potentially indicative of emotion can also be assessed with facial electromyography (EMG), which has been used to assess the valence of a person's emotional state (Bradley & Lang, 2000; Larsen et al., 2003) but is generally viewed as limited for identifying discrete emotional reactions.
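
To make the frequency-domain HRV measures mentioned above concrete, the following is a minimal sketch of how the low-frequency (LF, 0.04–0.15 Hz) and high-frequency (HF, 0.15–0.4 Hz) components of HRV could be estimated from a series of RR intervals. It is not the pipeline of any study cited here; the function name, sampling choices, and synthetic RR series are illustrative assumptions only.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_power(rr_ms, fs_resample=4.0):
    """Estimate LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) HRV power.

    rr_ms: successive RR intervals in milliseconds, e.g. from an ECG.
    The unevenly spaced RR series is resampled onto a uniform time grid
    so that Welch's method can be applied.
    """
    t = np.cumsum(rr_ms) / 1000.0                      # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)   # uniform 4 Hz grid
    rr_uniform = interp1d(t, rr_ms, kind="cubic")(grid)
    f, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs_resample)
    df = f[1] - f[0]
    lf = psd[(f >= 0.04) & (f < 0.15)].sum() * df
    hf = psd[(f >= 0.15) & (f < 0.40)].sum() * df
    return lf, hf

# Example with synthetic RR intervals around 800 ms (about 75 beats/min).
rng = np.random.default_rng(0)
rr = 800 + 50 * np.sin(np.linspace(0, 20 * np.pi, 300)) + rng.normal(0, 10, 300)
lf, hf = lf_hf_power(rr)
print(f"LF/HF ratio: {lf / hf:.2f}")
```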

Most researchers have focused on facial expression recognition and speech signal analysis for assessing emotions (Bung & Furui, 2000; Cowie et al., 2001). Techniques to understand emotions are mostly based on a single modality such as Positron Emission Tomography (PET), functional Magnetic Resonance Imaging (fMRI), Electroencephalography (EEG), or static face images or videos. There are many difficulties in automatic human emotion identification. Since emotion expresses a natural, instinctive state of mind, its outward expression may be voluntarily controlled according to the environment or circumstances: a person feeling angry may shout at his child, but may suppress the same anger in front of his boss. A straightforward mapping of facial actions to neural signals or to emotions may therefore not be correct. The EEG signal changes according to the emotional state; moreover, these changes cannot be voluntarily controlled and hence form a better indicator of emotion. EEG can detect changes in brain activity over milliseconds, which is excellent considering that an action potential takes roughly 0.5–130 ms to propagate across a single neuron, depending on the type of neuron. Moreover, EEG measures the brain's electrical activity directly, while fMRI and PET record changes in blood flow or metabolic activity, which are indirect markers of brain electrical activity. EEG can be considered the most studied non-invasive brain-activity recording method in computational emotion research, mainly due to its fine temporal resolution, ease of use, portability, lower invasiveness than the alternatives, and relatively low cost. However, EEG alone does not capture indirect markers of brain activity such as changes in blood flow or metabolism. Therefore, combining EEG with other non-invasive biometric detection systems is a promising way to improve emotion detection; a sketch of such feature-level fusion is given below.
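
As an illustration of this idea, the following is a minimal sketch of feature-level fusion, assuming per-trial feature vectors have already been extracted from each modality. All array shapes, feature meanings, and labels are hypothetical stand-ins, not data or methods from the works cited above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials = 80

# Hypothetical per-trial features from each modality:
eeg_features = rng.normal(size=(n_trials, 20))  # e.g. band powers per channel
hrv_features = rng.normal(size=(n_trials, 2))   # e.g. LF and HF power
scr_features = rng.normal(size=(n_trials, 1))   # e.g. mean skin conductance
labels = rng.integers(0, 2, size=n_trials)      # e.g. low vs. high arousal

# Feature-level fusion: concatenate the modalities into one vector per
# trial, standardize, and train a simple classifier.
X = np.hstack([eeg_features, hrv_features, scr_features])
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```
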
Music is frequently described as a language of emotions and is broadly recognized as one of the most powerful tools for arousing emotions and feelings in human beings. Appreciation of and affection for music are universal attributes across all cultures and human races. There is a close relation between human emotions and music because the regions of the human brain that process music and those that perceive emotions lie close to each other (Day, Lin, Huang, & Chuang, 2009; Eerola and Vuoskoski, 2010; Pouyanfar and Sameti, 2014).

In the past few years, different methods have been proposed for automatic human emotion recognition that rely mainly on brain signals generated by the central nervous system and observed using electroencephalography (EEG), electrocorticography (ECoG), and functional magnetic resonance imaging (fMRI). Among the different modalities of brain signals, EEG is considered the best choice for recording information due to its unique characteristics in response to human emotional states (Hadjidimitriou and Hadjileontiadis, 2013; Lokannavar, Lahane, Gangurde, & Chidre, 2015; Petrantonakis & Leontios, 2014; Saeed, Anwar, Majid, & Bhatti, 2015; Thammasan, Fukui, Moriyama, & Numao, 2014). Emotion estimation from human brain activity recorded using EEG is quite effective, since these signals reflect activity of the limbic system, which is strongly involved in cognitive and emotional processing. EEG signals are recorded from the scalp using electrodes, and there are mainly two types of EEG recordings: (i) monopolar and (ii) bipolar. Monopolar recording measures the voltage difference between an active electrode on the scalp and a reference electrode, whereas bipolar recording measures the voltage difference between two active electrodes. EEG signals span numerous frequency bands ranging from 0 Hz to 80 Hz and are commonly divided into five major rhythms: delta, theta, alpha, beta, and gamma, each reflecting a different mental condition of the subject (Subha, Joseph, Acharya, & Lim, 2010).
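
As a rough illustration, the sketch below decomposes a single-channel EEG signal into these five rhythms with standard band-pass filters (using SciPy). The exact band boundaries vary across studies, and the sampling rate, filter order, and synthetic input are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Conventional rhythm boundaries in Hz; exact cut-offs vary across studies.
BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 80.0),
}

def split_into_bands(eeg, fs, order=4):
    """Band-pass filter a single-channel EEG signal into the five rhythms.

    eeg: 1-D array of raw samples; fs: sampling rate in Hz.
    Returns a dict mapping band name to the filtered signal.
    """
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, eeg)
    return out

# Example: ten seconds of synthetic data standing in for a real recording.
fs = 256.0
eeg = np.random.default_rng(2).normal(size=int(10 * fs))
bands = split_into_bands(eeg, fs)
print({name: round(float(np.var(x)), 3) for name, x in bands.items()})
```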

Recently, EEG signals have been used by brain researchers and human-computer interaction experts to recognize human emotions (Subha et al., 2010). Shahabi and Moghimi (2016) proposed a noninvasive assessment tool for the automatic detection of musical emotions, investigating how the brain responds to joyful, melancholic, and neutral music. Bajaj and Pachori (2014) classified four different human emotions from EEG signals using wavelet-based features. Chanel, Kierkels, Soleymani, and Pun (2009) classified three emotions with Support Vector Machines (SVM) on the basis of time-frequency features. Soleymani et al. (2012b) proposed a user-independent multimodal emotion recognition system using EEG, which classifies three emotional states in response to video clips. Petrantonakis and Hadjileontiadis (2010) classified four different human emotions from brain signals using Higher Order Crossings (HOC) and Hybrid Adaptive Filtering (HAF) with four different machine learning techniques. Lin et al. (2010) classified EEG signals recorded in response to music and established a relationship between music and emotions using brain maps. Hadjidimitriou and Hadjileontiadis (2012) proposed an EEG-based scheme for detecting music liking and disliking using different time-frequency analysis techniques, including the Hilbert-Huang Spectrum (HHS) and the Zhao-Atlas-Marks (ZAM) distribution, and compared results across machine learning algorithms such as SVM and k-nearest neighbours (k-NN). In a later study (Hadjidimitriou and Hadjileontiadis, 2013), music appraisal responses were classified from EEG signals using topographic maps and time-frequency distributions. Daly et al. (2014) present results in which subjects report their induced emotional responses, finding neural correlates of those responses in the EEG beta and gamma bands.
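
To give a concrete flavour of the classification step common to many of these studies, here is a minimal sketch comparing an SVM and a k-NN classifier on EEG-style features with cross-validation (using scikit-learn). The feature matrix, its dimensions, and the labels are synthetic placeholders, not data or results from any cited work.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data set: 120 trials, each described by 160 features
# (e.g. log band power for 5 rhythms on 32 channels), with labels for
# three emotion classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 160))   # stand-in for real EEG features
y = rng.integers(0, 3, size=120)  # stand-in for emotion labels

# Standardize features, then compare the two classifiers with 5-fold
# cross-validation; on random data, accuracy stays near chance (~0.33).
for name, model in [("SVM", SVC(kernel="rbf", C=1.0)),
                    ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.2f}")
```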

Aesthetics and Neuro-aesthetics of Music


Humans have engaged in artistic and aesthetic activities since the appearance of our species. Our
ancestors have decorated their bodies, tools, and utensils for over 100,000 years. The expression
of meaning using color, line, sound, rhythm, or movement, among other means, constitutes a
fundamental aspect of our species’ biological and cultural heritage. Art and aesthetics, therefore, contribute to our species’ identity and distinguish us from our living and extinct relatives. Science
is faced with the challenge of explaining the natural foundations of such a unique trait, and the
way cultural processes nurture it into magnificent expressions, historically and ethnically unique.
How does the human brain bring about these sorts of behaviors? What neural processes underlie
the appreciation of painting, music, and dance? How does training modulate these processes?
How are they impaired by brain lesions and neurodegenerative diseases? How did such neural
underpinnings evolve? Are humans the only species capable of aesthetic appreciation, or are
other species endowed with the rudiments of this capacity?

Neuroaesthetics is a relatively recent field of research in which investigators' general goal is to answer these questions and to understand the neural substrates of human aesthetic appreciation. Neuroaesthetics can properly be viewed as a subfield of cognitive neuroscience, given that it involves the study of human cognition and behavior using a combination of methods from neuroscience and cognitive science, bringing together the cognitive and neural levels of explanation (Churchland & Sejnowski, 1988; Gazzaniga, 1984). In the last decade or so, the field of neuroaesthetics has been finding its feet (Chatterjee, 2011) and developing the formal and institutional mechanisms that characterize any scientific domain, as demonstrated by the field's first international conference in 2009 (Nadal & Pearce, 2011), the publication of a Research Topic on brain and art in the journal Frontiers in Human Neuroscience (Segev, Martínez, & Zatorre, 2014), a special issue of the journal Psychology of Aesthetics, Creativity and the Arts (Nadal & Skov, 2013), and several books on the neural foundations of aesthetic experience (Chatterjee, 2014; Shimamura & Palmer, 2012; Skov & Vartanian, 2009; Zaidel, 2005).

Although research on neuroaesthetics originated from visual perception research and focused
mainly on aesthetic experiences of paintings, faces, and architecture, it recently expanded to
other domains such as music and literature (Marin, 2015). Brattico and Pearce (2013) summarize
the goal of neuroaesthetics of music as understanding neural mechanisms and structures involved
in cognitive and affective processes that generate emotions, judgements and preferences – three
responses that are all important for an aesthetic experience. Different properties of music are
thought to lead to an aesthetic experience; some arise from the music itself, some from
associations with previous experiences. Acoustic features like consonance and dissonance,
loudness, timbre, beat and tonality can increase or decrease tension (Koelsch, 2014) and evoke
emotions in the listener. A recent study (Proverbio, 2015) investigated the influence of musical style on physiological measures by comparing reactions to tonal and atonal music. Musical experts chose instrumental pieces, classical tonal and contemporary atonal, that were played via headphones to 50 participants with no musical training. Meanwhile, heart rate and blood pressure were measured, and participants viewed 300 faces that they later had to remember, ensuring attention throughout the data acquisition period. The researchers then calculated the influence of musical style (atonal vs. tonal) and of emotion (agitating vs. joyful vs. touching) on heart rate and blood pressure. Heart rate was lower when participants were listening to atonal music than to tonal music. In addition, participants had higher systolic and diastolic blood pressure in response to atonal music.

Koelsch (2014) performed a meta-analysis on music and emotion that summarized results from
different single studies with diverse methods: studies on pleasure; on emotional responses to
consonant and dissonant music, happy vs. sad music, joy vs. fear-evoking music; on musical
expectancy violations and on music-evoked tension. Koelsch showed that listening to music
leads to activation in core brain structures important for emotion: The amygdala processes
stimuli that convey social and affective information relevant for approach or withdrawal
behavior; joyful music, for example, elicits higher activity in that region. Second, the striatum is activated while listening to pleasant music, reflecting its sensitivity to reward. Finally, hippocampal activity is associated with music-evoked tenderness, peacefulness, joy, and sadness, as well as with music-evoked positive emotions associated with stress reduction, and is related to social functions of music, e.g. the formation and maintenance of social attachments.
Recent research began to investigate modulatory effects of the listener, the listening situation and
the properties of the music itself (Brattico & Pearce, 2013; Brattico, Bogertb & Jacobsen, 2013).
Brattico and Pearce (2013) underline that familiarity is known to affect neural responses to music and that attention is a necessary condition for aesthetic experience, as it enables the listener to appreciate emotions and memories, make a judgment, and decide on preferences. An interesting line of research in this domain concerns the role of musical expertise.

It is fascinating to compare the neural appraisal of people who only listen to music or are lay
musicians with those who practice for hours every day and are very skilled in playing
their instrument. Brattico et al. (2016) distinguished between two independent sets of emotions, namely basic emotions (such as sadness and happiness) and aesthetic emotions (such as pleasure, beauty, and enjoyment). According to them, the independence between these two sets is best illustrated by the so-called "tragedy paradox": sad music can induce enjoyment, and thus a positive aesthetic experience. In their study, they not only tried to disentangle the neural structures underlying the two sets of emotions but also investigated differences between the reactions of laypersons and musical experts. Thirteen musicians and 16 non-musicians brought their most-liked and most-disliked music, of happy and sad character, to the lab, where the researchers selected 18 seconds of each piece. When participants were presented with their self-chosen music during fMRI, the favored music activated, in both groups, the limbic and paralimbic systems and the reward circuit, both considered crucial emotional and motivational structures. By contrast, listening to happy music activated the auditory cortex, a sensory area, and this activity is interpreted as reflecting the acoustic characteristics (e.g. major tonality) of happy versus sad music. These different activation patterns show that different brain structures are associated with perceiving music as happy or sad and with musical enjoyment (liking or disliking); thus the assumed independence of the two sets of emotions is also evident at the neural level. In general, musicians showed increased activity in somatomotor areas associated with body movements and the practice of an instrument, and showed stronger responses of limbic regions to their preferred music compared to non-musicians.

Another recent study (Alluri et al., 2015) on musical expertise and emotions focused on the three nodes responsible for music-evoked emotions: the amygdala, the hippocampus, and the nucleus accumbens. Connectivity from these nodes to other regions, e.g. the cerebellum (a region important for motor control) and visual and somatomotor regions, was found to be enhanced in musicians relative to non-musicians, a result that fits well with the study mentioned above. Expertise thus changes functional connectivity: during listening, musicians' knowledge and the motor skills acquired through practicing an instrument increased the connectivity between regions processing musical emotion and pleasure and motor areas. Interestingly, the researchers used complete pieces of music rather than the excerpts usually employed in studies on music and emotion. Complete pieces more closely resemble the way we listen to music in our daily lives and thus provide a more ecologically valid way to investigate our reactions to music.

Research on the neuroaesthetics of music is still a young field that needs to be expanded in the coming years, in part by considering modulatory factors of the aesthetic experience. As Donald Hodges stated in his book on the neuroaesthetics of music (Hodges, 2016): "As we move closer to understanding how the brain processes musical emotions, musical beauty, aesthetic judgments, and the like, we are also moving closer to the core of human musical experiences." Interdisciplinary research is necessary to obtain a deeper understanding of how we experience music, and the relevance of also investigating the neural correlates of music and emotion becomes apparent when considering possible applications. In the clinical context, different patient groups could benefit from the therapeutic use of music, for example in the treatment of Alzheimer's disease, affective disorders, and chronic immune diseases (Koelsch, 2009). Last but not least, music plays a very important role in almost everyone's life, and it is fascinating to learn how our nervous system is equipped to let us experience strong emotions towards one of our most preferred free-time activities.
