
THE INFLUENCE OF COGNITION AND EMOTION IN PIANISTIC PERFORMANCE

*Marcia Kazue Kodama Higuchi, #José Eduardo Fornari Novo Júnior, *João Pereira Leite
*Department of Neuroscience and Behavioral Sciences, Faculty of Medicine of Ribeirão Preto, University of São Paulo
#Interdisciplinary Nucleus for Sound Communication (NICS), University of Campinas (UNICAMP)

ABSTRACT

Studies on musical learning have demonstrated a strong correlation between musicians' cognitive and affective attention and variations in the musical aspects of their performance. However, it is still unclear how these two types of attention are expressed in the musical features of pianistic performance. The present study aims to verify whether cognitive and affective pianistic performances can be differentiated by acoustic descriptors. Recordings of nine pianists playing the same piece of music, focusing only on the cognitive aspects of the musical structure or paying attention solely to its affective aspects, were analyzed with two musical descriptors: computational models that retrieve the musical features of Pulse Clarity (PC) and Articulation (AR). Results showed that all affective performances had lower AR (more legato, less staccato) and lower PC (less metric precision, or more agogics) when compared to the cognitive performances. For affective and cognitive performances, the AR mean values are, respectively, 1006.1 and 1522.7, and the PC mean values are 2.2224 and 2.8296; both p values were < 0.001. Previous studies indicate that legato and agogics (i.e. lack of strict pulsation) are important features related to expressiveness, indicating that affective performances have more expressive features than cognitive ones. Our findings therefore suggest that it is possible to use a computational model built with descriptors such as AR and PC to aid the automatic classification of affective and cognitive performances.

1. INTRODUCTION

It is known that music can induce many corporeal reactions (Blood and Zatorre 2001; Lundqvist, Carlsson et al. 2009; Krumhansl 1997) as well as emotional ones (Blood and Zatorre 2001; Brown, Martinez et al. 2004; Menon and Levitin 2005). Nevertheless, the mechanisms underlying the emotions caused by music are still unclear. In the musical performance domain, the idea that expressivity is related to the compositional system is widely diffused, but it is also acknowledged that musical interpretation plays an important role in this process (Higuchi and Leite 2007). Many musicians relate the interpreter's emotion during execution to expressiveness. In spontaneous answers, 135 musicians from 3 countries (England, Scotland and Sweden) defined expressivity as "communicating emotions" and "playing with feeling" (Lindström et al. 2003, apud Juslin et al. 2006, p. 80). However, the importance of the performer's emotions during execution is not well understood.

Juslin and Västfjäll (2008) have proposed six different mechanisms through which music can evoke emotional responses in listeners. One of the proposed mechanisms, emotional contagion, could explain how the interpreter's emotion can influence musical expressiveness. During the processing of an emotion-inducing stimulus, the nervous system activates a sequence of reactions, preparing the body for a specific response according to each circumstance. The reactions derived from emotions influence all human activities, such as body posture, skin blushing, facial expression, gesticulation, and voice intonation and expression, consequently influencing the way the instrument is played. These reactions would result in variations of agogics, dynamics, timbre, articulation and other musical features. Listeners would perceive these emotional expressions and mimic them internally "by means of periphery feedbacks from muscles, or a more direct activation of the relevant areas of emotional representations in the brain, leading to an induction of the same emotion" (Juslin and Västfjäll 2008, p. 565). Therefore, the emotional contagion theory indicates that the interpreter's emotion, at the moment of execution, can play a fundamental role in his or her musical expressiveness.

On the other hand, excessive technique is frequently seen as an inhibitor of expressiveness in the pianistic domain (Richerme 1996). In musical learning, the antagonism between technique and expressiveness, or, in other words, the reciprocal modulation between cognition and affection, has been cited (Gainza 1988; França 2000; Higuchi 2003; Sloboda 2004; Higuchi 2008). In this context, the existence of "expressive" students who play inattentively, and of technically skilled students who are cool and inexpressive in their performances, is frequently reported (Gainza 1988; França 2000; Higuchi 2003; Sloboda 2004).

Higuchi (2003) has observed that, for students, working on technique generally inhibits expressiveness. This inhibition would be related to the level of difficulty and to the need to focus attention on the realization of a task: the greater the difficulty and the need for attentional focus, the greater the inhibition of expressiveness.

These data corroborate observations indicating that students who practice piano decoding and controlling each note they play generally have difficulty expressing themselves musically (Higuchi 2003, 2007). On the other hand, the "expressive" students, who generally play inattentively, normally have a strong emotional involvement with music (Higuchi 2003).

However, these characteristics have been observed in students who presented difficulties in music learning, raising the question of whether they were due to the students' difficulties or to the emotional contagion phenomenon. The present study aims to compare highly skilled pianists executing the same piece of music in two different ways: first, with attention focused on the control of each note they play, and then with attention focused on emotional aspects. With this approach, we expect to verify the influence of attention to cognitive or affective aspects on expressive pianistic performance.

We consider that, if the emotional contagion theory is correct, performances with attention directed to the affective aspects of the music will present more expressive features than performances with attention directed to the cognitive ones.

2. METHOD

2.1 Participants

This study compared two musical performances from nine pianists ranging from 20 to 36 years of age. The pianists are graduate or undergraduate students of music from the Art Institute of São Paulo State University (UNESP) and the Alcântara Machado Art Faculty (FAAM). The piece of music selected for this experiment was an adaptation of the 32 initial bars of Trauer, in F major, from the 12 piano pieces for four hands for small and big children, Op. 85, composed by Robert Schumann. The volunteers played the primo part and were accompanied by Ms. Higuchi, the principal researcher of the present study, who played the secondo part.

2.2 Procedure

The nine volunteers went through 4 or 5 one-hour training sessions before recording, in which all the processes of music memorization were explained and applied until they were able to perform all the tasks as requested.

In the first session, the whole memorization process was covered. The volunteers were asked to know by heart the entire succession of notes of the repertory explicitly (i.e. to be able to name all the notes) as well as implicitly (to be able to play it automatically).

The second session was dedicated to the affective aspects. An emotional stimulus was prepared to induce their affection: sad scenes were played and watched by the pianists so that they could associate the piece with the emotional state of sadness.

In the third, fourth and fifth sessions, the volunteers were instructed to play the music in two different manners. In the first task, they were instructed to play the music thinking of each note they were playing (when they played with two hands, they were instructed to think of the right-hand notes only). In the second task, they were instructed to play thinking of the emotional stimulus given by the sad scenes. The volunteers were advised not to play this piece outside the training sessions.

The volunteers then went through a one-hour recording session, in which they were instructed to play the music in two different manners. In the first task (denominated cognitive performance), they were instructed to play focusing all their attention on each note they were playing (mentally naming, controlling and monitoring each note they played). In the second task (denominated affective performance), they were instructed to self-induce the feeling of sadness, following these procedures: 1) watching the emotional stimulus; 2) simulating the physical reactions of a melancholic expression; 3) remembering an event of their lives that brought a very strong feeling of sadness. After this induction, the volunteers were instructed to play the repertory focusing all their attention on the feeling that the music transmitted.

2.3 Material

The emotional stimulus was produced by combining selected photos from the IAPS (International Affective Picture System) with Trauer (the repertoire of the current study) as background music, played by pianist João Carlos Martins.

The recording was performed on a Steinway & Sons piano, series D. The audio recordings were acquired with 3 Neumann KM 184 microphones, 2 DPA 4006 microphones, Canare cables and a Mackie 32/8 recording console.

2.4 Analyses

In order to analyze each pianistic performance, we used computational models designed to retrieve specific musical features from audio files, somewhat resembling specific cognitive processes of human audition when identifying and focusing on specific musical features. The Music Information Retrieval literature describes two classes of acoustic descriptors: low-level and higher-level. Low-level descriptors are the ones independent of context, such as psychoacoustic features (e.g. loudness, pitch, timbre), while higher-level descriptors are features with musical context, such as tonality, rhythmic pulsation, or melodic line. In (Fornari and Eerola 2009), eight computational models designed to predict contextual musical aspects were presented. They were created with the objective of predicting specific higher-level musical cognition features associated with emotional arousal by music stimuli, as this was one of the major goals of "Braintuning" (www.braintuning.fi), the project within which this development was done. The result of each model is a time series with the predictions of one specific musical feature. Initially, we tested the audio files from the pianistic performances with the eight descriptors as previously designed. They are named: Pulse Clarity (PC), Key Clarity (KC), Harmonic Complexity (HC), Articulation (AR), Repetition (RP), Mode (MD), Event Density (ED) and Brightness (BR).

Pulse Clarity (PC) is a descriptor that measures the sensation of pulse in music. Pulse is here seen as related to agogics: a fluctuation of musical periodicity that is perceptible as "beatings" at a sub-tonal frequency (below 20 Hz), and is therefore perceived not as tone (frequency domain) but as pulse (time domain). This is due to the fact that a pulse whose frequency is faster than 20 Hz is supposed to be perceived by the human auditory system not as a rhythmic stimulus but as a tonal one. In the same way, if the pulse is too slow (say, 1 pulse per minute), it is very likely that listeners will not identify the pulse cue, but will perceive each pulse as an independent stimulus. The pulse predicted by PC can be of any musical nature (melodic, harmonic or rhythmic), as long as it is perceived by listeners as an ordered stimulus in the time domain. The measuring scale of this descriptor is continuous, going from zero (no sensation of musical pulse) to one (clear sensation of musical pulse), independent of its frequency (there is no distinction between "slower" or "faster" pulses).

Key Clarity (KC) is a descriptor that measures the sensation of tonality, or of a musical tonal center. It is related to how clearly listeners perceive an excerpt of music (a sequence of notes) as tonal, disregarding its specific tonality and focusing only on how clear its perception is. For instance, if the sequence of notes C, D, E, F, G, A, B, C is played in ascending order, most people will clearly identify a tonal center: the C major scale. In contrast, a sequence such as C, F#, F, B, A#, A would not be easily related to any tonal center. The KC prediction ranges from zero (atonal) to one (tonal). Intermediate regions, neighboring the middle of the scale, tend to correspond to musical excerpts with sudden tonal changes or dubious tonalities.

Harmonic Complexity (HC) is a descriptor that measures the sensation of complexity conveyed by musical harmony. In communication theory, musical complexity is related to entropy, which can be seen as the amount of disorder of a system, or how stochastic a signal is. Here, however, we are interested in measuring the "auditory perception" of entropy instead of the acoustic entropy of a musical sound. For example, in acoustical terms white noise is a very complex sound, yet its auditory perception is that of a rather unchanging stimulus. The challenge here is capturing the cognitive identification of harmonic complexity, leaving aside (for now) melodic and rhythmic complexity. The measuring scale of this descriptor is continuous and goes from zero (imperceptible harmonic complexity) to one (clear identification of harmonic complexity).

Articulation (AR), as described in music theory, usually refers to the manner in which a melodic line is played. There are two opposite ways of playing a melody: staccato and legato. Staccato means "detached" and refers to playing a melody while inserting a small pause between the notes, so that the overall melody sounds detached. In opposition, legato means "tied together" and refers to playing a melody while avoiding noticeable pauses between notes, so that the overall melody sounds linked, or connected. This descriptor attempts to retrieve the articulation information from musical audio files by detecting frequent pauses or sudden drops of intensity, attributing to the file an overall rank that continuously ranges from zero (legato) to one (staccato).

Repetition (RP) is a descriptor that accounts for the presence of repeating patterns in musical excerpts. These patterns can be melodic, harmonic or rhythmic. This is done by measuring the similarity of hopped time-frames along the audio file, tracking repeating similarities that happen within a perceptible time delay (around 1 Hz to 10 Hz). Its scale ranges continuously from zero (no noticeable repetition within the musical excerpt) to one (clear presence of repeating musical patterns).

Mode (MD) is a descriptor that refers to the major, or Ionian, scale: one of the seven modes of the diatonic musical scale. The most identifiable modes are the major (Ionian) and minor (such as the Aeolian) scales. They are distinguished by the presence of a tonal center associated with intervals of major or minor thirds in the harmonic and melodic structure. In the case of our descriptor, MD is a computational model that retrieves from a musical audio file an overall output that continuously ranges from zero (minor mode) to one (major mode).

Event Density (ED) is a descriptor that refers to the overall amount of identifiable, yet simultaneous (melodic, harmonic or rhythmic) events in a musical excerpt. Its scale ranges continuously from zero (only one identifiable musical event) to one (the maximum amount of simultaneous events that an average listener can distinguish).

Brightness (BR) is a descriptor that retrieves the synesthetic sensation of musical brightness. It is somewhat intuitive to realize that this aspect is related to the audio spectral centroid, as the presence of higher frequencies accounts for the sensation of a brighter sound. However, other aspects can also influence its perception, such as attack, articulation, or the unbalancing or lack of partials in the frequency spectrum. Its measurement goes continuously from zero (opaque or "muffled") to one (bright).

After analyzing the time series retrieved from the recordings of the pianists with these eight descriptors, we concluded that the descriptors of Articulation (AR) and Pulse Clarity (PC) were the most sensitive to the musical distinctions that set apart affective and cognitive pianistic performances. This seems intuitively expected, as AR measures the variations in the melodic line and PC measures the presence of musical pulse: two features known to be related to expressiveness in music. AR predictions range from legato (near zero) to staccato (positive values). PC predictions range from "ad libitum" (near zero) to "pulsating" (positive values). A thorough explanation of each of these descriptors, as well as of the computational models behind the retrieval of these musical aspects, can be found in (Fornari and Eerola 2008) and (Lartillot et al. 2008).

The following figure shows an example of the time series for the predictions of PC and AR for the performances of the first pianist (1), in the cognitive (c) and affective (e) conditions. The title of each graph describes which measurement is shown. For instance, 1cAR refers to the audio file of Pianist 1, in the cognitive attention recording, for the prediction (time series) given by the computational model of Articulation (AR).
Figure 1. Example of the predictions of AR and PC descriptors,
for the cognitive (1c) and affective (1e) performance of one
pianist.
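The intuitions behind the AR and PC descriptors can be sketched with elementary signal processing. The sketch below is our own illustration, not the actual models used in the study: the frame size, silence threshold and autocorrelation search window are arbitrary choices, and a frame-wise RMS envelope stands in for a proper low-level loudness descriptor.

```python
import numpy as np

def rms_envelope(signal, sr, frame_s=0.02):
    """Frame-wise RMS energy: a low-level, context-free descriptor."""
    hop = int(sr * frame_s)
    n = len(signal) // hop
    frames = signal[:n * hop].reshape(n, hop)
    return np.sqrt((frames ** 2).mean(axis=1))

def articulation_index(signal, sr, frame_s=0.02, silence_db=-40.0):
    """Fraction of frames below a silence threshold: near 0 suggests legato
    (no gaps between notes), higher values suggest staccato."""
    env = rms_envelope(signal, sr, frame_s) + 1e-12
    db = 20 * np.log10(env / env.max())
    return float((db < silence_db).mean())

def pulse_clarity(signal, sr, frame_s=0.02):
    """Peak of the normalized envelope autocorrelation in a plausible
    beat-period range (0.2-2 s): higher means a clearer pulse."""
    env = rms_envelope(signal, sr, frame_s)
    if env.std() < 0.05 * env.mean():  # essentially flat envelope: no pulse
        return 0.0
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[env.size - 1:]
    ac = ac / ac[0]
    lo, hi = int(0.2 / frame_s), min(int(2.0 / frame_s), ac.size - 1)
    return float(ac[lo:hi].max())

# Synthetic test material: the same eight 440 Hz notes, tied vs. detached.
sr = 8000
note = np.sin(2 * np.pi * 440 * np.linspace(0, 0.25, sr // 4, endpoint=False))
legato = np.concatenate([note] * 8)  # notes joined, steady envelope
staccato = np.concatenate([np.concatenate([note, np.zeros(sr // 8)])] * 8)

print(articulation_index(legato, sr) < articulation_index(staccato, sr))  # True
print(pulse_clarity(legato, sr) < pulse_clarity(staccato, sr))            # True
```

On this toy material the detached version shows both a higher silent-frame fraction (a staccato-like AR cue) and a clearer periodic envelope (a higher PC-like value), matching the direction of the distinctions described above.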

With these data, we calculated the correlation coefficient (R) between the same pianist's cognitive (c) and affective (e) performances, for the predictions of AR and PC, for all nine pianists who participated in this experiment. We also analyzed the difference between cognitive and affective performances using a paired t-test, as shown below.
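Both statistics (per-pianist correlation R and the paired t-test across pianists) can be computed with standard tools. A minimal sketch assuming SciPy is available; the descriptor values below are invented for illustration and are not the data collected in this study:

```python
import numpy as np
from scipy import stats

# Hypothetical per-pianist mean AR values for the two conditions
# (illustrative numbers only, not the study's measurements).
ar_cognitive = np.array([1510.0, 1532.0, 1498.0, 1525.0, 1540.0,
                         1470.0, 1529.0, 1551.0, 1549.0])
ar_affective = np.array([1002.0, 1013.0,  995.0, 1010.0, 1021.0,
                          980.0, 1008.0, 1012.0, 1014.0])

# Pearson correlation between two series (Table 1 reports R between the
# cognitive and affective prediction time series of each pianist).
r, _ = stats.pearsonr(ar_cognitive, ar_affective)

# Paired t-test across the nine pianists (Table 2).
t_stat, p = stats.ttest_rel(ar_cognitive, ar_affective)
print(f"R = {r:.2f}, t = {t_stat:.1f}, p = {p:.1e}")
```

Pairing matters here: each pianist serves as his or her own control, so `ttest_rel` tests the mean of the per-pianist differences rather than comparing two independent groups.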
3. RESULTS

The correlation coefficients R between the same pianist's cognitive and affective performances, for all nine pianists, are depicted in Table 1.

Table 1. Correlation coefficient between performances.

Pianist    1     2     3     4     5     6     7     8     9
AR       0.10  0.04  0.12  0.09  0.20  0.05  0.19  0.09  0.02
PC       0.21  0.09  0.25  0.26  0.28  0.10  0.15  0.27  0.14

The results of the paired t-tests between cognitive and affective performances, for both descriptors, are presented in Table 2:

Table 2. Paired t-test for all performances.

                 Mean     St. Deviation
AR cognitive    1522.7        49.91
AR affective    1006.1        78.08
p value < 0.001

PC cognitive    2.8296        0.1312
PC affective    2.2224        0.1352
p value < 0.001

We found significant differences (p < 0.001) in both features analyzed here. According to these results, there was less AR (more legato, less staccato) and less PC (less metric precision, or more agogics) in all the affective performances of each pianist, as compared to their respective cognitive ones.

The legato articulation and time variations (agogics) are the features that seem to be related to musical expressiveness. More legato and less metric precision seem to indicate that affective performances convey more expressive features than cognitive ones.

It is important to mention that, after the induction of sadness (as described in sections 2.2 and 2.3), we asked the volunteers whether this emotion had been aroused. Eight out of nine volunteers commented that they had felt sadness, and all volunteers reported that they had no trouble directing their attention to this feeling when they played their affective performances.

4. DISCUSSION

In this study, we have documented quantitative differences between the musical features of cognitive and affective performances. The main findings of our study suggest that affective performances have more agogics and legato when compared to the cognitive ones. According to Juslin (2005), sad expression is associated with legato articulation, lower sound levels, larger time variations, and flatter micro-intonations. The emotional contagion theory speculates that the performer's affectivity could be involved in the mechanisms of his or her execution of the specific movements that express emotion throughout a pianistic performance. Therefore, the present data corroborate the emotional contagion theory, since 8 out of 9 volunteers reported that they strongly felt the emotion of sadness after having been induced into this feeling by the sad scenes. All volunteers reported that they succeeded in directing their attention to this feeling when they played their affective performances.

The investigation of the emotional contagion theory is important because it is the basis of many strategies, such as musical modeling (imitation), metaphor and felt emotions, used by music teachers to develop the expressiveness of their students' performances (Juslin, Karlsson et al. 2006). This also explains why musical expressiveness is considered by some scholars to be "instinctive" (Sloboda and Davidson 2003).

If this hypothesis is true, it would also be possible to understand why two important features related to expressiveness are suppressed in the cognitive performances. Studies using functional Magnetic Resonance Imaging (fMRI) (Pallesen, Brattico et al. 2005; Blair, Smith et al. 2007) have demonstrated that cognitive activity can diminish the activation of cerebral areas involved in emotional arousal. If cognition inhibits emotion, which is important for expressiveness, then playing the piano with all attention focused on cognitive aspects may constrain expressiveness. The idea that attention to the cognitive aspects of performance may constrain expressiveness is corroborated by music therapists. According to Sloboda (2005), "emotionally repressed individuals can discover strong and intuitive emotional expressivity within themselves, when offered a music performance medium in which they come to realize that there are no preordained standards of correctness."

In the musical environment, it is well known that technical and expressive skills interact with, and depend upon, one another (Sloboda 2000). Effective expressive performance requires very fine and subtle variations in performance parameters; in this way, the capacity to communicate expressive intention depends on the performer's level of technical mastery. We have found evidence among pianists with a high level of technique and expressiveness that emotion may support expressiveness and cognition may constrain it. We might speculate that these characteristics are not due to technical or expressive difficulties. The inexpressiveness found in students who practice piano decoding and controlling each note they play (Higuchi 2003, 2007) may be diminished if these students modify their attentional focus and "play with feeling".
5. ACKNOWLEDGMENTS

We would like to thank all the participant pianists, in particular pianist João Carlos Martins, who kindly collaborated with our study. We want to acknowledge the support of the Alcântara Machado Arts Faculty, the Art Institute of UNESP and the Arone Pianos Workshop, which kindly let us use their classrooms with pianos to carry out the training sessions. We would also like to thank the colleagues of the epilepsy investigations laboratory for their continuous support in many situations, and the Interdisciplinary Nucleus of Sound Communication (NICS) of UNICAMP for carrying out the analysis of the pianists' recordings with the computational models of music descriptors. This research is financially supported by FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo).

6. REFERENCES

Blair, K. S., B. W. Smith, et al. (2007). Modulation of emotion by cognition and cognition by emotion. Neuroimage 35(1): 430-40.

Blood, A. J. and R. J. Zatorre (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc Natl Acad Sci U S A 98(20): 11818-23.

Brown, S., M. J. Martinez, et al. (2004). Passive music listening spontaneously engages limbic and paralimbic systems. Neuroreport 15(13): 2033-7.

Fornari, J. and T. Eerola (2008). Estimating the Perception of Complexity in Musical Harmony. 10th International Conference on Music Perception and Cognition (ICMPC 10), Sapporo, Japan.

Fornari, J. and T. Eerola (2009). The Pursuit of Happiness in Music: Retrieving Valence with Contextual Music Descriptors. In Computer Music Modeling and Retrieval: Genesis of Meaning in Sound and Music. Berlin/Heidelberg, Springer. ISBN 978-3-642-02517-4.

França, C. C. (2000). Performance instrumental e educação musical: a relação entre a compreensão musical e a técnica. Per Musi 1: 52-62.

Gainza, V. H. d. (1988). Estudos de psicologia musical. São Paulo, Summus.

Higuchi, M. and J. Leite (2007). Rigidez métrica e expressividade na interpretação musical: uma teoria neuropsicológica. Opus, Revista Eletrônica da ANPPOM 13(2).

Higuchi, M. K. K. (2003). Técnica e Expressividade: diversidade e complementaridade no aprendizado pianístico. Master's thesis, Music Department, São Paulo University, São Paulo.

Higuchi, M. K. K. (2007). Dificuldades no aprendizado pianístico e a neuropsicologia. XVII Congresso da ANPPOM, São Paulo.

Higuchi, M. K. K. (2008). Tocando com concentração e emoção. São Paulo, Editora Som.

Juslin, P. N., J. Karlsson, et al. (2006). "Play it again with feeling: Computer feedback in musical communication of emotions." Journal of Experimental Psychology: Applied 12(2): 79-95.

Juslin, P. N. and J. A. Sloboda (2005). Music and Emotion: Theory and Research. Oxford, Oxford University Press.

Juslin, P. N. and D. Västfjäll (2008). Emotional responses to music: the need to consider underlying mechanisms. Behav Brain Sci 31(5): 559-75; discussion 575-621.

Krumhansl, C. L. (1997). An Exploratory Study of Musical Emotions and Psychophysiology. Canadian Journal of Experimental Psychology 51(4): 336-352.

Lartillot, O., et al. (2008). An Integrated Framework for Onset Detection, Tempo Estimation and Pulse Clarity Prediction. Ninth International Conference on Music Information Retrieval (ISMIR 2008), Philadelphia, PA, USA.

Lundqvist, L. O., F. Carlsson, et al. (2009). Emotional responses to music: experience, expression, and physiology. Psychology of Music 37(1): 61-90.

Menon, V. and D. J. Levitin (2005). The rewards of music listening: response and physiological connectivity of the mesolimbic system. Neuroimage 28(1): 175-84.

Pallesen, K. J., E. Brattico, et al. (2005). Emotion processing of major, minor, and dissonant chords: a functional magnetic resonance imaging study. Ann N Y Acad Sci 1060: 450-3.

Richerme, C. (1996). A técnica pianística: uma abordagem científica. São João da Boa Vista, Editora Air.

Sloboda, J. and J. Davidson (2003). The young performing musician. In Musical Beginnings: Origins and Development of Musical Competence, I. Deliège and J. Sloboda, eds. Oxford; New York, Oxford University Press.

Sloboda, J. A. (2000). Individual differences in music performance. Trends in Cognitive Sciences 4(10): 397-403.

Sloboda, J. A. (2004). The Musical Mind: The Cognitive Psychology of Music. Oxford, Oxford University Press.

Sloboda, J. A. (2005). Exploring the Musical Mind: Cognition, Emotion, Ability, Function. Oxford; New York, Oxford University Press.
