Neural Systems for Recognition of Emotional Prosody
This document is copyrighted by the American Psychological Association or one of its allied publishers.
This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.
Which brain regions are associated with recognition of emotional prosody? Are
these distinct from those for recognition of facial expression? These issues were
investigated by mapping the overlaps of co-registered lesions from 66 brain-damaged participants as a function of their performance in rating basic emotions.
It was found that recognizing emotions from prosody draws on the right frontoparietal operculum, the bilateral frontal pole, and the left frontal operculum. Recognizing emotions from prosody and facial expressions draws on the right frontoparietal cortex, which may be important in reconstructing aspects of the emotion
signaled by the stimulus. Furthermore, there were regions in the left and right
temporal lobes that contributed disproportionately to recognition of emotion from
faces or prosody, respectively.
Ralph Adolphs, Hanna Damasio, and Daniel Tranel, Department of Neurology, University of Iowa College of Medicine.
This research was supported by a National Institute of
Neurological Disorders and Stroke Program Project Grant
to Antonio R. Damasio and by grants from the National
Institute of Mental Health, the Sloan Foundation, and the
EJLB Foundation to Ralph Adolphs. We thank Antonio
Damasio for helpful comments on earlier versions of the
article, Jeremy Nath for help with testing participants,
and Denise Krutzfeldt for help in scheduling their visits.
For more information on this topic, go to http://www.medicine.uiowa.edu/adolphs.
Correspondence concerning this article should be addressed to Ralph Adolphs, Department of Neurology, University of Iowa College of Medicine, 200 Hawkins Drive,
Iowa City, Iowa 52242. E-mail: ralph-adolphs@uiowa.edu
Across functional imaging studies, there are consistent activations in right inferior frontal regions
(Buchanan et al., 2000; George et al., 1996; Imaizumi
et al., 1997), regions also implicated in working
memory for prosody (Rama et al., 2001), as well as
occasional reports of activation in the left middle
frontal gyrus (Imaizumi et al., 1997). Affective processing of auditory stimuli other than prosody has
been shown to engage more orbital regions of the
prefrontal cortex (Blood, Zatorre, Bermudez, &
Evans, 1999; Frey, Kostopoulos, & Petrides, 2000).
However, processing emotions draws not only on
cortex; it also appears to involve subcortical structures. A subcortical structure that has emerged as potentially important in emotional prosody recognition
is the basal ganglia. In a study involving 46 brain-damaged participants, and using a lesion overlap
analysis similar in spirit to ours, Cancelliere and
Kertesz (1990) found evidence that damage to the
basal ganglia, when present in addition to cortical
damage, was often associated with impaired prosody
recognition. Two comments are important to make
regarding this study. First, unlike the patients used in
our study, all patients in the study by Cancelliere and
Kertesz (1990) were studied in the acute epoch, likely
producing a rather different constellation of impairments than would have been obtained had they been
studied chronically. Second, the patients in that study
were not screened for confusional or attentional impairments, a fact the authors acknowledged in their
discussion. The study also did not report basic background neuropsychological information, such as IQ or
audiogram screening for the patients, leaving open the
possibility that the results could have been influenced
by these confounds. Nonetheless, the study is suggestive of a role for the basal ganglia, in conjunction with
cortical regions, in processing emotional prosody, a
conclusion also supported by functional imaging studies (Morris et al., 1999).
Findings consistent with the above study have been
documented in a recent study by Breitenstein et al.
(1998), which included both patients with focal cortical lesions and patients with Parkinson's disease. The study found that advanced stages of Parkinson's disease and focal lesions in the right frontal
regions were the only pathologies that resulted in impaired recognition of emotional prosody; the authors
interpreted the findings as evidence for a more distributed system, comprising frontal and striatal circuits, that participated in processing emotional
prosody. It is thus conceivable that damage to the
basal ganglia, under certain circumstances (advanced
Parkinsonism or acute stroke) may result in an imbalance in the right frontal regions with which the basal
ganglia are connected; the resulting fronto-striatal
dysfunction may contribute to the impairments observed. However, this leaves open the possibility, also
suggested by other studies of Parkinson's disease
(Adolphs, Schul, & Tranel, 1997), that chronic and
relatively restricted lesions to the basal ganglia are not
sufficient, by themselves, to result in impaired recognition of emotion.
A second subcortical structure that may participate
in recognizing emotional prosody is the amygdala.
However, the evidence for such a role is actually
much more solid in the domain of facial emotion than
it is in the domain of prosody. Although some lesion
studies have reported impaired recognition of emotional prosody following bilateral amygdala damage
(Scott et al., 1997), others have failed to find such an
effect (Adolphs & Tranel, 1999; Anderson & Phelps,
1998). There are a few recent functional imaging
studies that have reported activation of the amygdala
when people listen to emotional auditory stimuli
(Morris et al., 1999; Phillips et al., 1998), but the topic
needs further investigation.
Taken together, then, the studies to date point to the
following conclusions. First, recognizing emotional
prosody draws on multiple structures distributed between both the left and right hemispheres; second, the
roles of these structures are not all equal but may be
most apparent in processing specific auditory features
that provide cues for recognizing the emotion; third,
despite the distributed nature of the processing, the
right hemisphere, and in particular the right inferior
frontal regions, appear to be the most critical component of the system, working together with more posterior regions in the right hemisphere, the left frontal
regions, and subcortical structures, all interconnected
by white matter.
been shown to contribute to recognizing basic emotions from facial expressions. Much of the data from
lesion studies has come from single or multiple case
studies, although a few studies have examined neuroanatomical information from groups of lesion patients. Of particular relevance to the present investigation, in a prior study (Adolphs et al., 2000), we
investigated the recognition of basic emotions from
human facial expressions in 108 individuals with focal brain damage. That study revealed that somatosensory-related cortices in the right hemisphere were
critical to recognize emotion from facial expressions,
a finding we interpreted as evidence that the recognition of emotion in others requires the reconstruction in
the perceiver of somatosensory representations that
simulate what the signaled emotion would feel like
(cf. the Discussion for more details on this topic).
That finding, however, left open a critical question
regarding generality: Would similar regions be involved when recognizing emotion from stimuli other
than facial expressions, such as emotional prosody?
broadly support the idea that the left and right hemispheres are differentially important in processing
emotion, but they are not unanimous in providing support for either the valence or the right hemisphere
hypotheses. The bulk of the data supports the idea that
the right hemisphere plays a disproportionate role in
perceiving emotions of negative valence, but a clear
asymmetry in perceiving emotions of positive valence
has not emerged.
It has been suggested that particular dimensions of
emotion, such as valence and arousal, might be processed by distinct brain systems. There is recent support for the hypothesis that the right hemisphere may
be especially important for recognizing the arousal
dimension in both prosodic (Peper & Irle, 1997a) and
facial (Peper & Irle, 1997b) expressions of emotion,
findings consistent with the right hemisphere's demonstrated role in mediating arousal responses to a variety of emotional stimuli (Morrow, Vrtunski, Kim, &
Boller, 1981; Tranel & Damasio, 1994; Zoccolotti,
Scabini, & Violani, 1982).
Method
As reviewed above, earlier work steered theories in
the direction of a clear right-hemisphere lateralization
for processing emotional prosody, analogous to the
left-hemisphere specialization for propositional language (e.g., Ross, 1981), whereas later models have
argued for two key modifications of this view. First, it
appears that both the right and left hemispheres participate in processing prosody, although they likely
make different contributions. Second, it seems clear
that perception of prosody is not a monolithic process
but rather draws on a complex set of multiple cues
provided by the stimulus; moreover, the particular
cues a listener may use to make judgments about the
stimulus can vary depending on the nature of the
stimulus, the demands of the task, and perhaps even
the idiosyncratic strategies used by that person. The
upshot of these more recent developments is not to
discard the right hemisphere hypothesis in its entirety,
but to acknowledge that, whereas the right hemisphere may be, on many tasks, more critical than the
left in processing emotional prosody, a comprehensive account of how we judge emotion from prosodic
stimuli points toward a set of processes implemented
in a distributed, bihemispheric neural system. The
present study aims to explore such a system by elucidating some of its component structures. We limit
ourselves to a particular set of stimuli and a single
Participants
We tested 66 participants with focal brain damage
and 14 controls whose demographic and neuropsychological background data are given in Table 1. Of
the brain-damaged participants, 25 had lesions in the
left hemisphere, 26 in the right hemisphere, and 15
had bilateral lesions in homologous regions (either
bilateral prefrontal or bilateral occipital). A summary
of the sampling density in each brain region is given
in Table 2, indicating that we sampled the entire cortex but that some regions were sampled more densely
than others. We attempted to include primarily individuals with cortical damage, and we excluded those
who had mostly white-matter lesions. This criterion
also led to a relatively small number of individuals
with damage to the basal ganglia.
All brain-damaged participants were selected from
the patient registry of the Division of Cognitive Neuroscience and Behavioral Neurology at the University
of Iowa School of Medicine and had been fully characterized neuropsychologically (by Daniel Tranel; cf.
Tranel, 1996) and neuroanatomically (by Hanna
Damasio; cf. H. Damasio & Frank, 1992; Frank,
Damasio, & Grabowski, 1997). They were carefully
screened to avoid the inclusion of individuals with
impairments that might confound possible impaired
Table 1
Demographics and Background Neuropsychology

Variable              Bilateral       Left            Right           Normal
Gender (F/M)          8/7             10/15           11/15           7/7
Age (M/SD)            58/9            46/16           47/16           52/16
Education (M/SD)      12/3            13/3            14/2            13/2
Verbal IQ (M/SD)      99/16           99/14           100/12          —
Perform. IQ (M/SD)    105/21          107/13          95/15           —
Aphasia (M/SD)        0.4/0.8         0.3/0.5         0               —
Depression (M/SD)     0.3/0.6         0.4/0.7         0.3/0.4         —
Etiology              5 tum/10 cva    16 cva/9 tlob   18 cva/8 tlob   —
Onset-test (M/SD)     8.7/5           10.0/5          6.8/4           —

Note. Groups are classified according to lesion side: bilateral, left, right, or normal control. Given are gender ratio (F = female, M = male); age; years of education; verbal and performance (Perform.) IQ from either the revised or the third edition of the Wechsler Adult Intelligence Scale (Wechsler, 1981); aphasia, a composite measure of residual aphasia, which was evaluated by Daniel Tranel from standardized neuropsychological tasks (Tranel, 1996) on a scale of 0 (none) to 2 (severe); depression, a composite measure of depression, which was evaluated by Daniel Tranel from two neuropsychological tasks, the Beck Depression Inventory (Beck, 1987) and the Minnesota Multiphasic Personality Inventory (Greene, 1980), on a scale of 0 (none) to 2 (severe); etiology of the lesion in terms of three classifications: tumor resection (tum), cerebrovascular accident (cva), or temporal lobectomy (tlob); and years between the onset of the lesion and the testing date (Onset-test). Dashes indicate that normal participants were not assessed for IQ, aphasia, or depression.
Table 2
[Number of individuals with lesions in each brain region; the region labels were lost in this copy, leaving only the counts (17, 6, 13, 7, 13, 9, 10, 5).]
a happy surprise. We used the following four sentences: "Men play football," "There are trees in the forest," "This is my pencil," and "People read books."
To verify that these sentences were indeed judged to
be semantically neutral, we asked 5 independent normal participants to rate written versions of these sentences, presented together with 30 other and more
emotional sentences, on a 4-point scale (0 = entirely neutral; 3 = extremely emotional); all of the raters
gave ratings of 0 to each of these four sentences. We
chose a particular female speaker, on the basis of the
apparent quality of her prosody, after informal piloting with several of the staff in our department. The
female speaker was one of our staff unfamiliar with
any of the hypotheses of the study and had no formal
background in voice training. She was instructed to
produce sentences with the most clear and intense
emotion possible; although we discussed the intended
emotions with her at length, no particular emotional
scenarios were given to her to produce the stimuli.
After several practice sessions with the experimenters, she privately read aloud and recorded the four
above sentences in each of the five different emotional tones, in four separate recording sessions (thus
yielding four sets of 20 stimuli each; she also produced stimuli that were intended to signal disgust initially, but we omitted these from further inclusion
because they could not be recognized reliably). Final
stimuli were chosen by the experimenters from her
four sets on the basis of the intensity and clarity of the
emotion conveyed as well as the overall auditory
quality of the particular sample recorded. This yielded
a final total of 20 recorded sentences (the 4 sentences × the 5 emotions used), which were subsequently digitized at 22 kHz and normalized for average amplitude. The amplitude normalization was done to avoid
the possibility that participants could deduce the correct emotion simply by reasoning from perceived
loudness (see the Discussion for more details on this
issue). Sentences were played in randomized order to
Table 3
Normative Data for Stimuli: Frequencies With Which Stimuli Were Matched to a Given Label by 10 Controls

                          Label chosen
Stimulus      Happy    Sad      Angry    Afraid   Surprised
Happy         0.95     0.00     0.00     0.00     0.05
Sad           0.00     0.88     0.00     0.13     0.00
Angry         0.00     0.18     0.83     0.00     0.00
Afraid        0.03     0.00     0.00     0.70     0.28
Surprised     0.33     0.00     0.00     0.00     0.68
Analyses of Data
Our approach required the analysis of neuroanatomical data pertaining to the sites of lesions as a
function of the task performances of a given individual. Essentially, we asked how lesion location
would systematically covary with task performance. It
should be noted that we do not make any formal
claims about impairment in this study. We do not
specify a particular cut-off performance relative to
normal performance as impaired; rather we simply
rank-ordered the scores from all brain-damaged individuals and used a median-split to contrast low performance scores with high performance scores to
extract possible systematic relations between performance and the neuroanatomical distribution of lesion location. To avoid confusion, it should be kept in mind that we use the terms "worse" and "better" to mean a lower or higher performance score in relation to the rest of the brain-damaged individuals. Thus, stating that damage to a certain brain region resulted in worse performance means, in our context, that individuals who had lesions in that brain region obtained lower performance scores than did individuals who had lesions elsewhere.
Analysis of task performance. We obtained three different types of data from our task: (1) participants' ratings of stimuli on the intended emotion label, (2) the correlations of participants' ratings across all eight labels with the mean ratings given by controls, and (3) derived accuracy measures obtained from the maximal intensity rating given on a particular emotion label.
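These three measures can be sketched as follows (a minimal illustration under our own assumptions: the label set follows Table 4, the example numbers are hypothetical, and the accuracy derivation from the maximal rating is our reading of the text, not the authors' code):

```python
import numpy as np

# Labels rated by participants (cf. Table 4); the names are ours.
LABELS = ["awake", "happy", "sleepy", "sad", "disgusted",
          "angry", "scared", "surprised"]
# Intended label for each stimulus class ("afraid" stimuli map to "scared").
INTENDED = {"happy": "happy", "sad": "sad", "angry": "angry",
            "afraid": "scared", "surprised": "surprised"}

def intended_rating(ratings, emotion):
    """Measure 1: intensity rating given on the intended emotion label."""
    return ratings[emotion][LABELS.index(INTENDED[emotion])]

def correlation_with_controls(ratings, control_means, emotion):
    """Measure 2: correlation of a participant's ratings across all eight
    labels with the mean ratings given by controls."""
    return float(np.corrcoef(ratings[emotion], control_means[emotion])[0, 1])

def accuracy(ratings, emotion):
    """Measure 3 (assumed derivation): a stimulus counts as correct when
    the maximal intensity rating falls on the intended label."""
    return float(np.argmax(ratings[emotion]) == LABELS.index(INTENDED[emotion]))

# Hypothetical participant ratings for one "happy" stimulus (0-4 scale).
ratings = {"happy": np.array([3.0, 3.5, 0.2, 0.1, 0.3, 0.2, 0.4, 1.5])}
# Control mean ratings for the same stimulus class, taken from Table 4.
control_means = {"happy": np.array([3.66, 3.23, 0.60, 0.59, 0.63,
                                    0.42, 0.42, 1.62])}
```

With these toy numbers, intended_rating(ratings, "happy") returns 3.5 and accuracy(ratings, "happy") returns 1.0, since the maximal rating falls on the happy label.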
faces) − (rank order on prosody) and reflects differential performance in recognizing emotions from
these two classes of stimuli. We also derived a subset
of participants who had low scores on both faces and
prosody by including only those participants who had
the lowest scores (were in the bottom partition, cf.
below) on both measures; this sample consisted of 13
participants.
Analysis of neuroanatomical data. Our approach
in analyzing the data was to group individuals into
partitions on the basis of their performance scores and
to then visualize the lesion overlaps of all individuals
within a given partition. The general approach of
mapping overlaps of lesions onto a common brain has
been used in prior studies of emotional prosody (e.g.,
Cancelliere & Kertesz, 1990). For most of the analyses shown in the figures, we used a median-split
analysis: a contrast of the 50% of individuals with the
lowest scores versus the 50% with the highest scores.
Note that these neuroanatomical analyses are entirely
internal to the sample of brain-damaged individuals,
rather than comparing brain-damaged to normal performance. Consequently, we do not make claims
about the absolute level of impairment but only about
the relative level of performance within the brain-damaged group. The question of interest now was the
following: Might there be regions of the brain that,
when lesioned, were associated with low performance
scores more often than with high performance scores?
Such regions are encoded in red in the figures (see
figure captions for details in each case) and show that
there were more individuals with lesions in that region
who fell into the partition with low performance
scores than those who fell into the partition with high
performance scores. This same approach has been
used successfully in analyzing the neural systems for
recognizing emotion from facial expressions (Adolphs
et al., 2000), for naming actions (Tranel, Adolphs,
Damasio, & Damasio, 2001), for naming concrete entities (H. Damasio, Tranel, Adolphs, Grabowski, &
Damasio, in press), and for spatial memory (Barrash,
Damasio, Adolphs, & Tranel, 2000).
We obtained all images using a method called
MAP-3 (Frank et al., 1997). Briefly, the lesion visible on each brain-damaged individual's MR or CT scan
was manually transferred onto the corresponding sections of a single, normal, reference brain, and lesions
from multiple participants were summed to obtain lesion density at a given voxel. We divided our sample
of 66 participants into two groups: the 33 with the
lowest and the 33 with the highest mean performance
in recognizing emotion from prosody. This median-split analysis was undertaken for each of the derived
measures that are described in the analyses above. A
lesion density image was generated for each group,
and the two images were then subtracted from one
another. This analysis resulted in images that showed,
for all participants who had lesions at a given voxel,
the difference between the number of participants in
the bottom half of the partition and the number of
participants in the top half of the partition. The analysis revealed particular regions ("hot-spots") within
which lesions systematically resulted in lower performance.
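In outline, the median-split difference image can be computed as follows (an illustrative sketch with toy arrays; this is not the MAP-3 implementation, and the array shapes and ranking convention are our own assumptions):

```python
import numpy as np

def lesion_difference_image(lesion_masks, scores):
    """Median-split lesion overlap difference, as described in the text.

    lesion_masks: (n_participants, x, y, z) binary arrays, each participant's
                  lesion transferred onto a common reference brain.
    scores:       (n_participants,) performance scores on the task.

    Returns a voxelwise image: (number of low scorers lesioned at a voxel)
    minus (number of high scorers lesioned there). Positive values mark
    voxels where lesions were associated with low performance more often
    than with high performance.
    """
    order = np.argsort(scores)               # rank participants by score
    half = len(scores) // 2                  # e.g., 33 of 66 in the paper
    low, high = order[:half], order[half:]   # bottom vs. top half
    low_density = lesion_masks[low].sum(axis=0)    # overlap, low scorers
    high_density = lesion_masks[high].sum(axis=0)  # overlap, high scorers
    return low_density - high_density

# Toy example: 4 participants, a 2x2x1 "brain".
masks = np.zeros((4, 2, 2, 1), dtype=int)
masks[0, 0, 0, 0] = 1   # low scorer lesioned at voxel (0, 0, 0)
masks[1, 0, 0, 0] = 1   # another low scorer lesioned at the same voxel
masks[2, 1, 1, 0] = 1   # high scorer lesioned elsewhere
masks[3, 1, 1, 0] = 1
diff = lesion_difference_image(masks, np.array([0.1, 0.2, 0.8, 0.9]))
# diff is +2 at (0, 0, 0) and -2 at (1, 1, 0)
```

In the figures, positive values of such a difference image correspond to the red regions and negative values to the blue regions.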
To confirm the reliability of the results obtained
from all partitions that used a median split, we repeated all our analyses, as in Figure 2c (shown later),
using only a subset of individuals at either extreme of
the performance range and omitting a group in the
middle (specifically, the 20 with the lowest scores
versus the 20 with the highest scores, omitting the
middle 26). These analyses all showed very similar
overall patterns, although in the case of partitions with
fewer individuals they showed somewhat less detail
and smaller absolute lesion differences because of the
smaller sample size (compare Figures 2a and 2c
[shown later]). We consequently decided to show in
the figures the neuroanatomical analyses that use the
entire sample of participants and a median split.
In some of our analyses (see Figure 1 [shown
later]), examination of our partitions showed that
there were participants in the bottom (lower scores)
partition who had bilateral lesions. To investigate further the hot-spots that could be attributed to unilateral
lesions versus those that might result from bilateral
lesions, we created separate partitions for participants
with unilateral lesions and for participants with bilateral lesions. We proceeded by first using the partition
of all 66 participants as described above. Within the
bottom and top partitions, we then separated participants with bilateral lesions and participants with unilateral lesions.
We carried out statistical analyses on some of the
neuroanatomical results shown in Figures 1 and 2
(shown later) to establish their significance. This is
not possible to do in a global fashion for every voxel
because it would require an extremely large number
of corrections for multiple comparisons. Following
our prior procedure (Adolphs et al., 2000), we calculated probabilities for a few voxels located at the centroid of regions of maximal lesion overlap within a
given region of interest. The probability of obtaining
a given density of lesions in the bottom partition,
Results
Brain-damaged individuals had lesions in locations
that were distributed throughout the brain, although,
as expected, some regions were sampled more
densely than others (Table 2). Notably, a disproportionately large number of lesions overlapped in the temporal pole, primarily because of the inclusion of several individuals who had lesions resulting from the
same surgical procedure (temporal lobectomy). As
Table 1 shows, there were no differences in background neuropsychological performances among participants with lesions in different hemispheres, with
the exception of Performance IQ, where there was a
significant difference between participants with unilateral lesions in the left and right hemispheres, t(48) = 3.1, p < .005 (cf. Table 1).
Normal individuals chose the correct emotion label
a high proportion of the time when asked to choose
the one of the five words that best fit the stimulus (our
stimulus validation task; see Table 3). There was
more of a spread when they were asked to rate intensity and given additional labels on which to provide
those ratings (derived measure [1] from our experimental task; see Table 4), but these ratings also con-
Table 4
Normative Data for Stimuli: Mean Ratings Given by 14 Normal Participants to Each Class of Stimuli on Each of the Emotion Labels That Were Rated

                                 Label rated
Stimulus     Awake   Happy   Sleepy   Sad    Disgusted   Angry   Scared   Surprised
Happy        3.66    3.23    0.60     0.59   0.63        0.42    0.42     1.62
Sad          2.44    0.31    2.23     3.82   1.63        1.07    1.68     0.86
Angry        3.62    0.82    0.44     1.59   3.18        3.51    0.90     0.94
Afraid       3.46    1.30    0.79     1.77   0.90        0.95    2.75     2.54
Surprised    3.78    2.41    0.46     0.69   0.63        0.38    0.60     3.69
Table 5

Ratings
Group     Happy M (SD)   Sad M (SD)    Angry M (SD)   Afraid M (SD)   Surprised M (SD)
14 NC     3.20 (0.70)    3.80 (0.50)   3.50 (0.70)    2.80 (1.20)     3.70 (0.80)
66 BD     3.40 (0.80)    3.90 (1.00)   3.40 (1.20)    2.80 (1.40)     4.00 (1.30)

Correlations
66 BD     0.90 (0.20)    0.74 (0.14)   0.79 (0.18)    0.61 (0.24)     0.91 (0.17)
40 BD     0.93 (0.05)    0.72 (0.17)   0.80 (0.13)    0.62 (0.16)     0.91 (0.09)

Note. Shown are data concerning Figure 1 for normal and brain-damaged individuals (top) and data concerning Figure 2 as correlations with normal ratings (bottom), from the entire participant sample of 66 as well as from the 40 used for Figure 2c. NC = normal control participants; BD = brain-damaged participants.
Figure 1. Lesion Distribution From Emotion Ratings. The neuroanatomical sites of lesions were analyzed as a function of
task performance from the ratings that were given to each emotion. MAP-3 images are shown in which the color at each voxel
represents the difference in the overlaps of lesions from those individuals with performances in the bottom half of the
distribution, compared with those with performances in the top half. The scale at the bottom shows how color corresponds to
number of lesions: Blue colors correspond to a larger number of lesions from individuals with performances in the top half;
red colors correspond to a larger number of lesions from individuals with performances in the bottom half. The performance
measure used to partition the group was the rating of the intensity of the intended emotion conveyed by prosodic stimuli (e.g.,
rating how happy a happy voice sounded).
a. Data from all 66 brain-damaged participants, including those with unilateral and those with bilateral lesions. White arrows
in the top image indicate the voxels in the right premotor and the right polar cortex for which the probabilities reported in the
Results were calculated.
b. Data from 52 of the 66 who had exclusively unilateral lesions.
c. Data from 14 of the 66 who had bilateral lesions. Note that b and c were generated as subsets of the same partition used
to derive the data in a.
all the emotion labels. Table 6 shows such data, broken down for each of the groups that went into Figure
1a: the bottom (lowest ratings) and the top (highest
ratings) partitions that were made on the basis of ratings.
Figure 2 (Continued)
the lowest mean accuracy scores (yellows and reds). This partition divided the group into
those with mean accuracy scores .07 and those with mean accuracy scores 0.8. The
middle 16 participants were excluded because they did not differ on their mean accuracy
scores.
c. MAP-3 difference overlap images of mean correlation scores, as in (a), but only for the 20
individuals at either extreme of the performance range. MAP-3 difference overlap images are
shown for the 20 with the highest correlations (most normal ratings; blue colors), compared
with the 20 with lowest correlations (most abnormal ratings; red colors). Data from the
middle 26 participants were not used in this analysis, so that we could obtain a contrast
between extremes of performance. This analysis corroborates the results obtained when data
from all 66 participants were analyzed. The bottom row shows coronal sections at the
locations indicated by the white lines in the three-dimensional images above.
d. Histograms of the number of individuals who had a given score, indicating the lower (red)
and upper (blue) partitions. On the left are the data from accuracy scores; the gray bar
indicates individuals with scores in the middle who were omitted from the analysis. On the
right are the data from correlations, showing the partitions into the bottom and top 20 (dark
red and blue) and into the bottom and top 33 (all reds and all blues). The purple bar in the
middle includes some individuals who were in the bottom-33 partition, and some who were
in the top-33 partition (not separated in this histogram because of the scale). In each graph,
the y-axis encodes number of individuals with a given score, and the x-axis gives the score.
Table 6
[The body of this table was garbled beyond recovery in this copy. It reported mean ratings on each emotion label (columns) for each stimulus class (rows), separately for the bottom-33 and top-33 partitions used to generate Figure 1a.]
Note. Details of the ratings given on the labels for the different emotions are shown for each of the partitions used to generate the images shown in Figure 1a. Columns give the mean rating assigned to a label, and rows give the mean rating assigned to a stimulus class. The particular rating on which the partition was decided is shown in bold in each table.
Table 7
Ratings of Brain-Damaged Participants Compared With Normal Ratings (Z-Scores)
[Table 7 body not recoverable from this extraction: three rows of mean z-scores for participants with left, right, or bilateral frontal damage, followed by individual rows for the seven bilateral-frontal participants (nos. 318, 500, 770, 1584, 1815, 1983, 2021), with one column per emotion (happy, sad, angry, afraid, surprised). The extracted values lost their signs and cell order.]
Note. Shown are ratings of the intensity of the emotion on its intended label in units of SD above or
below the normal control mean. Means are shown for participants with left, right, or bilateral frontal
damage, and individual data are shown for the 7 participants with bilateral frontal damage shown in
Figure 1c.
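The z-scores reported in Table 7 follow the standard definition: a participant's rating minus the normal control mean, divided by the control standard deviation. A minimal sketch (the function name and data below are illustrative, not from the study):

```python
def rating_z_score(patient_rating, control_ratings):
    """Express a rating in units of SD above or below the control mean."""
    n = len(control_ratings)
    mean = sum(control_ratings) / n
    # sample SD of the normal control group
    sd = (sum((r - mean) ** 2 for r in control_ratings) / (n - 1)) ** 0.5
    return (patient_rating - mean) / sd

# Hypothetical example: a low intensity rating against controls clustered near 4
z = rating_z_score(1.0, [4.0, 4.5, 3.5, 4.0, 4.2, 3.8])  # strongly negative
```

A negative value indicates a rating below the control mean, as for the impaired participants in Table 7.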
scores for facial expressions, and damage to the left anterior temporal lobe was associated with high correlation scores for prosody (see Figure 3a), raising the
possibility that the left and right temporal poles might
be differentially involved in recognizing emotion
from the face or the voice.
A within-subject comparison of performance on
both tasks confirmed this impression and offered additional findings. First (see Figure 3b), an analysis of
the overlaps of lesions of those participants who had
low scores (i.e., who were in the bottom 50% partition) for both prosody and facial expression (N = 13) revealed a maximal overlap of lesions in the bilateral frontal operculum and the right frontoparietal
operculum. Second, damage to the left temporal pole
resulted in worse recognition of emotion from facial
expressions than from prosody, whereas damage to
the right temporal pole resulted in worse recognition
of emotion from prosody than from facial expression
(see Figure 3c). Both of these findings are consistent
with the data from prosody and from facial expression
analyzed separately (see Figure 3a). However, it is
important to point out that neither right nor left temporal pole damage in fact led to a particularly low
performance score, when compared with all other participants.

Figure 3. Comparing Prosody and Facial Expression
Recognition. Data used to partition the groups were correlation scores for 46 individuals who participated in
rating emotions from both prosody and faces. MAP-3 images are shown in which color corresponds to the number
of participants with lesions at a given location, as indicated on the respective color scales. In all cases, the dependent measure was the mean correlation score across all stimuli.
a. Data from prosody and from facial expressions shown
individually. Color represents the difference between the
number of lesions from individuals in the bottom 50% compared with those in the top 50% (23 in each partition).
b. Location of lesions associated with compromised recognition of emotion from both prosody and faces. Shown are
the overlaps of lesions from individuals who were in the
bottom partitions both for faces and for prosody (N = 13).
c. Location of lesions associated with differential performance on faces and prosody. To obtain these data, we first
calculated the difference in performances on prosody and on
faces (see Method). Overlaps of lesions from all individuals
in one of the two partitions of this derived difference measure are shown (22 were better on prosody than they were
on faces; 24 were better on faces than they were on
prosody). Note that this analysis yields the relative performances on faces as compared with those on prosody and
does not necessarily imply that participants were impaired
on either class of stimulus.
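The derived difference measure described in panel c amounts to partitioning participants by the sign of the within-subject difference (prosody score minus faces score). A sketch under that reading (the function name and data are illustrative, not from the study):

```python
def partition_by_difference(scores):
    """scores: dict mapping participant ID -> (prosody_corr, faces_corr).

    Returns the two partitions of the derived difference measure:
    participants who did relatively better on prosody, and those who
    did relatively better on faces.
    """
    better_on_prosody, better_on_faces = [], []
    for pid, (prosody, faces) in scores.items():
        if prosody - faces > 0:
            better_on_prosody.append(pid)
        else:
            better_on_faces.append(pid)
    return better_on_prosody, better_on_faces

# Hypothetical correlation scores for three participants
demo = {"p1": (0.8, 0.5), "p2": (0.3, 0.7), "p3": (0.6, 0.6)}
prosody_grp, faces_grp = partition_by_difference(demo)
```

As the caption notes, this measure is purely relative: a participant can land in either partition while performing well (or poorly) on both tasks.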
Discussion
The data from Table 5 show that brain damage results in a larger variance in performance than in controls, despite an essentially normal mean performance. This is especially so for negative emotions, and it is a very general finding in neuropsychology. The reason is that not all brain damage is equal: Whereas damage in some regions leaves performance on a particular task unaffected, damage elsewhere may impair performance severely. The aim of the present study was to determine the extent to which the variance in the performances of brain-damaged participants could be attributed to damage in specific brain regions. Might
low performance scores result from damage in specific regions? One might expect that those participants
with the lowest scores would share damage to one or a few brain regions responsible
for their impairment; conversely, one would expect
that those participants with the highest scores would
not share damage in those same regions. We addressed this question by mapping the lesions of subpopulations of our brain-damaged patient sample as a
function of their task performances. The findings
from this analysis can be summarized as follows (see
Figure 4).
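The lesion-mapping logic just described can be sketched as a voxelwise difference of overlap counts between the low- and high-scoring partitions. The toy version below uses a 1-D "volume" and invented lesion masks purely for clarity; the actual analysis used MAP-3 overlap maps of co-registered 3-D lesions:

```python
def overlap_difference(low_group, high_group, n_voxels):
    """For each voxel, (# lesions from low scorers) - (# from high scorers).

    low_group / high_group: lists of lesion masks, each mask a set of
    voxel indices covered by that participant's lesion.
    """
    diff = [0] * n_voxels
    for mask in low_group:
        for v in mask:
            diff[v] += 1          # damage shared by poor performers
    for mask in high_group:
        for v in mask:
            diff[v] -= 1          # damage that spared performance
    return diff

low = [{2, 3, 4}, {3, 4, 5}]      # lesions of low-scoring participants
high = [{0, 1}, {4, 5}]           # lesions of high-scoring participants
diff_map = overlap_difference(low, high, 8)
```

Voxels with large positive values are those damaged disproportionately in poor performers, which is exactly what the color scales in the MAP-3 images encode.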
achieved statistical significance using a nonparametric test, the actual performance differences are small
(see Table 8), and it will be important in future studies
to follow up this finding, perhaps by using functional
imaging of facial and prosodic emotion in the same
individuals.
judgment of how intensely the stimulus signals that particular emotion. The poor recognition revealed with this measure could arise for several different reasons: Someone might be insensitive to all emotions; or they might be insensitive to sadness alone but recognize other emotions normally; or they might be sensitive to sadness but mistake it for another emotion. An exploration of these possibilities is offered by the data in Table 6. A general finding (see Figure 1) is that the bilateral frontal pole was important for judging the intensity of highly arousing emotions, such as surprise, anger, and fear, whereas right frontoparietal regions were important for judging the intensity of sadness.
Our second measure, the correlation of an individual's ratings with normal ratings across all the different
labels, reflects a broader ability to judge the relative
intensity of multiple emotions signaled by a single
stimulus. The right frontoparietal cortex and the left
frontal operculum appeared critical on this measure,
whereas the frontal pole did not. Our third measure,
accuracy scores derived from the maximal intensity
rating, drew on brain regions that were a combination
of those revealed above: right somatosensory and motor-related cortices in the parietal and the frontal cortex, a small region in the left frontal operculum, and
also the frontal pole. This finding is not altogether
unexpected because the accuracy measure in fact
draws both on the absolute intensity that the stimulus
is judged to signal (as for the raw rating measure [1])
and on the relative magnitude of this intensity rating
compared with the ratings given to other emotions (as
for the correlation measure [2]).
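The three measures can be made concrete with a short sketch: the raw rating on the intended label, the Pearson correlation of a participant's rating profile with a normal mean profile, and an accuracy score asking whether the maximal rating falls on the intended label. All function names and data below are illustrative, not from the study:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length rating profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

LABELS = ["happy", "sad", "angry", "afraid", "surprised"]

def three_measures(ratings, norm_profile, intended):
    i = LABELS.index(intended)
    raw = ratings[i]                              # measure 1: raw rating
    corr = pearson(ratings, norm_profile)         # measure 2: profile correlation
    accurate = ratings.index(max(ratings)) == i   # measure 3: accuracy
    return raw, corr, accurate

# Hypothetical ratings for a "sad" stimulus, against a normal mean profile
raw, corr, acc = three_measures([0.5, 4.0, 1.0, 1.5, 0.5],
                                [0.3, 4.5, 0.8, 1.2, 0.6], "sad")
```

This makes the dependency explicit: accuracy is high only when the intended label both receives a substantial rating (measure 1) and outranks the ratings given to the other labels (measure 2).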
berg, 1982; Jaencke, 1994) and that mimic the expression shown in the stimulus (Hess & Blairy, 2001);
moreover, such facial reactions to viewing facial expressions occur even in the absence of conscious recognition of the stimulus, for example, to subliminally
presented facial expressions (Dimberg, Thunberg, &
Elmehed, 2000). Viewing the facial expression of another can thus lead to changes in one's own emotional state; this in turn would result in a re-mapping of one's own emotional state, that is, a change in feeling
(Schneider, Gur, Gur, & Muenz, 1994; Wild, Erb, &
Bartels, 2001).
The above ideas could explain why damage to right hemisphere regions that encompass motor- and somatosensory-related cortices was found to be associated with compromised emotion recognition in our
study. They might also provide an explanation for the
role that the basal ganglia have been found to play in
other studies: Like motor and premotor cortices, basal
ganglia would be recruited when the perceiver needs
to engage a routine that simulates the production of
the emotional state that they heard in the stimulus.
Our data from the present study are in line with those
from a prior study that investigated recognition of
emotion from facial expressions (Adolphs et al.,
2000) and point to the importance of both somatosensory and motor-related cortices in the right hemisphere (although, as noted above, it is possible that
either somatosensory or motor-related cortices could
be driving most of the effect because most individuals
in our sample had lesions encompassing both regions
to some extent, making the contribution of these two
regions statistically correlated). This should not be
altogether surprising because motor and somatosensory representations are two sides of the same coin:
producing and feeling an action. In the case of recognizing an emotion, simulation involves both the motor
and premotor cues for producing the stimulus perceived, and the somatosensory cues that would be
present if one were producing the stimulus.
In addition to the right frontoparietal regions, rating
the intensity of certain emotions expressed in prosody
depended on the ventral and polar frontal cortices.
This finding was obtained from individuals with bilateral frontal lesions, which typically encompassed
orbital and polar frontal cortex on both sides, and their
low ratings were notable for emotions of high arousal,
specifically surprise, fear, and anger (cf. Figure 1c
and Table 7). It is well known that such bilateral
frontal lesions result in an impaired ability to express
or experience emotional arousal normally (A. R.
Damasio, 1996; Tranel & Damasio, 1994), consistent
with the neuroanatomical projections from orbitofrontal cortices to structures involved in sympathetic autonomic control, such as the periaqueductal grey and
paraventricular hypothalamus (Morecraft, Geula, & Mesulam, 1992; Öngür et al., 1998). In line with our
above explanation according to the simulation theory,
it is therefore possible that bifrontal damage impairs
recognition of the intensity of highly arousing emotions because it impairs the ability to reconstruct somatic and experiential components of emotional
arousal.
A final region implicated by our findings is the left
frontal operculum, although low overall sampling
density in that region prevented this finding from
reaching full statistical significance. The same region
is also activated in some imaging studies of imitation
(Iacoboni et al., 1999) as well as in studies of emotional prosody (Imaizumi et al., 1997). As this region
is premotor cortex, its involvement is consistent with
the idea that it is part of a network of structures important for constructing a simulation. Because this
region is also important for the motoric aspects of
language, it is alternatively conceivable that the involvement of the left frontal operculum simply reflects the lexical demands made by our task: Participants had to give numerical ratings on a written
emotion label. However, this possibility is not supported, given that we found no correlation between
Verbal IQ and emotion recognition. Furthermore, if
the involvement of the left frontal operculum could be
attributed solely to compromised language function,
then one should expect more posterior regions in the
left hemisphere (specifically, Wernicke's area rather than Broca's area) to play an even greater role here,
because those regions subserve the comprehension of language. Given that we did not find any evidence for the involvement of left posterior cortices,
we think it unlikely that the role played by the left
frontal operculum can be attributed solely to its language functions.
Conclusion
Taken together, the findings emphasize the distributed nature of emotional prosody recognition. Performance on our task draws on multiple cognitive processes, subserved by multiple neural structures.
Cortical sectors in the right hemisphere appear to be
especially critical, and their location is consistent with
the hypothesis that the recognition of emotion in others requires the perceiver to reconstruct images of
somatic and motoric components that would normally
References
Adolphs, R., Damasio, H., Tranel, D., Cooper, G., & Damasio, A. R. (2000). A role for somatosensory cortices in the
visual recognition of emotion as revealed by 3-D lesion
mapping. The Journal of Neuroscience, 20, 2683–2690.
Adolphs, R., Damasio, H., Tranel, D., & Damasio, A. R.
(1996). Cortical systems for the recognition of emotion in
facial expressions. The Journal of Neuroscience, 16,
7678–7687.
Adolphs, R., Schul, R., & Tranel, D. (1997). Intact recognition of facial emotion in Parkinson's disease. Neuropsychology, 12, 253–258.
Adolphs, R., & Tranel, D. (1999). Intact recognition of
emotional prosody following amygdala damage. Neuropsychologia, 37, 1285–1292.
Adolphs, R., Tranel, D., & Damasio, H. (2001). Emotion
recognition from faces and prosody following temporal
lobectomy. Neuropsychology, 15, 396–404.
Adolphs, R., Tranel, D., Damasio, H., & Damasio, A.
(1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala.
Nature, 372, 669–672.
Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. R.
(1995). Fear and the human amygdala. The Journal of
Neuroscience, 15, 5879–5892.
Adolphs, R., Tranel, D., Hamann, S., Young, A., Calder, A.,
Anderson, A., et al. (1999). Recognition of facial emotion
in nine subjects with bilateral amygdala damage. Neuropsychologia, 37, 1111–1117.
Anderson, A. K., & Phelps, E. A. (1998). Intact recognition
of vocal expressions of fear following bilateral lesions of
the human amygdala. NeuroReport, 9, 3607–3613.
Anderson, A. K., Spencer, D. D., Fulbright, R. K., & Phelps,
E. A. (2000). Contribution of the anteromedial temporal
lobes to the evaluation of facial emotion. Neuropsychology, 14, 526–536.
Banse, R., & Scherer, K. R. (1996). Acoustic profiles in
vocal emotion expression. Journal of Personality and Social Psychology, 70, 614–636.
Barrash, J., Damasio, H., Adolphs, R., & Tranel, D. (2000).
The neuroanatomical correlates of route learning impairment. Neuropsychologia, 38, 820–836.
Barrett, A. M., Crucian, G. P., Raymer, A. M., & Heilman,
K. M. (1999). Spared comprehension of emotional
prosody in a patient with global aphasia. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 12,
117–120.
Beck, A. T. (1987). Beck Depression Inventory. San Antonio, TX: Psychological Corporation.
Behrens, S. J. (1985). The perception of stress and lateralization of prosody. Brain and Language, 26, 332–348.
Belin, P., Zatorre, R. J., Lafaille, P., Ahad, P., & Pike, B.
(2000). Voice selective areas in human auditory cortex.
Nature, 403, 309–312.
Blonder, L. X., Bowers, D., & Heilman, K. (1991). The role
of the right hemisphere in emotional communication.
Brain, 114, 1115–1127.
Blood, A. J., Zatorre, R. J., Bermudez, P., & Evans, A. C.
(1999). Emotional responses to pleasant and unpleasant
music correlate with activity in paralimbic brain regions.
Nature Neuroscience, 2, 382–387.
Borod, J. (1992). Interhemispheric and intrahemispheric
control of emotion: A focus on unilateral brain damage.
Journal of Consulting and Clinical Psychology, 60, 339–348.
Borod, J. C., Obler, L. K., Erhan, H. M., Grunwald, I. S.,
Cicero, B. A., Welkowitz, J., et al. (1998). Right hemisphere emotional perception: Evidence across multiple
channels. Neuropsychology, 12, 446–458.
Bowers, D., Bauer, R. M., & Heilman, K. M. (1993). The
nonverbal affect lexicon: Theoretical perspectives from
neuropsychological studies of affect perception. Neuropsychology, 7, 433–444.
Bowers, D., Coslett, H. B., Bauer, R. M., Speedie, L. J., &
Heilman, K. H. (1987). Comprehension of emotional
prosody following unilateral hemispheric lesions: Pro-