
Emotion

2002, Vol. 2, No. 1, 23–51

Copyright 2002 by the American Psychological Association, Inc.


1528-3542/02/$5.00 DOI: 10.1037//1528-3542.2.1.23

Neural Systems for Recognition of Emotional Prosody:


A 3-D Lesion Study
Ralph Adolphs, Hanna Damasio, and Daniel Tranel


University of Iowa College of Medicine

Which brain regions are associated with recognition of emotional prosody? Are these distinct from those for recognition of facial expression? These issues were investigated by mapping the overlaps of co-registered lesions from 66 brain-damaged participants as a function of their performance in rating basic emotions. It was found that recognizing emotions from prosody draws on the right frontoparietal operculum, the bilateral frontal pole, and the left frontal operculum. Recognizing emotions from prosody and facial expressions draws on the right frontoparietal cortex, which may be important in reconstructing aspects of the emotion signaled by the stimulus. Furthermore, there were regions in the left and right temporal lobes that contributed disproportionately to recognition of emotion from faces or prosody, respectively.

The recognition of emotions in other persons is an aspect of human social communication that draws on diverse sensory channels, including both visual and auditory modalities. Investigations of emotion recognition from prosody, facial expressions, and lexical stimuli have benefited from the study of brain-damaged individuals, whose impairments can help to reveal the neural structures that participate in emotion recognition. We used a novel neuroanatomical technique, the volumetric mapping of lesions from multiple individuals, to explore the component structures that participate in the distributed neural system by which humans recognize emotion from prosody. We first review prior investigations of emotional prosody and give a brief overview of related issues, before outlining the particular aims of this study.

Recognizing Emotional Prosody


Emotion can be signaled by a variety of auditory
stimuli, including voice and music in humans and
species-specific vocalizations in other animals. A
large body of research is investigating which auditory features these different types of stimuli share and which features might reliably signal certain emotional meanings (see
Scherer, 1995, for review). Prosody (the nonlexical
component of speech) can communicate diverse kinds
of information, including information about the emotional state of a person. There is evidence that recognition of emotion from prosody is accomplished similarly across different cultures (Scherer, Banse, &
Wallbott, 2001; van Bezooijen, Otto, & Heenan,
1983), as is the case for the recognition of emotion
from facial expressions (Ekman, 1973; Ekman &
Friesen, 1975), and that this ability appears to emerge
consistently by the first 6 years of life (Matsumoto &
Kishimoto, 1983). These findings provide strong support for the view that innate mechanisms contribute
substantially to processing emotional prosody, and
hence presumably to the neural implementation of
such processing.
Ralph Adolphs, Hanna Damasio, and Daniel Tranel, Department of Neurology, University of Iowa College of Medicine.
This research was supported by a National Institute of Neurological Disorders and Stroke Program Project Grant to Antonio R. Damasio and by grants from the National Institute of Mental Health, the Sloan Foundation, and the EJLB Foundation to Ralph Adolphs. We thank Antonio Damasio for helpful comments on earlier versions of the article, Jeremy Nath for help with testing participants, and Denise Krutzfeldt for help in scheduling their visits.
For more information on this topic, go to http://www.medicine.uiowa.edu/adolphs.
Correspondence concerning this article should be addressed to Ralph Adolphs, Department of Neurology, University of Iowa College of Medicine, 200 Hawkins Drive, Iowa City, Iowa 52242. E-mail: ralph-adolphs@uiowa.edu

A comprehensive review of studies published prior to the 1980s (Scherer, 1981) found consistent evidence that most basic emotions can be identified from


vocal stimuli at an accuracy well above chance (about
60% on average, depending on the emotion and the
study). This finding has been corroborated by subsequent investigations, all of which found accuracies
greater than or equal to 50% (Scherer, Banse, Wallbott, & Goldbeck, 1991; van Bezooijen et al., 1983),
whereas chance is at 10–20%. Notable across all these
reviews is the finding that disgust is nearly impossible
to recognize accurately from prosody, a finding we
also encountered in piloting our stimuli and on the
basis of which we decided to exclude disgust as an
emotion in the present study (see the Method below).
Analyses of the specific auditory features of a
stimulus that might contribute to the identification of
particular emotions began in the 1960s (Lieberman &
Michaels, 1962), and have advanced in more recent
years with the availability of computer-aided synthesis and analysis tools. These studies have provided
evidence for both the idea that one or two simple cues
can be used by listeners to provide much of the information about the emotion in a prosodic stimulus
(such as F0 range and amplitude variation), and the
idea that the total number of cues available that can
influence emotion judgments is large and features
complex interactions (Scherer et al., 1991; Scherer &
Oshinsky, 1977). The further question of what brain
regions might be involved in processing particular
auditory features that can signal emotions is now being addressed in some studies.
The right hemisphere has been found to be disproportionately important for perceiving and recognizing
emotional prosody in most studies (Blonder, Bowers,
& Heilman, 1991; Bowers, Coslett, Bauer, Speedie, &
Heilman, 1987; Heilman, Bowers, Speedie, & Coslett,
1984) but not unanimously so (Cancelliere & Kertesz,
1990; Pell & Baum, 1997; van Lancker & Sidtis,
1992). There appears to be some right lateralized processing of the voice already at the level of auditory
cortex (Belin, Zatorre, Lafaille, Ahad, & Pike, 2000),
as well as for the acoustic analysis of melodies
(Zatorre, Evans, & Meyer, 1994). Particularly compelling was a study by Barrett, Crucian, Raymer, and
Heilman (1999) on a globally aphasic patient with a
large left hemisphere lesion: The patient's ability to
recognize emotional prosody appeared normal on
matching tasks despite a severe inability to process
propositional speech. In a study of 27 patients with
damage in the right hemisphere, and of 25 with damage in the left, Schmitt, Hartje, and Willmes (1997)
found disproportionately impaired recognition of
emotion from facial and prosodic cues in the right-

hemisphere group when judging a large inventory of


multimodal stimuli (videoclips); however, the stimuli
and task used in this study made serious attentional
demands on the participants, which could have contributed to the impairments reported. A similarly large
study by Peper and Irle (1997a) also supported the
idea that the right hemisphere is disproportionately
important to process emotion from prosody, with an
emphasis on processing emotional arousal. The above
findings and others have been used to argue for a set
of nonverbal affect processes that subserve social
communication in the right hemisphere (Bowers,
Bauer, & Heilman, 1993; Ross, 1985) and complement a large literature that shows impairments in understanding humor and other pragmatic conversational content following right-hemisphere damage
(Brownell, Michel, Powelson, & Gardner, 1983;
Kaplan, Brownell, Jacobs, & Gardner, 1990; Shammi
& Stuss, 1999; Wapner, Hamby, & Gardner, 1981).
The role of the right hemisphere in processing emotional prosody is corroborated by findings from studies using evoked potentials (Pihan, 1997) and functional imaging (Buchanan et al., 2000; George et al.,
1996; Imaizumi et al., 1997; Morris, Scott, & Dolan,
1999; Rama et al., 2001).
However, the story is more complicated than this.
Although the right hemisphere may in a global sense
contribute more to the perception of emotional
prosody than does the left hemisphere, and moreover
appears to make such a disproportionate contribution
for all emotions and across multiple communication
channels (Borod et al., 1998), it is now clear that both
hemispheres work together in processing emotional
prosody (Behrens, 1985; Buchanan et al., 2000; Morris et al., 1999; Pell, 1998; Ross, Stark, & Yenkosky,
1997). Lateralization of prosody perception toward
the right hemisphere appears to be most evident in
regard to emotional prosody as opposed to propositional prosody (e.g., the prosody involved in asking a
question or stating an affirmation; Heilman et al.,
1984), or recognition of emotion from the semantic
content of sentences (Blonder et al., 1991).
It has become clear that the recognition of emotion
from the voice draws on multiple prosodic cues,
which are in turn processed by systems that are neuroanatomically partly segregated toward one or the
other hemisphere (Ryalls, 1988; van Lancker &
Sidtis, 1992); as such, emotional prosody perception
appears to be much less clearly a lateralized process
than does the perception of propositional speech, contrary to earlier proposals (Ross, 1981, 1985). Recent
investigations have demonstrated clearly that the left hemisphere makes important contributions here.


Whereas pitch information (a major cue in emotional
prosody) appears to be processed disproportionately
by the right hemisphere (van Lancker & Sidtis, 1992;
see also Pell & Baum, 1997, for debate on the issue),
processing of prosodic cues such as emphasis or stress instead appears to rely more on left-hemisphere regions (Pell, 1998; Pell & Baum, 1997), although
this may depend on the lexical nature of the stimuli
(Behrens, 1985). Although our study did not attempt to decompose the stimulus into its constituent cues, this would be an important direction to
take in future studies with brain-damaged participants. Also important is the suggestion by Ross
et al. (1997) that white matter lesions in the frontal
lobe, serving to disconnect communication between
anterior regions of the two cerebral hemispheres, contribute to impairments in processing prosodic information, an idea that, if borne out, could also account
for some of the observed deficits in the present and
other studies of patients with damage in the right frontal cortex.
Apart from the issue of hemispheric specialization,
it is interesting to investigate which specific brain
regions might be most critical to the recognition of
emotional prosody. Although a constellation of impairments in perceiving and producing emotional
prosody has been proposed as a feature of acute infarction of the right inferior middle cerebral artery
(Darby, 1993), studies of patients with chronic lesions, as well as functional imaging studies, have provided more detail and have consistently indicated regions in the right hemisphere that are more anterior.
Starkstein, Federoff, Price, Leiguarda, and Robinson
(1994) found evidence for the involvement of the
right frontoparietal regions, among others; Breitenstein, Daum, and Ackermann (1998) found evidence
for the involvement of the right frontal regions; and
Hornak, Rolls, and Wade (1996) found that orbital
frontal regions, predominantly on the right, are critical
to recognize emotional prosody. Functional imaging
studies have found significant blood flow changes in
the right dorsal and ventral prefrontal cortex (George
et al., 1996; Morris et al., 1999), as well as in the right
insula and right amygdala, when people listened to
emotional prosodic stimuli. A recent fMRI study of
emotional prosody examined the brain regions involved in considerable detail and found that processing emotional prosody activated regions in the right
hemisphere that included the right prefrontal cortex
and the right anterior parietal cortex, in addition to
regions in the left frontal lobe (Buchanan et al., 2000).


Across functional imaging studies, there are consistent activations in right inferior frontal regions
(Buchanan et al., 2000; George et al., 1996; Imaizumi
et al., 1997), regions also implicated in working
memory for prosody (Rama et al., 2001), as well as
occasional reports of activation in the left middle
frontal gyrus (Imaizumi et al., 1997). Affective processing of auditory stimuli other than prosody has
been shown to engage more orbital regions of the
prefrontal cortex (Blood, Zatorre, Bermudez, &
Evans, 1999; Frey, Kostopoulos, & Petrides, 2000).
However, processing emotions draws not only on
cortex; it also appears to involve subcortical structures. A subcortical structure that has emerged as potentially important in emotional prosody recognition
is the basal ganglia. In a study involving 46 brain-damaged participants, and using a lesion overlap
analysis similar in spirit to ours, Cancelliere and
Kertesz (1990) found evidence that damage to the
basal ganglia, when present in addition to cortical
damage, was often associated with impaired prosody
recognition. Two comments are important to make
regarding this study. First, unlike the patients used in
our study, all patients in the study by Cancelliere and
Kertesz (1990) were studied in the acute epoch, likely
producing a rather different constellation of impairments than would have been obtained had they been
studied chronically. Second, the patients in that study
were not screened for confusional or attentional impairments, a fact the authors acknowledged in their
discussion. The study also did not report basic background neuropsychological information, such as IQ or
audiogram screening for the patients, leaving open the
possibility that the results could have been influenced
by these confounds. Nonetheless, the study is suggestive of a role for the basal ganglia, in conjunction with
cortical regions, in processing emotional prosody, a
conclusion also supported by functional imaging studies (Morris et al., 1999).
Findings consistent with the above study have been
documented in a recent study by Breitenstein et al.
(1998), which included both patients with focal cortical lesions and patients with Parkinson's disease. The study found that advanced stages of Parkinson's disease and focal lesions in the right frontal
regions were the only pathologies that resulted in impaired recognition of emotional prosody; the authors
interpreted the findings as evidence for a more distributed system, comprising frontal and striatal circuits, that participated in processing emotional
prosody. It is thus conceivable that damage to the
basal ganglia, under certain circumstances (advanced Parkinsonism or acute stroke), may result in an imbalance in the right frontal regions with which the basal
ganglia are connected; the resulting fronto-striatal
dysfunction may contribute to the impairments observed. However, this leaves open the possibility, also
suggested by other studies of Parkinson's disease
(Adolphs, Schul, & Tranel, 1997), that chronic and
relatively restricted lesions to the basal ganglia are not
sufficient, by themselves, to result in impaired recognition of emotion.
A second subcortical structure that may participate
in recognizing emotional prosody is the amygdala.
However, the evidence for such a role is actually
much more solid in the domain of facial emotion than
it is in the domain of prosody. Although some lesion
studies have reported impaired recognition of emotional prosody following bilateral amygdala damage
(Scott et al., 1997), others have failed to find such an
effect (Adolphs & Tranel, 1999; Anderson & Phelps,
1998). There are a few recent functional imaging
studies that have reported activation of the amygdala
when people listen to emotional auditory stimuli
(Morris et al., 1999; Phillips et al., 1998), but the topic
needs further investigation.
Taken together, then, the studies to date point to the
following conclusions. First, recognizing emotional
prosody draws on multiple structures distributed between both the left and right hemispheres; second, the
roles of these structures are not all equal but may be
most apparent in processing specific auditory features
that provide cues for recognizing the emotion; third,
despite the distributed nature of the processing, the
right hemisphere, and in particular the right inferior
frontal regions, appear to be the most critical component of the system, working together with more posterior regions in the right hemisphere, the left frontal
regions, and subcortical structures, all interconnected
by white matter.

Recognizing Facial Emotion


The neural structures that subserve recognition of
facial affect have been investigated using both lesion and functional imaging methods. Several structures, including the amygdala (Adolphs et al., 1999;
Adolphs, Tranel, Damasio, & Damasio, 1994; Broks
et al., 1998; Young, Hellawell, Van de Wal, &
Johnson, 1996), orbitofrontal cortex (Hornak, Rolls,
& Wade, 1996), basal ganglia (Cancelliere & Kertesz,
1990), and regions in the right neocortex (Adolphs,
Damasio, Tranel, Cooper, & Damasio, 2000) have all

been shown to contribute to recognizing basic emotions from facial expressions. Much of the data from
lesion studies has come from single or multiple case
studies, although a few studies have examined neuroanatomical information from groups of lesion patients. Of particular relevance to the present investigation, in a prior study (Adolphs et al., 2000), we
investigated the recognition of basic emotions from
human facial expressions in 108 individuals with focal brain damage. That study revealed that somatosensory-related cortices in the right hemisphere were
critical to recognize emotion from facial expressions,
a finding we interpreted as evidence that the recognition of emotion in others requires the reconstruction in
the perceiver of somatosensory representations that
simulate what the signaled emotion would feel like
(cf. the Discussion for more details on this topic).
That finding, however, left open a critical question
regarding generality: Would similar regions be involved when recognizing emotion from stimuli other
than facial expressions, such as emotional prosody?

Theories of Lateralized Processing of Emotion


Both in the case of prosody and in the case of facial
expressions, the contributions made by the left and
right hemispheres to processing emotion in general,
and to processing specific emotions or aspects of
emotion in particular, have been the topic of much
debate (for reviews, see Borod, 1992; Davidson &
Hugdahl, 1995). Two main theories have been put
forth: that the right hemisphere participates in processing all emotions (the right hemisphere hypothesis)
or that the right hemisphere is relatively specialized to
process negative emotions, whereas the left hemisphere is relatively specialized to process positive
emotions (the valence hypothesis; for reviews, see
Borod et al., 1998; Canli, 1999). To date, there is
evidence that supports both the right hemisphere hypothesis (e.g., Borod et al., 1998; Burt & Perrett,
1997) and the valence hypothesis (e.g., Canli, 1999;
Jansari, Tranel, & Adolphs, 2000; Reuter-Lorentz &
Davidson, 1981). Modifications to these two alternatives propose that the valence hypothesis may indeed
hold for the experience and perhaps the expression of
emotions, but that the perception of emotion is better
described according to the right hemisphere hypothesis (Borod, 1992; Bryden, 1982; Canli, 1999; Davidson, 1993).
Taken together, the findings from normal individuals and from patients with unilateral brain damage

This document is copyrighted by the American Psychological Association or one of its allied publishers.
This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.

NEURAL SYSTEMS FOR RECOGNITION OF EMOTIONAL PROSODY

broadly support the idea that the left and right hemispheres are differentially important in processing
emotion, but they are not unanimous in providing support for either the valence or the right hemisphere
hypotheses. The bulk of the data supports the idea that
the right hemisphere plays a disproportionate role in
perceiving emotions of negative valence, but a clear
asymmetry in perceiving emotions of positive valence
has not emerged.
It has been suggested that particular dimensions of
emotion, such as valence and arousal, might be processed by distinct brain systems. There is recent support for the hypothesis that the right hemisphere may
be especially important for recognizing the arousal
dimension in both prosodic (Peper & Irle, 1997a) and
facial (Peper & Irle, 1997b) expressions of emotion,
findings consistent with the right hemisphere's demonstrated role in mediating arousal responses to a variety of emotional stimuli (Morrow, Vrtunski, Kim, &
Boller, 1981; Tranel & Damasio, 1994; Zoccolotti,
Scabini, & Violani, 1982).

Aims of the Present Study

As reviewed above, earlier work steered theories in the direction of a clear right-hemisphere lateralization for processing emotional prosody, analogous to the left-hemisphere specialization for propositional language (e.g., Ross, 1981), whereas later models have argued for two key modifications of this view. First, it appears that both the right and left hemispheres participate in processing prosody, although they likely make different contributions. Second, it seems clear that perception of prosody is not a monolithic process but rather draws on a complex set of multiple cues provided by the stimulus; moreover, the particular cues a listener may use to make judgments about the stimulus can vary depending on the nature of the stimulus, the demands of the task, and perhaps even the idiosyncratic strategies used by that person. The upshot of these more recent developments is not to discard the right hemisphere hypothesis in its entirety, but to acknowledge that, whereas the right hemisphere may be, on many tasks, more critical than the left in processing emotional prosody, a comprehensive account of how we judge emotion from prosodic stimuli points toward a set of processes implemented in a distributed, bihemispheric neural system. The present study aims to explore such a system by elucidating some of its component structures. We limit ourselves to a particular set of stimuli and a single task, and we focus on cortical regions, using the lesion method in a large sample of well-characterized individuals with focal brain damage.

We hypothesized that damage to the right hemisphere would impair recognition of emotion from prosody more than damage to the left hemisphere would and, furthermore, that damage to the right frontal cortices would result in the most severe impairments. To test these hypotheses and to investigate additional specific regions that might contribute to such impairment, we mapped lesions from 66 brain-damaged participants as a function of their task performances onto a single, standardized brain volume (the method also was used in Adolphs et al., 2000, for analyzing regions involved in recognizing emotion from facial expressions). Furthermore, we wanted to identify which structures might be important for the recognition of emotion across modalities and which might be specific to a given modality. We addressed this issue in 46 brain-damaged participants by comparing each person's recognition of emotion from prosody to their recognition of emotion from facial expression.

Method

Participants
We tested 66 participants with focal brain damage
and 14 controls whose demographic and neuropsychological background data are given in Table 1. Of
the brain-damaged participants, 25 had lesions in the
left hemisphere, 26 in the right hemisphere, and 15
had bilateral lesions in homologous regions (either
bilateral prefrontal or bilateral occipital). A summary
of the sampling density in each brain region is given
in Table 2, indicating that we sampled the entire cortex but that some regions were sampled more densely
than others. We attempted to include primarily individuals with cortical damage, and we excluded those
who had mostly white-matter lesions. This criterion
also led to a relatively small number of individuals
with damage to the basal ganglia.
All brain-damaged participants were selected from
the patient registry of the Division of Cognitive Neuroscience and Behavioral Neurology at the University
of Iowa School of Medicine and had been fully characterized neuropsychologically (by Daniel Tranel; cf.
Tranel, 1996) and neuroanatomically (by Hanna
Damasio; cf. H. Damasio & Frank, 1992; Frank,
Damasio, & Grabowski, 1997). They were carefully
screened to avoid the inclusion of individuals with
impairments that might confound possible impaired


Table 1
Demographics and Background Neuropsychology

Variable               Bilateral     Left          Right         Normal
Gender (F/M)           8/7           10/15         11/15         7/7
Age (M/SD)             58/9          46/16         47/16         52/16
Education (M/SD)       12/3          13/3          14/2          13/2
Verbal IQ (M/SD)       99/16         99/14         100/12        —
Perform. IQ (M/SD)     105/21        107/13        95/15         —
Aphasia (M/SD)         0.4/0.8       0.3/0.5       0             —
Depression (M/SD)      0.3/0.6       0.4/0.7       0.3/0.4       —
Etiology               5tum/10cva    16cva/9tlob   18cva/8tlob   —
Onset–test (M/SD)      8.7/5         10.0/5        6.8/4         —

Note. Groups are classified according to lesion side: bilateral, left, right, or normal control. Given are gender ratio (F = female, M = male); age; verbal and performance (Perform.) IQ from either the revised or the third edition of the Wechsler Adult Intelligence Scale (Wechsler, 1981); aphasia, as a composite measure of residual aphasia, which was evaluated by Daniel Tranel from standardized neuropsychological tasks (Tranel, 1996) on a scale of 0 (none) to 2 (severe); depression, as a composite measure of depression, which was evaluated by Daniel Tranel from two neuropsychological tasks, the Beck Depression Inventory (Beck, 1987) and the Minnesota Multiphasic Personality Inventory (Greene, 1980), on a scale of 0 (none) to 2 (severe); etiology of the lesion in terms of three classifications: tumor resection (tum), cerebrovascular accident (cva), or temporal lobectomy (tlob); and years between the onset of the lesion and the testing date. Dashes indicate that normal participants were not assessed for IQ, aphasia, or depression.

performance on our experimental task. We excluded brain-damaged participants who were left-handed; the mean handedness score of all 66 participants was +98.7 (SD = 5.9), ranging from scores of +80 (one individual) to +100 (58 individuals; assessments according to the Oldfield–Geschwind handedness questionnaire in all cases). All brain-damaged participants also conformed to the inclusion criteria of the patient registry: They had focal, chronic (> 4 months), stable, acquired lesions that could be clearly demarcated on MR or CT scans. All brain-damaged participants had normal attentional abilities, had IQs in the normal range, were not demented, and were not severely depressed or severely aphasic.

Table 2
Neuroanatomical Distribution of Sampled Regions by
Hemisphere and by Lobe
Region          No. of individuals
Left
  Frontal       17
  Parietal      6
  Temporal      13
  Occipital     7
Right
  Frontal       13
  Parietal      9
  Temporal      10
  Occipital     5

Note. Individuals with bilateral lesions are included in both left and right hemisphere groups; consequently the total number in the table exceeds 66.

Participants had lesions


resulting from stroke, meningioma resection, or temporal lobectomy for the treatment of epilepsy (see
Table 1). All participants, brain-damaged and normal,
had normal auditory acuity, assessed as a function of
frequency by measuring monaural hearing thresholds
for tones (250, 500, 750, 1000, 1500, 2000, 4000,
6000, 8000 Hz) with a calibrated audiometer (model
MA25, Maico Hearing Instruments, Minneapolis,
MN). We excluded individuals who had thresholds
greater than 40 dB at any frequency.
The second part of this study used a subset of 46 of
the brain-damaged participants, all of whom had previously participated in tasks of emotion recognition
from facial expressions (from Adolphs et al., 2000).
All participants had given informed consent to participate in these studies as approved by the Human
Subjects Committee of the University of Iowa.

Stimuli and Task


Stimulus construction. We used a task and stimuli
identical to those we have used in prior studies of
emotional prosody (Adolphs & Tranel, 1999; Adolphs,
Tranel, & Damasio, 2001). The same four semantically neutral English sentences were spoken by the
same female voice for each of five different prosodic
emotions. It is important to clarify the exact emotion
intended because some labels for emotions can be
ambiguous (Scherer, 1986). We used happiness (joy),
sadness, hot anger (high arousal anger), fear, and a happy surprise. We used the following four sentences: "Men play football," "There are trees in the forest," "This is my pencil," and "People read books."
To verify that these sentences were indeed judged to
be semantically neutral, we asked 5 independent normal participants to rate written versions of these sentences, presented together with 30 other and more
emotional sentences, on a 4-point scale (0 = entirely neutral; 3 = extremely emotional); all of the raters
gave ratings of 0 to each of these four sentences. We
chose a particular female speaker, on the basis of the
apparent quality of her prosody, after informal piloting with several of the staff in our department. The
female speaker was one of our staff unfamiliar with
any of the hypotheses of the study and had no formal
background in voice training. She was instructed to
produce sentences with the most clear and intense
emotion possible; although we discussed the intended
emotions with her at length, no particular emotional
scenarios were given to her to produce the stimuli.
After several practice sessions with the experimenters, she privately read aloud and recorded the four
above sentences in each of the five different emotional tones, in four separate recording sessions (thus
yielding four sets of 20 stimuli each; she also produced stimuli that were intended to signal disgust initially, but we omitted these from further inclusion
because they could not be recognized reliably). Final
stimuli were chosen by the experimenters from her
four sets on the basis of the intensity and clarity of the
emotion conveyed as well as the overall auditory
quality of the particular sample recorded. This yielded
a final total of 20 recorded sentences (the 4 sentences × the 5 emotions used), which were subsequently digitized at 22 kHz and normalized for average amplitude. The amplitude normalization was done to avoid
the possibility that participants could deduce the correct emotion simply by reasoning from perceived
loudness (see the Discussion for more details on this
issue). Sentences were played in randomized order to

participants at a level they indicated was sufficiently


loud. Participants themselves set the level of loudness
on three sample sentences at the beginning of the
experiment; most participants used a similar loudness
setting, although we did not measure this quantitatively. Each sentence had a duration of approximately
3 s, with a 10-s interstimulus interval. All stimuli were
presented in free field over stereo speakers (Harman/
Kardon HK195) connected to a Macintosh computer.
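For illustration, amplitude normalization of the kind described above can be sketched in a few lines. This is our reconstruction, not the authors' procedure: the article states only that stimuli were normalized for average amplitude, so the RMS-based approach and the target value below are assumptions.

```python
import numpy as np

def normalize_rms(signal: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    """Scale a mono signal so its root-mean-square amplitude equals
    target_rms (a hypothetical target; the article states only that
    stimuli were normalized for average amplitude)."""
    rms = np.sqrt(np.mean(signal ** 2))
    if rms == 0:
        return signal  # silent input; nothing to scale
    return signal * (target_rms / rms)

# Applying the same target to every digitized sentence prevents
# perceived loudness from cueing the intended emotion.
# normalized = [normalize_rms(s) for s in digitized_sentences]
```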
Stimulus validation. To verify the efficacy of the
stimuli in signaling the intended emotion, 10 normal
individuals (a subset of the 14 who also participated in
the other tasks) were asked to match the emotion signaled in the stimulus with one of five words for the
intended emotions (happy, sad, angry, afraid, and surprised). The normative data from this task are given in
Table 3, demonstrating that the stimuli reliably signaled the intended emotion. For these 10 normal participants, this matching task was carried out several
months before the data on the main experimental tasks
reported below were collected.
Experimental task. In a procedure identical to one
previously used with facial expressions (Adolphs,
Damasio, Tranel, & Damasio, 1996; Adolphs et
al., 1999, 2000; Adolphs, Tranel, Damasio, & Damasio, 1995), all participants, normal and brain-damaged, heard the 20 sentences eight times and rated
them on a scale of 0 (not at all) to 5 (very much), with
respect to the labels awake, happy, sleepy, sad, angry,
afraid, disgusted, and surprised (one label for each of
the eight blocks). The reason that we chose the additional labels sleepy, awake, and disgusted was to include information about more emotions (disgust) and
about an arousal dimension (sleepy/awake), and also
to provide a longer vector of ratings against which an individual's ratings could be correlated, thus reducing the sensitivity of the correlation to a possible outlier rating.

Table 3
Normative Data for Stimuli: Frequencies With Which Stimuli Were Matched to a Given Label by 10 Controls

                         Label chosen
Stimulus      Happy    Sad      Angry    Afraid   Surprised
Happy         0.95     0.00     0.00     0.00     0.05
Sad           0.00     0.88     0.00     0.13     0.00
Angry         0.00     0.18     0.83     0.00     0.00
Afraid        0.03     0.00     0.00     0.70     0.28
Surprised     0.33     0.00     0.00     0.00     0.68

Stimuli were rerandomized, using the program Psyscope on a Macintosh, each time a new block was


presented (the program automatically records the random order of the stimuli). Furthermore, the order of
the labels on which the 20 stimuli were rated was
randomized (by the experimenter) between participants. Thus, a participant might hear all the 20 stimuli
and rate them on, for example, awake, then hear them
again in a new random order and rate them on, for
example, sad, and so on.
Participants were instructed to pay close attention
to the tone of the voice and to attempt to figure out
how the person saying the sentence might be feeling;
they were told that they would hear the same semantically neutral sentences over and over again and that
they should ignore the content of what the person was
saying and concentrate on her tone of voice. There
was no time limit in giving the ratings (no one required more than the 10-s interstimulus interval provided by our default program settings).
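The two levels of randomization in this procedure (stimulus order rerandomized for every block, label order randomized between participants) can be sketched as follows. This is a schematic reconstruction, not the authors' Psyscope script; all names below are ours.

```python
import random

N_STIMULI = 20  # the 4 sentences x the 5 emotions
LABELS = ["awake", "happy", "sleepy", "sad",
          "angry", "afraid", "disgusted", "surprised"]

def build_session(seed):
    """Return a list of (label, stimulus_order) pairs, one per block.
    Label order is randomized between participants; stimulus order is
    rerandomized for every block."""
    rng = random.Random(seed)
    labels = LABELS[:]
    rng.shuffle(labels)              # order of rating labels
    blocks = []
    for label in labels:
        order = list(range(N_STIMULI))
        rng.shuffle(order)           # fresh stimulus order each block
        blocks.append((label, order))
    return blocks
```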

Analyses of Data
Our approach required the analysis of neuroanatomical data pertaining to the sites of lesions as a
function of the task performances of a given individual. Essentially, we asked how lesion location
would systematically covary with task performance. It
should be noted that we do not make any formal
claims about impairment in this study. We do not
specify a particular cut-off performance relative to
normal performance as impaired; rather, we simply
rank-ordered the scores from all brain-damaged individuals and used a median-split to contrast low performance scores with high performance scores to
extract possible systematic relations between performance and neuroanatomical distribution of lesion location. To avoid confusion, it should be kept in mind
that we use the terms worse or better as meaning
lower performance score or higher performance
score in relation to the rest of the brain-damaged
individuals. Thus, stating that damage to a certain
brain region resulted in worse performance means, in
our context, that individuals who had lesions in that
brain region gave lower performance scores than did
individuals who had lesions elsewhere.
Analysis of task performance. We obtained three
different types of data from our task: (1) participants' ratings of stimuli on the intended emotion label, (2) the correlations of participants' ratings across all eight
labels with the mean ratings given by controls, and (3)
derived accuracy measures obtained from the maximal
intensity rating given on a particular emotion label.

For (1), we used the mean of a participant's ratings


given to all four stimuli within an emotion category;
for example, the mean ratings on the label happy of all
four happy prosodic stimuli. This measure describes
the intensity of the prototypical emotion participants
judged each stimulus category to depict and gives an
index of the sensitivity to that particular emotion;
however, it is potentially subject to both floor and
ceiling effects.
For (2), we correlated the ratings a participant gave
across all eight labels for each stimulus, with the
mean ratings given to that stimulus by the 14 controls.
This method is identical to one we and others have
previously used to analyze data from recognition of
emotion in facial expressions. This measure describes
the entire range of emotions judged for each stimulus
and is not subject to ceiling or floor effects. For all
averaging, we used Fisher Z-transforms to normalize
distributions, as described previously (Adolphs et al.,
1994, 1995).
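A minimal sketch of this correlation measure, assuming ratings are stored as stimulus-by-label arrays (the array names and shapes are our assumptions):

```python
import numpy as np

def correlation_score(subject_ratings, control_means):
    """subject_ratings, control_means: (n_stimuli, 8) arrays of ratings
    on the eight labels. For each stimulus, the subject's 8-label
    profile is correlated with the control mean profile; correlations
    are averaged via Fisher Z-transform and back-transformed."""
    zs = []
    for s, c in zip(subject_ratings, control_means):
        r = np.corrcoef(s, c)[0, 1]
        zs.append(np.arctanh(np.clip(r, -0.999, 0.999)))  # Fisher Z
    return float(np.tanh(np.mean(zs)))
```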
For (3), we determined for each stimulus whether
the maximal intensity rating was given for the correct
(intended) emotion label (in which case the individual
obtained a score of 1) or whether it was not (in which
case the individual obtained a score of 0), similar to
what has been used in other studies (e.g., Scherer et
al., 1991). Ratings on the labels awake, sleepy, and
disgusted were omitted in this analysis because these
labels did not correspond to any intended prototypical
emotion in the stimuli. Each person's derived accuracy score was then averaged over stimuli within an
emotion category so that individuals obtained scores
on each emotion that were 0, 0.25, 0.5, 0.75, or 1.0,
corresponding to giving the maximal intensity rating
on the correct emotion label to none, one, two, three,
or all four of the stimuli in that emotion category.
Rarely, someone gave the same maximum rating on
more than one emotion label; in this case, ties were
scored as incorrect.
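A sketch of this derived accuracy measure for a single stimulus (the data structure and function name are ours; the scoring rules follow the description above):

```python
EMOTION_LABELS = ["happy", "sad", "angry", "afraid", "surprised"]

def derived_accuracy(ratings, intended):
    """ratings: dict mapping each of the eight labels to an intensity
    for one stimulus. Returns 1 if the single maximal rating falls on
    the intended label, 0 otherwise; ties at the maximum are scored
    as incorrect, and awake/sleepy/disgusted are omitted."""
    vals = {lab: ratings[lab] for lab in EMOTION_LABELS}
    top = max(vals.values())
    winners = [lab for lab, v in vals.items() if v == top]
    return int(winners == [intended])

# Averaging over the four stimuli in a category yields the scores
# 0, 0.25, 0.5, 0.75, or 1.0 described in the text.
```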
Additionally, we derived a fourth performance
measure to investigate participants' performances in
rating prosodic stimuli as compared with their performances in rating facial expressions. This measure was
available for 46 brain-damaged individuals who had
participated both in the present study as well as in a
prior study that used facial expressions (Adolphs et
al., 2000). We derived contrasts between the rank order of brain-damaged participants on measure (2) with
respect to recognizing emotion from prosody, and the
rank order of participants with respect to recognizing
emotion from facial expressions. This within-subject
comparison was simply calculated as (rank order on

faces) − (rank order on prosody) and reflects differential performance in recognizing emotions from
these two classes of stimuli. We also derived a subset
of participants who had low scores on both faces and
prosody by including only those participants who had
the lowest scores (were in the bottom partition, cf.
below) on both measures; this sample consisted of 13
participants.
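The within-subject contrast above is simply a difference of ranks; a minimal sketch (the function name is ours):

```python
from scipy.stats import rankdata

def modality_contrast(face_scores, prosody_scores):
    """One score per participant on measure (2) for each modality.
    Returns (rank order on faces) - (rank order on prosody); positive
    values mean relatively better performance on faces than on prosody
    within this sample."""
    return rankdata(face_scores) - rankdata(prosody_scores)
```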
Analysis of neuroanatomical data. Our approach
in analyzing the data was to group individuals into
partitions on the basis of their performance scores and
to then visualize the lesion overlaps of all individuals
within a given partition. The general approach of
mapping overlaps of lesions onto a common brain has
been used in prior studies of emotional prosody (e.g.,
Cancelliere & Kertesz, 1990). For most of the analyses shown in the figures, we used a median-split
analysis: a contrast of the 50% of individuals with the
lowest scores versus the 50% with the highest scores.
Note that these neuroanatomical analyses are entirely
internal to the sample of brain-damaged individuals,
rather than comparing brain-damaged to normal performance. Consequently, we do not make claims
about the absolute level of impairment but only about
the relative level of performance within the brain-damaged group. The question of interest now was the
following: Might there be regions of the brain that,
when lesioned, were associated with low performance
scores more often than with high performance scores?
Such regions are encoded in red in the figures (see
figure captions for details in each case) and show that
there were more individuals with lesions in that region
who fell into the partition with low performance
scores than those who fell into the partition with high
performance scores. This same approach has been
used successfully in analyzing the neural systems for
recognizing emotion from facial expressions (Adolphs
et al., 2000), for naming actions (Tranel, Adolphs,
Damasio, & Damasio, 2001), for naming concrete entities (H. Damasio, Tranel, Adolphs, Grabowski, &
Damasio, in press), and for spatial memory (Barrash,
Damasio, Adolphs, & Tranel, 2000).
We obtained all images using a method called
MAP-3 (Frank et al., 1997). Briefly, the lesion visible
on each brain-damaged individual's MR or CT scan
was manually transferred onto the corresponding sections of a single, normal, reference brain, and lesions
from multiple participants were summed to obtain lesion density at a given voxel. We divided our sample
of 66 participants into two groups: the 33 with the
lowest and the 33 with the highest mean performance

in recognizing emotion from prosody. This median-split analysis was undertaken for each of the derived
measures that are described in the analyses above. A
lesion density image was generated for each group,
and the two images were then subtracted from one
another. This analysis resulted in images that showed,
for all participants who had lesions at a given voxel,
the difference between the number of participants in
the bottom half of the partition and the number of
participants in the top half of the partition. The analysis revealed particular regions ("hot-spots") within which lesions systematically resulted in lower performance.
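A simplified sketch of this median-split subtraction, with each lesion represented as a binary mask already transferred onto the reference brain (a strong simplification of MAP-3; the array names are ours):

```python
import numpy as np

def difference_image(masks, scores):
    """masks: (n_subjects, x, y, z) array of binary lesion masks on the
    reference brain; scores: one performance score per subject.
    Returns, per voxel, the number of lesions from the bottom half of
    performers minus the number from the top half; positive voxels are
    the 'hot-spots' associated with lower performance."""
    order = np.argsort(scores)
    half = len(scores) // 2
    bottom = masks[order[:half]].sum(axis=0)  # lowest performers
    top = masks[order[half:]].sum(axis=0)     # highest performers
    return bottom - top
```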
To confirm the reliability of the results obtained
from all partitions that used a median split, we repeated all our analyses, as in Figure 2c (shown later),
using only a subset of individuals at either extreme of
the performance range and omitting a group in the
middle (specifically, the 20 with the lowest scores
versus the 20 with the highest scores, omitting the
middle 26). These analyses all showed very similar
overall patterns, although in the case of partitions with
fewer individuals they showed somewhat less detail
and smaller absolute lesion differences because of the
smaller sample size (compare Figures 2a and 2c
[shown later]). We consequently decided to show in
the figures the neuroanatomical analyses that use the
entire sample of participants and a median split.
In some of our analyses (see Figure 1 [shown
later]), examination of our partitions showed that
there were participants in the bottom (lower scores)
partition who had bilateral lesions. To investigate further the hot-spots that could be attributed to unilateral
lesions versus those that might result from bilateral
lesions, we created separate partitions for participants
with unilateral lesions and for participants with bilateral lesions. We proceeded by first using the partition
of all 66 participants as described above. Within the
bottom and top partitions, we then separated participants with bilateral lesions and participants with unilateral lesions.
We carried out statistical analyses on some of the
neuroanatomical results shown in Figures 1 and 2
(shown later) to establish their significance. This is
not possible to do in a global fashion for every voxel
because it would require an extremely large number
of corrections for multiple comparisons. Following
our prior procedure (Adolphs et al., 2000), we calculated probabilities for a few voxels located at the centroid of regions of maximal lesion overlap within a
given region of interest. The probability of obtaining
a given density of lesions in the bottom partition,
given a certain overall sampling density at that same


location (the total number of patients with lesions at
that location, irrespective of their performance), can
be calculated directly from the binomial distribution
as P = [N! / ((N − k)! k!)] p^k q^(N−k), where N equals the
total overlap density of lesions from all participants at
that voxel, k equals the overlap density of lesions
from participants in the bottom partition at that voxel,
and p and q are the probabilities that a subject chosen
at random would end up in the bottom or the top
partition (p = q = 0.5 in our case because we used
a median split). Note that all p values reported from
this calculation in the Results are uncorrected for multiple comparisons.
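This probability can be computed directly from the formula; a minimal sketch (math.comb is used in place of the explicit factorials):

```python
from math import comb

def voxel_probability(N, k, p=0.5):
    """P = [N! / ((N - k)! k!)] * p**k * q**(N - k), with q = 1 - p:
    the probability that exactly k of the N lesions overlapping a
    voxel come from the bottom partition (p = 0.5 for a median split)."""
    q = 1.0 - p
    return comb(N, k) * p**k * q**(N - k)

# Example: of 10 lesions at a voxel, 9 from the bottom half:
# voxel_probability(10, 9) = 10 * 0.5**10, roughly 0.0098.
```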

Results
Brain-damaged individuals had lesions in locations
that were distributed throughout the brain, although,
as expected, some regions were sampled more
densely than others (Table 2). Notably, a disproportionately large number of lesions overlapped in temporal pole, primarily because of the inclusion of several individuals who had lesions resulting from the
same surgical procedure (temporal lobectomy). As
Table 1 shows, there were no differences in background neuropsychological performances among participants with lesions in different hemispheres, with
the exception of Performance IQ, where there was a
significant difference between participants with unilateral lesions in the left and right hemispheres, t(48) = 3.1, p < .005 (cf. Table 1).
Normal individuals chose the correct emotion label
a high proportion of the time when asked to choose
the one of the five words that best fit the stimulus (our
stimulus validation task; see Table 3). There was
more of a spread when they were asked to rate intensity and given additional labels on which to provide
those ratings (derived measure [1] from our experimental task; see Table 4), but these ratings also confirmed that the highest mean ratings were assigned to the emotion intended in the stimulus. Nonetheless, it is clear from Table 4 that every emotion received fairly high ratings on multiple labels, generating the profile of ratings across all the labels we used subsequently to obtain our correlation measure. The data from normal participants were not analyzed any further, and all subsequent results pertain only to the data from the brain-damaged group because we were interested in investigating how performances among the brain-damaged individuals might vary with respect to neuroanatomical location of the lesion.

We first analyzed the brain-damaged participants' ratings on the label of the intended emotion (the measure [1] described in the Method). Brain-damaged individuals as a group gave mean ratings to the stimuli that were very similar to those given by controls but showed a higher variance than controls for every emotion (see Table 5). Of all the emotions, fear received the lowest mean intensity ratings for both normal and brain-damaged individuals. We also correlated the ratings given by brain-damaged participants (across all the rating labels) with the mean ratings given by controls; fear stimuli obtained the lowest correlation in the brain-damaged groups with this measure (see Table 5).

We carried out multiple linear regression analyses with performance on our experimental task as the dependent variable and performance on all other neuropsychological tasks (cf. Table 1) as the predictor variables. When examining the mean rating for all the stimuli, we found a significant regression only with age (older individuals gave lower performances, R2 = 11%, t[64] = 2.56, p < .05), as we have reported previously for recognition of emotion in facial expressions (Adolphs et al., 1996). No other variables, including Performance IQ, Verbal IQ, residual aphasia, or depression, were significant. When examining the mean correlation for all the stimuli, the results were essentially identical; only age was a significant predictor, R2 = 9%, t(64) = 2.37, p < .05.

Table 4
Normative Data for Stimuli: Mean Ratings Given by 14 Normal Participants to Each Class of Stimuli on Each of the Emotion Labels That Were Rated

                                  Label rated
Stimulus     Awake   Happy   Sleepy   Sad    Disgusted   Angry   Scared   Surprised
Happy        3.66    3.23    0.60     0.59   0.63        0.42    0.42     1.62
Sad          2.44    0.31    2.23     3.82   1.63        1.07    1.68     0.86
Angry        3.62    0.82    0.44     1.59   3.18        3.51    0.90     0.94
Afraid       3.46    1.30    0.79     1.77   0.90        0.95    2.75     2.54
Surprised    3.78    2.41    0.46     0.69   0.63        0.38    0.60     3.69



Table 5
Mean Ratings and Standard Deviations of the Intensity on the Intended Emotion Label

                Happy         Sad           Angry         Afraid        Surprised
Group           M     SD      M     SD      M     SD      M     SD      M     SD

Ratings
14 NC           3.20  0.70    3.80  0.50    3.50  0.70    2.80  1.20    3.70  0.80
66 BD           3.40  0.80    3.90  1.00    3.40  1.20    2.80  1.40    4.00  1.30

Correlations
66 BD           0.90  0.20    0.74  0.14    0.79  0.17    0.61  0.24    0.91  0.17
40 BD           0.93  0.05    0.72  0.18    0.80  0.13    0.62  0.16    0.91  0.09

Note. Shown are data concerning Figure 1 for normal and brain-damaged individuals (top) and data concerning Figure 2 as correlations with normal ratings (bottom), from the entire participant sample of 66 as well as from the 40 used for Figure 2c. NC = normal control participants; BD = brain-damaged participants.


Neuroanatomical Distribution of Rating Scores


To investigate the possible role that specific brain
regions might play in generating the ratings an individual gave, we divided brain-damaged individuals
into separate groups according to the ratings they gave
(performance measure [1] described in the Method)
and examined the overlaps of their lesions. The comparison of participants who gave the lowest ratings
with those who gave the highest ratings revealed focal
regions within which lesions were systematically associated with either low or high ratings. As detailed in
Figure 1, there were several clear patterns.
First, low ratings were associated with lesions in
the right hemisphere more than with lesions in the left
hemisphere. Patients with lesions in the left hemisphere gave, on average, ratings that were 0.42 ± 0.19 higher than patients with lesions in the right hemisphere. A 2 × 5 analysis of variance of subject group
(only individuals with unilateral lesions in either the
left or right hemisphere were used in this analysis) by
emotion showed a significant effect of side of lesion,
F(1, 245) = 4.80, p < .05, but no effect of emotion
and no interaction.
Second, when visualizing the lesion overlaps from
all 66 participants, two regions within the right hemisphere stand out as consistently associated with low
ratings for several emotions: the right frontal cortex
and the right temporal pole (see Figure 1a). Damage
in the right frontal cortex was associated with low
ratings for every emotion, whereas damage in the
right temporal pole was associated with low ratings
for all emotions except happiness and fear. The region
within the right frontal cortex encompassed frontal

operculum and included much of Brodmann's areas 9, 10, 44, 45, and 46 (cf. Figure 4 [shown later]). Lesions in the frontal pole (Brodmann's area 10) resulted in low
ratings especially for surprise, fear, and anger. It is
interesting that a different pattern was seen for sadness: damage in the right and in the left frontal operculum, but not in the frontal pole, resulted in the
lowest ratings. There was also some involvement of
right somatosensory-related cortices, including the
right insula, S-I, and S-II, a pattern most notable for
surprise, fear, and especially sadness. We calculated
difference images analogous to all of those shown in
Figure 1a also with a partition that compared the 20
participants with the lowest ratings with the 20 with
the highest ratings (as done for the correlation measures in Figure 2c, shown later) and obtained
essentially the same pattern of results, confirming that
the data shown are robust.
We calculated the probabilities that the pattern of
lesion density we observed in Figure 1a could have
arisen by chance, in relation to two locations in the
right prefrontal cortex (indicated by arrows in Figure
1a). The more posterior location, in the right premotor
cortex, gave the following p values: p > .10 for happy
stimuli and p < .05 for all other emotions (uncorrected
p values from binomial distribution, cf. the Method).
The more anterior location, in the right frontal pole,
gave p values that were not significant (ps > .10) for
happy and sad stimuli, that did not quite achieve
significance for surprise stimuli (p = .053), and
that did reach significance for fear and anger stimuli
(ps < .01).
Another interesting but complex question concerns
the precise reasons why lesions in the regions shown
in Figure 1a might result in low ratings: perhaps individuals with those lesions were generally insensitive


Figure 1. Lesion Distribution From Emotion Ratings. The neuroanatomical sites of lesions were analyzed as a function of
task performance from the ratings that were given to each emotion. MAP-3 images are shown in which the color at each voxel
represents the difference in the overlaps of lesions from those individuals with performances in the bottom half of the
distribution, compared with those with performances in the top half. The scale at the bottom shows how color corresponds to
number of lesions: Blue colors correspond to a larger number of lesions from individuals with performances in the top half;
red colors correspond to a larger number of lesions from individuals with performances in the bottom half. The performance
measure used to partition the group was the rating of the intensity of the intended emotion conveyed by prosodic stimuli (e.g.,
rating how happy a happy voice sounded).
a. Data from all 66 brain-damaged participants, including those with unilateral and those with bilateral lesions. White arrows
in the top image indicate the voxels in the right premotor and the right polar cortex for which the probabilities reported in the
Results were calculated.
b. Data from 52 of the 66 who had exclusively unilateral lesions.
c. Data from 14 of the 66 who had bilateral lesions. Note that b and c were generated as subsets of the same partition used
to derive the data in a.

Some clues to this issue can be obtained from a detailed examination of the profile of ratings given to each emotion, across all the emotion labels. Table 6 shows such data, broken down for each of the groups that went into Figure
1a: the bottom (lowest ratings) and the top (highest
ratings) partitions that were made on the basis of rating each of the five emotions on the intended label
(the values marked with asterisks in the tables). Across all the cells in the
table, a general observation is that the group in the
lowest partition tended to give low ratings across the
board, indicating that at least a subset of the regions
indicated in Figure 1a are responsible for a generally
reduced sensitivity to the intensity of emotion. It is
also informative to examine the values along the diagonal in each table, corresponding to rating the
stimulus on the intended emotion. Although every cell
in the diagonal has a lower value for the bottom 33
participants than for the top 33 participants (see Table
6), some cells show an especially large difference. Of
course, the marked cell would be expected to
show a large difference because this is the basis on
which the partitions were derived in the first place.
For happiness, all diagonal cells show a large difference, whereas for sadness the difference is large for
sad and less striking for other emotions. This indicates
that the regions associated with low ratings of happy
stimuli in Figure 1a are also associated with giving
low ratings to all other emotions; conversely, the regions associated with low ratings of sad stimuli shown
in Figure 1a appear to be relatively specific only to
giving low ratings on sad stimuli but not on stimuli
depicting other emotions. For surprise, the difference
in the diagonal elements is largest for surprise and
anger; for fear, the difference is largest for fear, anger,
and to some extent, surprise; and for anger, the difference is largest for fear, anger, and surprise. The
pattern that emerges from this latter examination is
that low ratings on fear, anger, and surprise tended to
go together, a pattern that also emerges from a visual
examination of Figure 1 and which we take up again
below.
Another interesting question that can be explored
from Table 6 concerns not the general sensitivity to
emotional intensity but the specificity with which
someone recognized the emotion. For this issue, we
look at the off-diagonal values. In general, ratings
given by individuals in the bottom partition to labels
other than the intended one were not all that different
from ratings given by individuals in the top partition
(as might be expected, both groups gave ratings that
approached floor in many cases). For happiness, the
low partition actually gave higher ratings to happy
stimuli on the labels sad, angry, and afraid, indicating
that some individuals who gave low ratings to happiness perhaps did so because they mistook it for another emotion. A similar effect was seen with anger,
to which the low partition gave higher ratings on the
label fear. These patterns indicate that both low sensitivity and low specificity contributed to the pattern of lesion sites identified in Figure 1.
The findings from Figure 1a raise the possibility
that the right frontal polar hot-spot might be attributable to participants with bilateral lesions because most
patients with damage to the frontal polar cortex had
bilateral lesions (typically due to the bifrontal meningioma resection). We therefore used the same partitions of participants as those used to derive Figure 1a
but further separated them into those participants with
unilateral lesions (Figure 1b) and those with bilateral
lesions (Figure 1c). The findings confirm that surprise, fear, and anger are given unusually low ratings
of intensity, on average, by individuals with bilateral
frontal polar damage (although there is substantial
variability between patients). Individuals with bilateral frontal polar lesions (N 7) gave ratings to fear
stimuli that were significantly lower than ratings
given by participants with unilateral lesions, t(20)
3.24, p < .005, and, on average, also gave low ratings
to anger, fear, and surprise (1.03, 1.02, and 0.54 SD
below the mean of controls) but not to happiness or
sadness (0.55 and 0.07 SD above the mean of controls), as detailed in Table 7 and as would be predicted
also from an inspection of Figure 1. However, as
Table 7 also emphasizes, there was substantial
variance between patients, reflecting the variability
in their anatomical lesions. Specifically, individuals
#318, #1983, and #2021 had the largest bilateral lesions that included most of the frontal polar cortex,
and these three also gave the lowest ratings on the
emotions mentioned above. The other patients, who
gave more variable ratings that were not as low, had
much smaller lesions (#770, #1815), lesions that were
very asymmetrical (mainly on one side and very small
on the other side; #1584, #770), or lesions that were
situated more posteriorly and encroached only minimally on the frontal pole (#500). Thus, it appears that
partial damage to sectors of the prefrontal cortex, including unilateral damage or damage sparing polar
cortex, is not generally sufficient to produce the low
ratings we found; large bilateral frontal polar lesions
are required.

Neuroanatomical Distribution of Correlation and Accuracy Scores
To visualize the regions wherein lesions were associated with poor recognition due to abnormal ratings across several emotions, rather than due to low
judgments of the intensity of a single emotion, we
next analyzed the distributions of participants' correlations of ratings given across all emotion labels (measure [2] described in the Method; see Table 5).


Figure 2. Lesion Distribution From Correlation and Accuracy Scores.


a. MAP-3 difference overlap images, calculated as before, are shown for the 33 individuals
with the highest correlations (most normal ratings; blue colors), compared with the 33 with
the lowest correlations (most abnormal ratings; red colors). The dependent measure was the
mean correlation across all emotions.
b. MAP-3 difference overlap images for a derived accuracy measure. The accuracy measure
was obtained by picking the emotion with the maximum rating and determining whether it fit the
intended emotion, and the measure took on the discrete values 0, 0.25, 0.5, 0.75, 1.0. This
accuracy measure was then averaged across all emotions to obtain the dependent measure
used for this neuroanatomical analysis. MAP-3 difference overlap images are shown for the
25 participants with the highest mean accuracy scores (blues), contrasted with the 25 who had
(Caption continued below.)


Such correlations might give an indication of abnormal emotion recognition even with intact intensity
ratings (for instance, if someone gave high ratings to
a stimulus on every emotion label). We present here
the neuroanatomical analyses based on the mean correlation scores across all of the stimuli, an analytical
procedure identical to one we have previously published with respect to recognition of emotion from
facial expressions (Adolphs et al., 2000). Note that
this analysis does not permit us to draw conclusions
about individual emotions, as we did for Figure 1. As
Figure 2a shows, damage in the left frontal operculum (principally Brodmann's area 45; see Figure 4, shown later), in sectors of the right frontal cortex, and even extending into the right somatosensory cortex in the anterior parietal lobe, all interferes with recognition of emotion from prosody when using this measure. The
statistical significance of voxels within the red region
in the right frontoparietal cortex shown in Figure 2a,
calculated from the binomial distribution of lesion
overlaps, was p < .01 (we sampled 10 different voxels
within this red region, all of which gave identical p
values). Voxels within the red region shown in the left
frontal operculum, however, did not quite reach significance (p .0625) because of the small total number of patients with lesion at that location (4 patients
total, all of whom were in the bottom partition).
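
As a rough sketch of this correlation measure (the precise definition is given in the Method and may differ in detail), one can correlate each participant's ratings across the five labels with the mean ratings given by normal controls and average the result over stimuli; the rating arrays of shape (stimuli x labels) are hypothetical names.

    import numpy as np

    def mean_profile_correlation(participant_ratings, control_mean_ratings):
        """Approximation of measure [2]: per-stimulus Pearson correlation of a
        participant's ratings across all emotion labels with the mean control
        ratings, averaged over stimuli. A participant who simply rates every
        stimulus high on every label keeps intact intensity ratings but yields
        a flat profile and hence a low score on this measure."""
        rs = [np.corrcoef(p, c)[0, 1] for p, c in
              zip(np.asarray(participant_ratings),
                  np.asarray(control_mean_ratings))]
        return float(np.nanmean(rs))  # nanmean guards against flat profiles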
We carried out a further analysis by partitioning
individuals on the basis of a derived accuracy score (measure [3] described in the Method). This accuracy score, obtained from the maximum of the ratings that an individual gave to a stimulus, correlated significantly with the correlation scores used above when averaged across all emotions, Spearman's R = 0.27, t(64) = 2.2, p < .05, and correlated especially well for those emotions that people often recognized incorrectly according to this derived accuracy measure (happiness: R = 0.10, ns; sadness: R = 0.10, ns; anger: R = 0.33, t(64) = 2.7, p < .01; fear: R = 0.61, t(64) = 6.1, p < .00001; surprise: R = 0.44, t(64) = 3.9, p < .0005). When we generated the
neuroanatomical subtraction image, using the mean
derived accuracy measure, we obtained findings
similar to those obtained with the correlation measure. Lesions in the right somatosensory and the right
frontal cortices were systematically associated with
low accuracy scores (see Figure 2b). This analysis
also showed a hot-spot in the right frontal pole, as we
had found with the analysis of the individual rating
scores.
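
A sketch of measure [3] follows; the only assumption beyond the description above is how ties between labels are handled (argmax takes the first maximum).

    import numpy as np

    def derived_accuracy(ratings, intended_labels):
        """Measure [3], sketched: a stimulus counts as correct when the label
        receiving the maximum rating is the intended emotion. ratings has
        shape (n_stimuli, n_labels); intended_labels holds one label index per
        stimulus. Averaging the 0/1 scores over the four sentences per emotion
        is consistent with the discrete values 0, 0.25, 0.5, 0.75, and 1.0
        described in the Figure 2 caption."""
        intended = np.asarray(intended_labels)
        hits = (np.argmax(np.asarray(ratings), axis=1) == intended).astype(float)
        per_emotion = {int(e): hits[intended == e].mean()
                       for e in np.unique(intended)}
        return per_emotion, hits.mean()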
To obtain another, more conservative contrast, we
repeated the analysis given for Figure 2a based on
the correlations but this time with only those 20 participants whose correlation scores were the lowest
contrasted with lesions from those 20 participants
whose correlation scores were the highest. This contrast of the extremes of performances confirmed the
above findings (see Figure 2c). In particular, the regions associated with abnormal emotion recognition
were the right frontal cortex, the ventral sectors of

Figure 2 (Continued)
the lowest mean accuracy scores (yellows and reds). This partition divided the group into
those with mean accuracy scores of 0.7 or lower and those with mean accuracy scores of 0.8 or higher. The
middle 16 participants were excluded because they did not differ on their mean accuracy
scores.
c. MAP-3 difference overlap images of mean correlation scores, as in (a), but only for the 20
individuals at either extreme of the performance range. MAP-3 difference overlap images are
shown for the 20 with the highest correlations (most normal ratings; blue colors), compared
with the 20 with lowest correlations (most abnormal ratings; red colors). Data from the
middle 26 participants were not used in this analysis, so that we could obtain a contrast
between extremes of performance. This analysis corroborates the results obtained when data
from all 66 participants were analyzed. The bottom row shows coronal sections at the
locations indicated by the white lines in the three-dimensional images above.
d. Histograms of the number of individuals who had a given score, indicating the lower (red)
and upper (blue) partitions. On the left are the data from accuracy scores; the gray bar
indicates individuals with scores in the middle who were omitted from the analysis. On the
right are the data from correlations, showing the partitions into the bottom and top 20 (dark
red and blue) and into the bottom and top 33 (all reds and all blues). The purple bar in the
middle includes some individuals who were in the bottom-33 partition, and some who were
in the top-33 partition (not separated in this histogram because of the scale). In each graph,
the y-axis encodes number of individuals with a given score, and the x-axis gives the score.


Table 6
Ratings Corresponding to Figure 1a

Partition: Happy ratings

Bottom 33 participants
                       Label rated
Stimulus      Happy    Sad    Angry   Afraid   Surprised
Happy          2.8*    1.0     0.8     0.7       1.2
Sad            0.5     3.3     1.1     1.2       0.9
Angry          1.3     1.7     2.8     0.8       1.2
Afraid         1.6     2.1     1.1     2.3       1.8
Surprised      2.4     0.9     0.9     0.7       3.2

Top 33 participants
Happy          4.1*    0.6     0.3     0.2       2.1
Sad            0.3     4.2     1.0     2.2       0.8
Angry          1.4     1.8     3.7     0.5       0.8
Afraid         1.9     2.5     0.8     3.1       2.6
Surprised      3.4     0.7     0.3     0.4       4.3

Partition: Sad ratings

Bottom 33 participants
Happy          3.3     0.7     0.5     0.4       1.3
Sad            0.4     3.2*    1.0     1.4       0.8
Angry          1.3     1.4     3.2     0.6       1.0
Afraid         1.6     2.1     0.9     2.6       1.9
Surprised      2.6     0.7     0.6     0.5       3.5

Top 33 participants
Happy          3.7     0.9     0.7     0.5       2.0
Sad            0.5     4.4*    1.3     2.0       0.9
Angry          1.4     2.1     3.5     0.8       1.1
Afraid         2.0     2.6     1.1     2.9       2.7
Surprised      3.3     0.9     0.6     0.7       4.1

Partition: Anger ratings

Bottom 33 participants
Happy          3.3     0.7     0.5     0.4       1.3
Sad            0.4     3.6     1.2     1.4       0.8
Angry          1.4     1.7     2.5*    0.8       0.9
Afraid         1.7     2.2     1.0     2.1       1.9
Surprised      2.8     0.7     0.5     0.5       3.4

Top 33 participants
Happy          3.7     0.9     0.6     0.5       2.0
Sad            0.5     4.0     1.1     2.1       1.0
Angry          1.4     1.8     4.2*    0.6       1.2
Afraid         1.9     2.5     1.0     3.4       2.7
Surprised      3.2     0.9     0.7     0.7       4.3

Partition: Fear ratings

Bottom 33 participants
Happy          3.2     0.7     0.6     0.5       1.3
Sad            0.4     3.5     1.1     1.2       0.7
Angry          1.4     1.5     2.6     0.7       0.9
Afraid         1.8     2.1     1.0     1.6*      1.8
Surprised      2.6     0.8     0.6     0.5       3.3

Top 33 participants
Happy          3.7     0.8     0.4     0.4       1.9
Sad            0.5     4.0     1.2     2.2       1.0
Angry          1.3     2.0     3.9     0.7       1.2
Afraid         1.8     2.5     1.0     3.8*      2.6
Surprised      3.2     0.8     0.5     0.7       4.2

Partition: Surprise ratings

Bottom 33 participants
Happy          3.2     0.8     0.7     0.6       1.3
Sad            0.5     3.5     1.2     1.4       0.9
Angry          1.4     1.9     2.7     0.9       1.1
Afraid         1.8     2.2     1.1     2.4       1.9
Surprised      2.7     0.8     0.7     0.7       3.0*

Top 33 participants
Happy          3.8     0.8     0.5     0.3       2.0
Sad            0.4     4.1     1.1     2.1       0.9
Angry          1.3     1.7     3.9     0.6       1.0
Afraid         1.8     2.5     0.9     3.1       2.7
Surprised      3.2     0.8     0.5     0.5       4.8*

Note. Details of ratings, given on labels for different emotions, are shown for each of the partitions used to generate the images shown in Figure 1a. Each row gives the mean ratings for one stimulus class; each column gives the mean rating on one label. The particular rating on which the partition was decided is marked with an asterisk in each table.

The concordant results given in Figures 2a, 2b, and 2c, using three different approaches, demonstrate that the regions we identified play a role in recognizing emotion from prosody because lesions within them systematically result in lower scores than do lesions elsewhere in the brain.
The distributions of scores from which we derived
the partitions used in Figures 2a, 2b, and 2c are shown
in the histograms in Figure 2d: on the left for accuracy
scores and on the right for correlations. Note that the
raw correlation scores are plotted, which are not normally distributed (their Z-transforms, however, would be).

Neuroanatomical Distribution of Scores for Recognizing Emotion From Prosody Compared With Scores for Recognizing Emotion From Facial Expressions
To attempt to distinguish those regions that might
be associated with recognition about emotions across
modalities from those associated with recognition of
the emotion from a particular sensory modality, we
compared findings from the present study with those
from a previous study that investigated recognition of
emotion from facial expressions in some of the same
individuals who participated in the present experiment
(Adolphs et al., 2000). We focused our comparison on the measure that we have previously published for facial expressions: the overall correlation of participants' ratings with mean normal ratings, averaged across all emotions. This analysis was limited to the 46 of our original 66 participants who had participated in both studies.

Table 7
Ratings of Brain-Damaged Participants Compared With Normal Ratings (Z-Scores)

Side / Participant no.    Happy    Sad     Angry    Afraid   Surprised
Left (M)                  0.80     0.19    0.25     0.35     0.70
Right (M)                 0.36     0.17    0.25     0.16     0.06
Bilateral (M)             0.55     0.07    -1.03    -1.02    -0.54
318                       0.76     2.28    -2.13    -0.86    0.54
500                       0.35     0.83    1.39     -2.37    1.00
770                       1.13     0.34    0.02     -1.08    1.30
1584                      0.39     3.05    0.33     -0.22    0.85
1815                      1.50     0.83    0.33     0.00     1.30
1983                      0.35     2.08    -3.89    -1.73    -4.24
2021                      0.76     1.31    -3.19    -0.86    1.78

Note. Shown are ratings of the intensity of the emotion on its intended label, in units of SD above or below the normal control mean. Means are shown for participants with left, right, or bilateral frontal damage, and individual data are shown for the 7 participants with bilateral frontal damage shown in Figure 1c.
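
The z scores in Table 7 express a participant's mean rating of each emotion on its intended label relative to the normal control distribution. A minimal sketch follows; the use of the sample standard deviation is an assumption.

    import numpy as np

    def control_zscore(patient_mean_rating, control_ratings):
        """Express a patient's mean rating of an emotion on its intended label
        in SD units above (+) or below (-) the normal control mean, as in
        Table 7."""
        controls = np.asarray(control_ratings, dtype=float)
        return (patient_mean_rating - controls.mean()) / controls.std(ddof=1)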


As Figure 3a shows, low correlation scores were
associated with damage in the right frontal cortex and
the lower tier of the right frontoparietal operculum as
well as with damage in the left frontal operculum, both when examining the data from prosody and when examining the data from facial expressions. Thus, these three regions are important in the recognition of emotion for both modalities. Damage to the right anterior temporal lobe was associated with high correlation scores for facial expressions, and damage to the left anterior temporal lobe was associated with high correlation scores for prosody (see Figure 3a), raising the
possibility that the left and right temporal poles might
be differentially involved in recognizing emotion
from the face or the voice.
A within-subject comparison of performance on
both tasks confirmed this impression and offered additional findings. First (see Figure 3b), an analysis of
the overlaps of lesions of those participants who had
low scores (i.e., who were in the bottom 50% partition) from both prosody and facial expression (N
13) revealed a maximal overlap of lesions in the bilateral frontal operculum and the right frontoparietal
operculum. Second, damage to the left temporal pole
resulted in worse recognition of emotion from facial
expressions than from prosody, whereas damage to
the right temporal pole resulted in worse recognition
of emotion from prosody than from facial expression
(see Figure 3c). Both of these findings are consistent
with the data from prosody and from facial expression
analyzed separately (see Figure 3a). However, it is
important to point out that neither right nor left temporal pole damage in fact led to a particularly low
performance score, when compared with all other participants (as indicated in Figure 3a, participants with temporal polar damage in general had better performances than those with lesions elsewhere).

Figure 3. Comparing Prosody and Facial Expression
Recognition. Data used to partition the groups were correlation scores for 46 individuals who participated in
rating emotions from both prosody and faces. MAP-3 images are shown in which color corresponds to the number
of participants with lesions at a given location, as indicated on the respective color scales. In all cases, the dependent measure was the mean correlation score across all stimuli.
a. Data from prosody and from facial expressions shown
individually. Color represents the difference between the
number of lesions from individuals in the bottom 50% compared with those in the top 50% (23 in each partition).
b. Location of lesions associated with compromised recognition of emotion from both prosody and faces. Shown are
the overlaps of lesions from individuals who were in the
bottom partitions both for faces and for prosody (N = 13).
c. Location of lesions associated with differential performance on faces and prosody. To obtain these data, we first
calculated the difference in performances on prosody and on
faces (see Method). Overlaps of lesions from all individuals
in one of the two partitions of this derived difference measure are shown (22 were better on prosody than they were
on faces; 24 were better on faces than they were on
prosody). Note that this analysis yields the relative performances on faces as compared with those on prosody and
does not necessarily imply that participants were impaired
on either class of stimulus.

Table 8
Differences in Correlation Scores on Faces and Prosody From 11 Individuals With Unilateral Anterior Temporal Damage

     Individuals
  Left       Right
 -.100        .002
 -.060        .017
 -.056        .031
 -.027        .097
 -.010        .160
  .008

Note. Score is faces minus prosody: Negative values indicate better performance on prosody; positive values indicate better performance on faces. Means for left- and right-brain-damaged individuals are -0.041 and 0.062, respectively. Standard errors of measurement for left- and right-brain-damaged individuals are 0.017 and 0.033, respectively.

Rather, the pattern revealed in Figure 3c is a relative comparison of performances on recognizing emotion from prosody, or from faces, within the same person (someone could perform better than the average brain-damaged patient on both tasks but perform far above
average on prosody while performing only marginally
better than average on faces, resulting in a large difference score between the two). Out of 6 individuals
with left anterior temporal lobe lesions, 5 had a lower
correlation score on faces than they did on prosody,
whereas 5 out of 5 individuals with right anterior
temporal lobe lesions had a lower correlation score on
prosody than they did on faces. These findings are
summarized in Table 8, and in Figure 3c, where there
is maximal overlap of lesions in the left temporal pole
when emotion recognition is worse from facial expression than it is from prosody and maximal overlap
in right temporal pole when emotion recognition is
worse from prosody than it is from facial expression.
When comparing prosody and facial expression, the
difference between performances resulting from left
and right anterior temporal lobe damage was statistically significant (MannWhitney U test: p < .01). Because of the within-subject measure used (prosody vs.
facial expression in the same person), this double dissociation was statistically significant, even though
differences between participants with left and right
anterior temporal damage were not significant when
assessed for facial expression or for prosody in isolation (MannWhitney U tests: all ps > .10).


Discussion
The data from Table 5 show that brain damage
results in a larger variance in performance, compared
with controls, despite an essentially normal mean performance. This is especially so for negative emotions,
and it is a finding that is very general in neuropsychology. The reasons for it are that not all brain damage is equal: Whereas damage in some regions will
leave performance on a particular task unaffected,
damage elsewhere may result in severely impaired
performance. The aim of the present study concerned
the extent to which the variance seen in the performances given by brain-damaged participants could be
attributed to damage in specific brain regions. Might low performance scores result from damage in specific regions? One might expect that those participants with the lowest scores would share in common damage to one or a few brain regions that are responsible for their impairment; conversely, one would expect that those participants with the highest scores would not share damage in those same regions. We addressed this question by mapping the lesions of subpopulations of our brain-damaged patient sample as a function of their task performances. The findings from this analysis can be summarized as follows (see Figure 4).

Figure 4. Neuroanatomical Summary of Findings. Shown at the top is a left lateral view of the human brain indicating Brodmann's areas; shown below are left and right lateral
views in which the regions found to be associated with low
performance when lesioned are indicated in black. These
regions, presumed therefore to be important to recognize
emotion from prosody, include the bilateral frontal pole, the
left frontal operculum, and the right motor and somatosensory-associated cortices in the parietal and frontal lobes. It
is proposed that these regions constitute some of the key
components of a system by which we recognize emotions
from prosody.


low performance scores result from damage in specific regions? One might expect that those participants
with the lowest scores would share in common damage to one or a few brain regions that are responsible
for their impairment; conversely, one would expect
that those participants with the highest scores would
not share damage in those same regions. We addressed this question by mapping the lesions of subpopulations of our brain-damaged patient sample as a
function of their task performances. The findings
from this analysis can be summarized as follows (see
Figure 4).

Neuroanatomical Correlates of Recognition of Emotional Prosody
First, lesions in the frontal pole, the frontal operculum, and the lateral frontal cortex on the right (encompassing motor and premotor cortex), and in the lower tier of the frontoparietal operculum and insula on the right (including somatosensory-related cortices), were associated
with lower recognition scores. In addition, there was
some involvement of sectors in the right anterior temporal lobe and in the left frontal operculum (Figures 1
and 2). The details here varied somewhat, depending
on which measure was used to assess recognition.
When using raw rating scores, lower emotional
prosody recognition was associated mostly with damage in the right frontal and temporal polar cortices
(see Figure 1). When using correlation scores, lower
emotional prosody recognition was associated mostly
with damage in the right frontoparietal cortex and the
left frontal operculum (see Figure 2a). Accuracy
scores showed some patterns common to both of these
(see Figure 2b).
Second, overall, the most robust finding was an
association between compromised recognition of
emotion from prosody and damage in the right frontoparietal cortices, specifically the frontal operculum
and the lower tier of frontoparietal operculum, a pattern that emerged when we analyzed the raw ratings,
correlations of the ratings, or derived accuracy scores.
These findings are also broadly consistent with several prior
studies that have implicated the right hemisphere, and
in particular the right prefrontal cortex, in recognition
of emotional prosody (cf. the Introduction). A recent
fMRI study of emotional prosody (Buchanan et al., 2000) found that activation in controls processing emotional prosody was located in sectors quite similar to those our study revealed as critical: the right frontal and the right anterior parietal cortex. These

brain regions are involved in premotor, motor, and somatosensory functions.
Overall, the findings support the hypothesis that
structures in the right hemisphere play an important
role in the recognition of emotion from prosody, a
conclusion also advanced in prior studies. Our data
show statistically significant differences in performances given by participants with damage in the left
or in the right hemisphere. From an inspection of the
neuroanatomical figures, this lateralization of performance was most prominent for negatively valenced
stimuli and less prominent for happiness, as is also
consistent with a recent functional imaging study
(Buchanan et al., 2000). Although this lateralization is
in line with the idea that the right hemisphere is relatively specialized to process emotions, it does not address the further possibility that the right hemisphere
might be relatively specialized for recognition of
negatively valenced stimuli, as some have proposed
(cf. Davidson, 1992; Davidson & Hugdahl, 1995).
Despite the disproportionate importance of the right
hemisphere structures in recognizing emotion, we
also found evidence for the involvement of the left
frontal operculum, a region that may participate together with the right hemisphere in emotion recognition. The left frontal operculum has also been found to
be involved in functional imaging studies of emotional prosody processing (Imaizumi et al., 1997).
The set of regions revealed by our analysis bears
some similarity to the set of regions critical to recognize emotions from facial expressions (Adolphs et al.,
2000). It is thus plausible that there is a set of right
hemisphere structures shared across studies that have
investigated emotion recognition and that is critical to
retrieve knowledge about emotion regardless of the
type of stimulus used. To address this issue directly,
we provided data from 46 of the participants who had
rated emotion both in prosodic stimuli and in visually
presented facial expressions. Those findings (see Figure 3) can be summarized as follows.

Comparisons Between Faces and Prosody


First, recognition of emotion from both prosody
and facial expression was compromised by lesions in
the right frontal operculum and the lower tier of the
frontoparietal operculum, as well as in the left frontal
operculum.
Second, lesions in the anterior temporal lobe affected differentially the recognition of emotion from
facial expressions or from prosody, depending on whether they were located on the left or on the right, respectively. Although such lesions resulted in systematic within-subject differences in the ability to recognize emotion from prosody as compared with facial
expressions, they did not reflect worse performances
compared with the rest of the brain-damaged participants.
Nearly all participants responsible for generating
the differences between faces and prosody, as shown
in Figure 3c, were individuals who had temporal lobectomy, with damage that included the temporal
pole, the amygdala, and the anterior parahippocampal
gyrus to variable degrees. Of particular interest is the
involvement of the amygdala, which has been demonstrated to play a key role in recognition of emotion
from facial expressions but whose contribution to recognition of emotional prosody is still debated
(cf. the Introduction). The present within-subject
comparisons provide a particularly powerful approach
to contrast processing of facial affect with prosodic
affect and point toward a possible differential role of
the amygdala in the left and right hemispheres: the left
amygdala may be more important in processing emotion from facial expressions, whereas the right amygdala may be more important in processing emotion
from prosody.
However, we should point out that a recent study
investigating recognition of facial emotion following
unilateral temporal lobectomy (Anderson, Spencer,
Fulbright, & Phelps, 2000), as well as our own observations (Adolphs et al., 2001), indicated that the
right, not the left, anteromedial temporal lobe is important for recognizing emotions from facial expressions. This finding would appear to be at odds with
the finding of the present study that damage to the left
anteromedial temporal lobe results in worse recognition of emotion from faces than from prosody. However, this discrepancy is only apparent: in fact, neither
participants with left nor those with right temporal
lobectomy in the present study were actually worse at
recognizing either emotional faces or prosody, compared with the other brain-damaged individuals (cf.
Figure 3a). It is only through comparing the relative
performances between recognizing emotion from
prosody and from faces, within the same participants,
that the pattern that we report in Figure 3c emerges.
The fact that a patient with a given lesion is slightly
worse in recognizing emotion from faces than from
prosody does not imply that such a lesion results in
worse recognition of emotion (in either modality)
when compared with participants with brain damage
elsewhere. Although our reported dissociation achieved statistical significance using a nonparametric test, the actual performance differences are small
(see Table 8), and it will be important in future studies
to follow up this finding, perhaps by using functional
imaging of facial and prosodic emotion in the same
individuals.

Limitations of the Method


There are several important issues to consider in
relation to the stimuli we used. Our aim was to use a
natural, relatively unconstrained set of vocal stimuli
that would nonetheless clearly convey prototypical,
basic emotions. We did not further process or manipulate the stimuli, with one notable exception: We did
normalize them for mean amplitude (mean amplitude
over the entire duration of the stimulus and over all its
frequency components). Although this may have introduced an artifact in its own right (such as the perception of hearing someone speak loudly but at a
greater distance), this manipulation would remove a
possible cue (the relative loudness) that we did not
want people to use when giving their ratings. Our
consideration of this issue was informed by prior observation with brain-damaged patients who may have
difficulty recognizing emotion and who will quickly
adopt alternative compensatory strategies to produce
an often normal-seeming performance. For instance, a
loud angry stimulus might fail to be recognized as
angry by a brain-damaged patient on the basis of its
spectral and rhythmic qualities alone, but the individual might nonetheless reason that, because the
stimulus was overall louder than the others, this
should be anger. We wanted to eliminate as much as
possible the availability of such alternate, simple
strategies. It should be noted that the only class of
stimuli that was noticeably changed by the amplitude
normalization was our anger stimuli.
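
For concreteness, the kind of normalization described here can be sketched as a rescaling of each waveform to a common mean (RMS) amplitude; the target level is arbitrary, and the actual processing used in the study may have differed in detail.

    import numpy as np

    def normalize_mean_amplitude(waveform, target_rms=0.1):
        """Rescale a waveform so that its mean amplitude (RMS over the entire
        duration and all frequency components) matches a fixed target,
        removing overall loudness as a cue while leaving spectral and rhythmic
        structure intact."""
        x = np.asarray(waveform, dtype=float)
        rms = np.sqrt(np.mean(x ** 2))
        return x * (target_rms / rms)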
Another issue concerns the particular sentences we
used. It has been shown that, under some circumstances, the semantic content of sentences can interact
with their prosodic content. Individuals with damage
in the right hemisphere can be impaired not only in
their ability to perceive and categorize the emotional
prosody, but they can also show increased interference between the prosodic and the semantic contents
of spoken sentences (Bowers et al., 1987). It is consequently imperative to ensure that the stimuli used in
any study investigating emotional prosody are semantically neutral (e.g., Heilman, Scholes, & Watson,
1975; Peper & Irle, 1997a) or, better yet, semantically
uninterpretable (Banse & Scherer, 1996; Scherer et al., 1991). We chose sentences that would be expected to be relatively neutral in their semantic content, and
we verified in normal people that these sentences
were indeed judged to be semantically neutral with
respect to their lexical content (see the Method). Furthermore, all our analyses use a dependent measure
that averages over the four sentences, thus reflecting
the prosodic content that they share in common rather
than the semantic aspects on which they might differ.
It remains possible, of course, that even though the
sentences were semantically neutral, their semantic
content interacted with the prosody in some complex
fashion to influence the emotion judged. This issue
could be addressed in future studies using stimuli that
are composed of speech sounds without lexical meaning, as others have done (Banse & Scherer, 1996;
Scherer et al., 1991).
Finally, of course, our findings are limited to the
emotions that we used in the stimuli. A further open
question concerns how broad the role played by the
neural regions we identified might be. For example,
might they participate also in processing information
about general emotional valence or general arousal
from prosody? Some indications along these lines
emerge from our data: Frontal polar lesions appear
to result in an inability to recognize several emotions
of high arousal. It remains an open issue how best to
conceptualize the mapping between the pattern of
acoustic parameters provided in a stimulus and the
membership of that stimulus in a given emotion category, and the further question of how this relates to
processing by the brain. One possibility is that we
should focus on the standard emotion categories; another possibility is that we should focus instead on the
basic auditory parameters. A position that may reconcile these views on either end of the spectrum is the
idea that emotions should be considered as assembled
dynamically from patterns of auditory parameters that
correspond to psychologically meaningful basic components; there are data from normal people that support this model (Banse & Scherer, 1996), but it remains to be explored in neuroscience studies.
Although the present study defaults to the use of
basic emotion categories, we fully acknowledge
that considerably more theoretical work needs to be
done in deciding how best to group different stimuli
into neurobiologically meaningful categories.

Limitations of the Analysis


There are two primary technical limitations regarding our study. First, the neuroanatomical conclusions

that we draw are limited by the homogeneity and


sampling density of the lesions. Not all brain regions
were sampled with equal density (cf. Table 2), and not
all lesions were of the same size, involved the same
proportion of white matter as compared with gray
matter, or were of the same etiology. It is particularly
important to bear in mind that negative findings in our
neuroanatomical analyses (regions that look only
green or yellow but not red or dark blue) can result
simply from sparse overall sampling of that region;
that is, it could be that only a small number of total
participants had lesions in a given region, making it
impossible to observe differences in lesion density
that would achieve strong red or blue colors in the
images. We reported probabilities, calculated from the
binomial distribution of the observed lesion densities,
for some voxels within the red hot-spots reported,
indicating their statistical significance.
Even more important to keep in mind is the fact that
different voxels, each of which may have achieved
statistical significance, are not statistically independent. Most of the lesions in the right premotor and the
right somatosensory cortex resulted from occlusion of
the middle cerebral artery and yielded low performance scores (individuals with such lesions ended up
in the bottom partition of rank-ordered scores). All
that we can conclude with certainty at this point is that
there is some subset of anatomical locations within
right somatosensory/motor/premotor cortex that results in low performance scores, as borne out by finding p values less than .01 for voxels within this region
(cf. Results). However, identifying specific regions
(for instance, comparing the relative importance of
somatosensory and premotor cortices) would require
studies of patients with lesions restricted to one of
those two regions. Replications of the present findings
with additional lesion studies, as well as with studies using functional imaging in normal individuals, will
be important to confirm and further specify the roles
of the neural regions we identified here.
Equally important is the acknowledgement that the
regions wherein damage resulted in worse performance in our study included white matter and subcortical structures. White matter damage might serve to
disconnect cortical regions; for instance, damage to
white matter underlying the right frontoparietal cortex
might disconnect cortex involved in somatomotor representation from the auditory cortices that could provide perceptual information about the stimulus. Ross
et al. (1997) have shown evidence that white matter
lesions in the frontal lobe, serving to disconnect communication between the anterior regions of the two

This document is copyrighted by the American Psychological Association or one of its allied publishers.
This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.

NEURAL SYSTEMS FOR RECOGNITION OF EMOTIONAL PROSODY

cerebral hemispheres, contribute to impairments in


processing prosodic information. Similarly, there is
the possibility that some of the impairments we report
could be attributable, at least in part, to damage in the
basal ganglia, as consistent with some prior studies
(Cancelliere & Kertesz, 1990; Starkstein et al., 1994).
As Figure 2 shows, some of our hot spots in the right
hemisphere did indeed include the right basal ganglia,
although the absence in our study of individuals with
damage restricted to this structure does not permit any
further conclusions on the basis of the present findings.
A second note concerns our analysis. We compared
brain-damaged participants with low scores to those
with high scores. However, the participants with low
scores were not necessarily impaired in the sense that
their scores were below a specified cut-off relative to
the performance of control participants. In fact, we
did not contrast brain-damaged with normal individuals; we only contrasted different groups of braindamaged individuals on the basis of their relative performance. Although our findings do reveal structures
wherein lesions systematically result in low performance, they do not show that such lesions render the
person entirely unable to perform the task. Instead, the
findings are best interpreted as revealing structures
that are components of the distributed network we
outlined in the Introduction (cf. Figure 4).

What Exactly Does Low Performance Reflect?


We used several different analyses to show that
specific regions of the brain, when damaged, result in
compromised processing of emotion from the voice.
However, precisely what do our measures reflect? In
broad scope, all our measures reflect the ability to
recognize the emotion signaled by the auditory stimulus, but different measures focus on different aspects
of recognition.
Such recognition is not all-or-none: One could recognize that someone is feeling unpleasant without being able to distinguish whether they are feeling sad,
angry, or disgusted; one could recognize that someone
is feeling afraid without recognizing normally the intensity of fear that is signaled, and so on. Depending
on the information provided by the stimulus and its
context, and depending on the specific task required
in the experiment, an individual could provide evidence of recognition on one measure but not another.
What then does our first dependent measure reflect,
the rating given to the stimulus on the intended emotion label? This measure reflects the individuals

45

judgment of how intense the stimulus signals that particular emotion. The poor recognition revealed with
this measure could be produced for several different
reasons: Someone might be insensitive to all emotions; or they might be insensitive to sadness alone but
recognize other emotions normally; or they might be
sensitive to sadness but mistake it for another emotion. An exploration of these possibilities is offered by
the data in Table 6. A general finding (see Figure 1)
is that the bilateral frontal pole was important to judge
the intensity of highly arousing emotions, such as surprise, anger, and fear, whereas right frontoparietal regions were important to judge the intensity of sadness.
Our second measure, the correlation of an individuals ratings with normal ratings across all the different
labels, reflects a broader ability to judge the relative
intensity of multiple emotions signaled by a single
stimulus. The right frontoparietal cortex and the left
frontal operculum appeared critical on this measure,
whereas the frontal pole did not. Our third measure,
accuracy scores derived from the maximal intensity
rating, drew on brain regions that were a combination
of those revealed above: right somatosensory and motor-related cortices in the parietal and the frontal cortex, a small region in the left frontal operculum, and
also the frontal pole. This finding is not altogether
unexpected because the accuracy measure in fact
draws both on the absolute intensity that the stimulus
is judged to signal (as for the raw rating measure [1])
and on the relative magnitude of this intensity rating
compared with the ratings given to other emotions (as
for the correlation measure [2]).

Functional Significance of Findings for Emotion Recognition
In a prior study (Adolphs et al., 2000), we suggested that the right hemisphere regions related to
somatosensory processing would be important in order to recognize emotions from facial expressions.
We reasoned, on both theoretical and empirical
grounds, that knowledge about emotions is strongly
associated with knowledge about body states (A. R.
Damasio, 1994) and that the reconstruction of knowledge about emotions expressed by other people might
rely on a simulation of how the emotion would feel in
the perceiver (the person may be unaware of the process of simulation, which could operate covertly). The
notion that somatosensory simulation is important to
retrieve knowledge about emotions signaled by other
individuals is related to the idea that mental simulation guides our knowledge of what goes on in the minds of others, a proposal that has been put forth by
philosophers and cognitive scientists (Carruthers &
Smith, 1996; Goldman, 1992).
The hypothesis that humans recognize some social
stimuli by mimicking or simulating aspects of their
production has a long history, going back to the early
ideas of Lipps (1907) and the subsequent development of the motor theory of speech perception (Liberman, Cooper, Shankweiler, & Studdert-Kennedy,
1967) in the auditory domain, and relating to the finding that infants mimic facial gestures that they see
other people perform (Meltzoff & Moore, 1977) in the
visual domain.
The simulation hypothesis has recently received
considerable attention. In the premotor cortex of monkeys, Rizzolatti and colleagues have reported neurons
that respond not only when the monkey prepares to
perform an action itself but also when the monkey
observes the same visually presented action performed by someone else (Gallese, Fadiga, Fogassi, &
Rizzolatti, 1996; Gallese & Goldman, 1999; Rizzolatti, Fadiga, Gallese, & Fogassi, 1996). Various
supportive findings have also been obtained in humans: Observing anothers actions results in desynchronization in motor cortex as measured with magnetoencephalography (Hari et al., 1998) and lowers
the threshold for producing motor responses when
transcranial magnetic stimulation is used to activate
motor cortex (Strafella & Paus, 2000); imitating anothers actions by means of observation activates premotor cortex in functional imaging studies (Iacoboni
et al., 1999); moreover, such activation is somatotopic
with respect to the body part that is observed to perform the action, even in the absence of any overt
action on the part of the perceiver (Buccino et al.,
2001). It thus appears that primates construct motor
representations suited to performing the same action
that they visually perceive someone else perform, in
line with the simulation theory.
The specific evidence that simulation may play a
role also in recognition of the actions that comprise
emotional signals comes from disparate experiments
but is strongest in the case of recognizing emotions
from facial expressions. Producing a facial expression
to command influences the feeling and autonomic
correlates of the emotional state (Levenson, Ekman,
& Friesen, 1990) as well as its electroencephalographic correlates (Ekman & Davidson, 1993). Viewing facial expressions in turn results in expressions on
one's own face that may not be readily visible but that can be measured with facial electromyography (Dimberg, 1982; Jaencke, 1994) and that mimic the expression shown in the stimulus (Hess & Blairy, 2001);
moreover, such facial reactions to viewing facial expressions occur even in the absence of conscious recognition of the stimulus, for example, to subliminally
presented facial expressions (Dimberg, Thunberg, &
Elmehed, 2000). Viewing the facial expression of another can thus lead to changes in ones own emotional
state; this in turn would result in a re-mapping of
ones own emotional state, that is, a change in feeling
(Schneider, Gur, Gur, & Muenz, 1994; Wild, Erb, &
Bartels, 2001).
The above ideas could explain why damage to right
hemisphere regions that encompass motor- and somatosensory-related cortices were found to be associated with compromised emotion recognition in our
study. They might also provide an explanation for the
role that the basal ganglia have been found to play in
other studies: Like motor and premotor cortices, basal
ganglia would be recruited when the perceiver needs
to engage a routine that simulates the production of
the emotional state that they heard in the stimulus.
Our data from the present study are in line with those
from a prior study that investigated recognition of
emotion from facial expressions (Adolphs et al.,
2000) and point to the importance of both somatosensory and motor-related cortices in the right hemisphere (although, as noted above, it is possible that
either somatosensory or motor-related cortices could
be driving most of the effect because most individuals
in our sample had lesions encompassing both regions
to some extent, making the contribution of these two
regions statistically correlated). This should not be
altogether surprising because motor and somatosensory representations are two sides of the same coin:
producing and feeling an action. In the case of recognizing an emotion, simulation involves both the motor
and premotor cues for producing the stimulus perceived, and the somatosensory cues that would be
present if one were producing the stimulus.
In addition to the right frontoparietal regions, rating
the intensity of certain emotions expressed in prosody
depended on the ventral and polar frontal cortices.
This finding was obtained from individuals with bilateral frontal lesions, which typically encompassed
orbital and polar frontal cortex on both sides, and their
low ratings were notable for emotions of high arousal,
specifically surprise, fear, and anger (cf. Figure 1c
and Table 7). It is well known that such bilateral
frontal lesions result in an impaired ability to express
or experience emotional arousal normally (A. R.
Damasio, 1996; Tranel & Damasio, 1994), consistent with the neuroanatomical projections from orbitofrontal cortices to structures involved in sympathetic autonomic control, such as the periaqueductal grey and
paraventricular hypothalamus (Morecraft, Geula, & Mesulam, 1992; Öngür et al., 1998). In line with our above explanation according to the simulation theory,
it is therefore possible that bifrontal damage impairs
recognition of the intensity of highly arousing emotions because it impairs the ability to reconstruct somatic and experiential components of emotional
arousal.
A final region implicated by our findings is the left
frontal operculum, although low overall sampling
density in that region prevented this finding from
reaching full statistical significance. The same region
is also activated in some imaging studies of imitation
(Iacoboni et al., 1999) as well as in studies of emotional prosody (Imaizumi et al., 1997). As this region
is premotor cortex, its involvement is consistent with
the idea that it is part of a network of structures important for constructing a simulation. Because this
region is also important for the motoric aspects of
language, it is alternatively conceivable that the involvement of the left frontal operculum simply reflects the lexical demands made by our task: Participants had to give numerical ratings on a written
emotion label. However, this possibility is not supported, given that we found no correlation between
Verbal IQ and emotion recognition. Furthermore, if
the involvement of the left frontal operculum could be
attributed solely to compromised language function,
then one should expect more posterior regions in the
left hemisphere (specifically, Wernicke's area rather than Broca's area) to play an even greater role here, because those are the regions involved in the comprehension of language. Given that we did not find any evidence for the involvement of left posterior cortices,
we think it unlikely that the role played by the left
frontal operculum can be attributed solely to its language functions.

Conclusion
Taken together, the findings emphasize the distributed nature of emotional prosody recognition. The
performance on our task draws on multiple cognitive
processes, subserved by multiple neural structures.
Cortical sectors in the right hemisphere appear to be
especially critical, and their location is consistent with
the hypothesis that the recognition of emotion in others requires the perceiver to reconstruct images of
somatic and motoric components that would normally

47

be associated with producing and experiencing the


emotion signaled in the stimulus. On the basis of the
present findings, as well as those of prior studies, a
preliminary sketch of the neural structures that would
contribute toward emotion recognition can be made.
Primary and high level auditory cortices would participate in early and later perceptual processing of the
auditory features of a stimulus; the amygdala and the
orbital and polar frontal cortices would help to link
these perceptual auditory representations to structures
that can enact emotional responses or that can reconstruct associated knowledge; motor and premotor
structures would play a role in simulating components
of the emotional body state normally associated with
producing the emotion (including the right premotor
cortices, possibly the left frontal operculum, and also
possibly the basal ganglia); and finally, somatosensory structures would in turn represent the emotional
body states simulated by the motor structures (specifically, somatosensory-related cortices in the right
hemisphere). These sets of structures thus implement
some of the various component processes whose end
result is recognition of the emotion from prosody.

References
Adolphs, R., Damasio, H., Tranel, D., Cooper, G., & Damasio, A. R. (2000). A role for somatosensory cortices in the visual recognition of emotion as revealed by 3-D lesion mapping. The Journal of Neuroscience, 20, 2683–2690.
Adolphs, R., Damasio, H., Tranel, D., & Damasio, A. R. (1996). Cortical systems for the recognition of emotion in facial expressions. The Journal of Neuroscience, 16, 7678–7687.
Adolphs, R., Schul, R., & Tranel, D. (1997). Intact recognition of facial emotion in Parkinson's disease. Neuropsychology, 12, 253–258.
Adolphs, R., & Tranel, D. (1999). Intact recognition of emotional prosody following amygdala damage. Neuropsychologia, 37, 1285–1292.
Adolphs, R., Tranel, D., & Damasio, H. (2001). Emotion recognition from faces and prosody following temporal lobectomy. Neuropsychology, 15, 396–404.
Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372, 669–672.
Adolphs, R., Tranel, D., Damasio, H., & Damasio, A. R. (1995). Fear and the human amygdala. The Journal of Neuroscience, 15, 5879–5892.
Adolphs, R., Tranel, D., Hamann, S., Young, A., Calder, A., Anderson, A., et al. (1999). Recognition of facial emotion in nine subjects with bilateral amygdala damage. Neuropsychologia, 37, 1111–1117.
Anderson, A. K., & Phelps, E. A. (1998). Intact recognition of vocal expressions of fear following bilateral lesions of the human amygdala. NeuroReport, 9, 3607–3613.
Anderson, A. K., Spencer, D. D., Fulbright, R. K., & Phelps, E. A. (2000). Contribution of the anteromedial temporal lobes to the evaluation of facial emotion. Neuropsychology, 14, 526–536.
Banse, R., & Scherer, K. R. (1996). Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70, 614–636.
Barrash, J., Damasio, H., Adolphs, R., & Tranel, D. (2000). The neuroanatomical correlates of route learning impairment. Neuropsychologia, 38, 820–836.
Barrett, A. M., Crucian, G. P., Raymer, A. M., & Heilman, K. M. (1999). Spared comprehension of emotional prosody in a patient with global aphasia. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 12, 117–120.
Beck, A. T. (1987). Beck Depression Inventory. San Antonio, TX: Psychological Corporation.
Behrens, S. J. (1985). The perception of stress and lateralization of prosody. Brain and Language, 26, 332–348.
Belin, P., Zatorre, R. J., Lafaille, P., Ahad, P., & Pike, B. (2000). Voice selective areas in human auditory cortex. Nature, 403, 309–312.
Blonder, L. X., Bowers, D., & Heilman, K. (1991). The role of the right hemisphere in emotional communication. Brain, 114, 1115–1127.
Blood, A. J., Zatorre, R. J., Bermudez, P., & Evans, A. C. (1999). Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nature Neuroscience, 2, 382–387.
Borod, J. (1992). Interhemispheric and intrahemispheric control of emotion: A focus on unilateral brain damage. Journal of Consulting and Clinical Psychology, 60, 339–348.
Borod, J. C., Obler, L. K., Erhan, H. M., Grunwald, I. S., Cicero, B. A., Welkowitz, J., et al. (1998). Right hemisphere emotional perception: Evidence across multiple channels. Neuropsychology, 12, 446–458.
Bowers, D., Bauer, R. M., & Heilman, K. M. (1993). The nonverbal affect lexicon: Theoretical perspectives from neuropsychological studies of affect perception. Neuropsychology, 7, 433–444.
Bowers, D., Coslett, H. B., Bauer, R. M., Speedie, L. J., & Heilman, K. H. (1987). Comprehension of emotional prosody following unilateral hemispheric lesions: Processing defect versus distraction defect. Neuropsychologia, 25, 317–328.
Breitenstein, C., Daum, I., & Ackermann, H. (1998). Emotional processing following cortical and subcortical brain damage: Contribution of the fronto-striatal circuitry. Behavioral Neurology, 11, 29–42.
Broks, P., Young, A. W., Maratos, E. J., Coffey, P. J., Calder, A. J., Isaac, C., et al. (1998). Face processing impairments after encephalitis: Amygdala damage and recognition of fear. Neuropsychologia, 36, 59–70.
Brownell, H. H., Michel, D., Powelson, J. A., & Gardner, H. (1983). Surprise but not coherence: Sensitivity to verbal humor in right hemisphere patients. Brain and Language, 18, 20–27.
Bryden, M. P. (1982). Laterality. New York: Academic Press.
Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., et al. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: An fMRI study. European Journal of Neuroscience, 13, 400–404.
Buchanan, T. W., Lutz, K., Mirzazade, S., Specht, K., Shah, N. J., Zilles, K., & Jancke, L. (2000). Recognition of emotional prosody and verbal components of spoken language: An fMRI study. Cognitive Brain Research, 9, 227–238.
Burt, D. M., & Perrett, D. I. (1997). Perceptual asymmetries in judgments of facial attractiveness, age, gender, speech and expression. Neuropsychologia, 35, 685–693.
Cancelliere, A. E. B., & Kertesz, A. (1990). Lesion localization in acquired deficits of emotional expression and comprehension. Brain and Cognition, 13, 133–147.
Canli, T. (1999). Hemispheric asymmetry in the experience of emotion. The Neuroscientist, 5, 201–207.
Carruthers, P., & Smith, P. K. (1996). Theories of theories of mind. Cambridge, England: Cambridge University Press.
Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Grosset/Putnam.
Damasio, A. R. (1996). The somatic marker hypothesis and the possible functions of the prefrontal cortex. Philosophical Transactions of the Royal Society of London, Series B, 351, 1413–1420.
Damasio, H., & Frank, R. (1992). Three-dimensional in vivo mapping of brain lesions in humans. Archives of Neurology, 49, 137–143.
Damasio, H., Tranel, D., Adolphs, R., Grabowski, T., & Damasio, A. (in press). Uncovering neural systems behind word and concept retrieval. Cognition.
Darby, D. G. (1993). Sensory aprosodia: A clinical clue to lesions of the inferior division of the right middle cerebral artery. Neurology, 43, 567–572.
Davidson, R. J. (1992). Anterior cerebral asymmetry and the nature of emotion. Brain and Cognition, 6, 245–268.
Davidson, R. J. (1993). Cerebral asymmetry and emotion: Conceptual and methodological conundrums. Cognition and Emotion, 7, 115–138.
Davidson, R. J., & Hugdahl, K. (1995). Brain asymmetry. Cambridge, MA: MIT Press.
Dimberg, U. (1982). Facial reactions to facial expressions. Psychophysiology, 19, 643–647.
Dimberg, U., Thunberg, M., & Elmehed, K. (2000). Unconscious facial reactions to emotional facial expressions. Psychological Science, 11, 86–89.
Ekman, P. (Ed.). (1973). Darwin and facial expression: A century of research in review. New York: Academic Press.
Ekman, P., & Davidson, R. J. (1993). Voluntary smiling changes regional brain activity. Psychological Science, 4, 342–345.
Ekman, P., & Friesen, W. V. (1975). Unmasking the face. Englewood Cliffs, NJ: Prentice Hall.
Frank, R. J., Damasio, H., & Grabowski, T. J. (1997). Brainvox: An interactive, multi-modal visualization and analysis system for neuroanatomical imaging. NeuroImage, 5, 13–30.
Frey, S., Kostopoulos, P., & Petrides, M. (2000). Orbitofrontal involvement in the processing of unpleasant auditory information. European Journal of Neuroscience, 12, 3709–3712.
Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition in the premotor cortex. Brain, 119, 593–609.
Gallese, V., & Goldman, A. (1999). Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences, 2, 493–500.
George, M. S., Parekh, P. I., Rosinsky, N., Ketter, T., Kimbrell, T. A., Heilman, K. H., et al. (1996). Understanding emotional prosody activates right hemisphere regions. Archives of Neurology, 53, 665–670.
Goldman, A. (1992). In defense of the simulation theory. Mind and Language, 7, 104–119.
Greene, R. (1980). The MMPI: An interpretive manual. New York: Grune & Stratton.
Hari, R., Forss, N., Avikainen, S., Kirveskari, E., Salenius, S., & Rizzolatti, G. (1998). Activation of human primary motor cortex during action observation: A neuromagnetic study. Proceedings of the National Academy of Sciences, USA, 95, 15061–15065.
Heilman, K. M., Bowers, D., Speedie, L., & Coslett, H. B. (1984). Comprehension of affective and nonaffective prosody. Neurology, 34, 917–921.
Heilman, K. M., Scholes, R., & Watson, R. T. (1975). Auditory affective agnosia: Disturbed comprehension of affective speech. Journal of Neurology, Neurosurgery, and Psychiatry, 38, 69–72.
Hess, U., & Blairy, S. (2001). Facial mimicry and emotional contagion to dynamic emotional facial expressions and their influence on decoding accuracy. International Journal of Psychophysiology, 40, 129–141.
Hornak, J., Rolls, E. T., & Wade, D. (1996). Face and voice expression identification in patients with emotional and behavioral changes following ventral frontal lobe damage. Neuropsychologia, 34, 247–261.
Iacoboni, M., Woods, R. P., Brass, M., Bekkering, H., Mazziotta, J. C., & Rizzolatti, G. (1999, December). Cortical mechanisms of human imitation. Science, 286, 2526–2528.
Imaizumi, S., Mori, K., Kiritani, S., Kawashima, R., Sugiura, M., Fukuda, H., et al. (1997). Vocal identification of speaker and emotion activates different brain regions. NeuroReport, 8, 2809–2812.
Jaencke, L. (1994). An EMG investigation of the coactivation of facial muscles during the presentation of affect-laden stimuli. Journal of Psychophysiology, 8, 1–10.
Jansari, A., Tranel, D., & Adolphs, R. (2000). A valence-specific lateral bias for discriminating emotional facial expressions in free field. Cognition and Emotion, 14, 341–353.
Kaplan, J. A., Brownell, H. H., Jacobs, J. R., & Gardner, H. (1990). The effects of right hemisphere damage on the pragmatic interpretation of conversational remarks. Brain and Language, 38, 315–333.
Levenson, R. W., Ekman, P., & Friesen, W. V. (1990). Voluntary facial action generates emotion-specific autonomic nervous system activity. Psychophysiology, 27, 363–384.
Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M. (1967). The perception of the speech code. Psychological Review, 74, 431–461.
Lieberman, P., & Michaels, S. B. (1962). Some aspects of fundamental frequency and envelope amplitude as related to the emotional content of speech. Journal of the Acoustical Society of America, 34, 922–927.
Lipps, T. (1907). Psychologische Untersuchungen [Psychological investigations]. Leipzig, Germany: Engelman.
Matsumoto, D., & Kishimoto, H. (1983). Developmental characteristics in judgments of emotion from nonverbal vocal cues. International Journal of Intercultural Relations, 7, 415–424.
Meltzoff, A. N., & Moore, M. K. (1977, October). Imitation of facial and manual gestures by human neonates. Science, 198, 74–78.
Morecraft, R. J., Geula, C., & Mesulam, M. M. (1992). Cytoarchitecture and neural afferents of orbitofrontal cortex in the brain of the monkey. Journal of Comparative Neurology, 323, 341–358.
Morris, J. S., Scott, S. K., & Dolan, R. J. (1999). Saying it with feeling: Neural responses to emotional vocalizations. Neuropsychologia, 37, 1155–1163.
Morrow, L., Vrtunski, P. B., Kim, Y., & Boller, F. (1981). Arousal responses to emotional stimuli and laterality of lesion. Neuropsychologia, 19, 65–71.
Öngür, D., An, X., & Price, J. L. (1998). Prefrontal cortical projections to the hypothalamus in macaque monkeys. Journal of Comparative Neurology, 401, 480–505.
Pell, M. D. (1998). Recognition of prosody following unilateral brain lesion: Influence of functional and structural attributes of prosodic contours. Neuropsychologia, 36, 701–715.
Pell, M. D., & Baum, S. R. (1997). Unilateral brain damage, prosodic comprehension deficits, and the acoustic cues to prosody. Brain and Language, 57, 195–214.
Peper, M., & Irle, E. (1997a). Categorical and dimensional decoding of emotional intonations in patients with focal brain lesions. Brain and Language, 58, 233–264.
Peper, M., & Irle, E. (1997b). The decoding of emotional concepts in patients with focal cerebral lesions. Brain and Cognition, 34, 360–387.
Phillips, M. L., Young, A. W., Scott, S. K., Calder, A. J., Andrew, C., Giampietro, V., et al. (1998). Neural responses to facial and vocal expressions of fear and disgust. Proceedings of the Royal Society of London, Series B, 265, 1809–1817.
Pihan, H. (1997). The cortical processing of perceived emotion: A DC-potential study on affective speech prosody. NeuroReport, 8, 623–627.
Rama, P., Martinkauppi, S., Linnankoski, I., Koivisto, J., Aronen, H. J., & Carlson, S. (2001). Working memory of identification of emotional vocal expressions: An fMRI study. NeuroImage, 13, 1090–1101.
Reuter-Lorentz, P., & Davidson, R. J. (1981). Differential contributions of the two cerebral hemispheres to the perception of happy and sad faces. Neuropsychologia, 19, 609–613.
Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3, 131–141.
Ross, E. D. (1981). The aprosodias: Functional–anatomic organization of the affective components of language in the right hemisphere. Archives of Neurology, 38, 561–569.
Ross, E. D. (1985). Modulation of affect and nonverbal communication by the right hemisphere. In M.-M. Mesulam (Ed.), Principles of behavioral neurology (pp. 239–258). Philadelphia: F. A. Davis.
Ross, E. D., Stark, R. D., & Yenkosky, J. P. (1997). Lateralization of affective prosody in brain and the callosal integration of hemispheric language functions. Brain and Language, 56, 27–54.
Ryalls, J. (1988). Concerning right-hemisphere dominance for affective language. Archives of Neurology, 45, 337–338.
Scherer, K. R. (1981). Speech and emotional states. In J. K. Darby (Ed.), Speech evaluation in psychiatry (pp. 198–220). New York: Grune & Stratton.
Scherer, K. R. (1986). Vocal affect expression: A review and a model for future research. Psychological Bulletin, 99, 143–165.
Scherer, K. R. (1995). Expression of emotion in voice and music. Journal of Voice, 9, 235–248.
Scherer, K. R., Banse, R., & Wallbott, H. G. (2001). Emotion inferences from vocal expression correlate across languages and cultures. Journal of Cross-Cultural Psychology, 32, 76–92.
Scherer, K. R., Banse, R., Wallbott, H. G., & Goldbeck, T. (1991). Vocal cues in emotion encoding and decoding. Motivation and Emotion, 15, 123–148.
Scherer, K. R., & Oshinsky, J. (1977). Cue utilization in emotion attribution from auditory stimuli. Motivation and Emotion, 1, 331–346.
Schmitt, J. J., Hartje, W., & Willmes, K. (1997). Hemispheric asymmetry in the recognition of emotional attitude conveyed by facial expression, prosody and propositional speech. Cortex, 33, 65–81.
Schneider, F., Gur, R. C., Gur, R. E., & Muenz, L. R. (1994). Standardized mood induction with happy and sad facial expressions. Psychiatry Research, 51, 19–31.
Scott, S. K., Young, A. W., Calder, A. J., Hellawell, D. J., Aggleton, J. P., & Johnson, M. (1997). Impaired auditory recognition of fear and anger following bilateral amygdala lesions. Nature, 385, 254–257.
Shammi, P., & Stuss, D. T. (1999). Humour appreciation: A role of the right frontal lobe. Brain, 122, 657–666.
Starkstein, S. E., Federoff, J. P., Price, T. R., Leiguarda, R. C., & Robinson, R. G. (1994). Neuropsychological and neuroradiologic correlates of emotional prosody comprehension. Neurology, 44, 515–522.
Strafella, A. P., & Paus, T. (2000). Modulation of cortical excitability during action observation: A transcranial magnetic stimulation study. Experimental Brain Research, 11, 2289–2292.
Tranel, D. (1996). The Iowa–Benton school of neuropsychological assessment. In I. Grant & K. M. Adams (Eds.), Neuropsychological assessment of neuropsychiatric disorders (pp. 81–101). New York: Oxford University Press.
Tranel, D., Adolphs, R., Damasio, H., & Damasio, A. R. (2001). A neural basis for the retrieval of words for actions. Cognitive Neuropsychology, 18, 655–670.
Tranel, D., & Damasio, H. (1994). Neuroanatomical correlates of electrodermal skin conductance responses. Psychophysiology, 31, 427–438.
van Bezooijen, R., Otto, S. A., & Heenan, T. A. (1983). Recognition of vocal expressions of emotion. Journal of Cross-Cultural Psychology, 14, 387–406.
van Lancker, D., & Sidtis, J. J. (1992). The identification of affective–prosodic stimuli by left- and right-hemisphere-damaged subjects: All errors are not created equal. Journal of Speech and Hearing Research, 35, 963–970.
Wapner, W., Hamby, S., & Gardner, H. (1981). The role of the right hemisphere in the apprehension of complex linguistic materials. Brain and Language, 14, 15–33.
Wechsler, D. A. (1981). The Wechsler Adult Intelligence Scale–Revised. New York: Psychological Corporation.
Wild, B., Erb, M., & Bartels, M. (2001). Are emotions contagious? Evoked emotions while viewing emotionally expressive faces: Quality, quantity, time course and gender differences. Psychiatry Research, 102, 109–124.
Young, A. W., Hellawell, D. J., Van de Wal, C., & Johnson, M. (1996). Facial expression processing after amygdalotomy. Neuropsychologia, 34, 31–39.
Zatorre, R. J., Evans, A. C., & Meyer, E. (1994). Neural mechanisms underlying melodic perception and memory for pitch. The Journal of Neuroscience, 14, 1908–1919.
Zoccolotti, P., Scabini, D., & Violani, C. (1982). Electrodermal responses in patients with unilateral brain damage. Journal of Clinical Neuropsychology, 4, 143–150.

Received November 20, 2000
Revision received December 2, 2001
Accepted December 4, 2001
