
Phonetics of Sign Language


Martha Tyrone
Subject: Phonetics/Phonology, Sign Languages, Biology of Language
Online Publication Date: Jul 2020 DOI: 10.1093/acrefore/9780199384655.013.744

Summary and Keywords

Sign phonetics is the study of how sign languages are produced and perceived, by native as well as by non-native signers. Most research on sign phonetics has focused on American Sign Language (ASL), but there are many different sign languages around the world, and several of these, including British Sign Language, Taiwan Sign Language, and Sign Language of the Netherlands, have been studied at the level of phonetics. Sign phonetics research can focus on individual lexical signs or on the movements of the nonmanual articulators that accompany those signs. The production and perception of a sign language can be influenced by phrase structure, linguistic register, the signer’s linguistic background, the visual perception mechanism, the anatomy and physiology of the hands and arms, and many other factors. What sets sign phonetics apart from the phonetics of spoken languages is that the two language modalities use different mechanisms of production and perception, which could in turn result in structural differences between modalities. Most studies of sign phonetics have been based on careful analyses of video data. Some studies have collected kinematic limb movement data during signing and carried out quantitative analyses of sign production related to, for example, signing rate, phonetic environment, or phrase position. Similarly, studies of sign perception have recorded participants’ ability to identify and discriminate signs, depending, for example, on slight variations in the signs’ forms or differences in the participants’ language background. Most sign phonetics research is quantitative and lab-based.

Keywords: signed language, kinematics, sign production, sign perception, language modality, ASL

Introduction
Sign phonetics is often discussed and analyzed with reference to sign phonology, both by sign language researchers and by others. As is the case with speech research, phonetics and phonology are related and interact with each other, but sign phonetics and sign phonology have different areas of emphasis. Sign phonetics is related to the physical forms of signs and their accompanying non-manual movements, whereas sign phonology focuses on the features of signs that can create phonemic contrasts. Like spoken languages, signed languages are made up of sublexical elements that can be combined in different ways to form lexical items. Stokoe identified three phonological parameters that can differentiate signs in American Sign Language (ASL): handshape, movement, and location (Stokoe, 1960). Handshape describes the configuration of the hands as a sign is produced. Movement describes how the hands and arms move during a sign. Location describes where the hands are located during production of a sign. Battison (1978) later added hand orientation—the direction that the palm faces during a sign—to this list. Liddell and Johnson (1989) analyzed the structure of continuous signing and developed a movement-hold model to parse sign boundaries. The physical form and perception of all of these features have been analyzed by sign phoneticians.

PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, LINGUISTICS (oxfordre.com/linguistics). (c) Oxford University Press USA, 2020. All Rights Reserved. Personal use only; commercial use is strictly prohibited (for details see Privacy Policy and Legal Notice).

Subscriber: OUP-Reference Gratis Access; date: 16 August 2020

In the sign modality as well as the speech modality, phonetics deals with research questions along the lines of:

• the relationship between the anatomy and physiology of the production system and
the physical forms of lexical items;
• the effects of phonetic context, prominence, phrase position, and production rate on
the realization of lexical items;
• variation in phonetic form as an effect of language background, gender, and age;
• language users’ perception of phonemic and subphonemic contrasts;
• the relationship between perception and production within and across individuals.

A defining feature of signed language is that it uses the hands and arms, rather than the
vocal tract, as its primary articulators. While the implications of this structural difference
between sign and speech have been discussed at length (Meier, 2002; Sandler, 1993),
there is more work to do in identifying the full range of differences between sign and
speech and in clarifying which of those are primarily the effect of the set of articulators
that the two language modalities use, rather than the effect of perception modality or of
abstract linguistic structure.

While the hands and arms are the primary articulators for signed language, movements of the head, mouth, torso, eyebrows, eyes, and other secondary articulators show rule-governed patterns during signing, which relate to syntax and prosody. These secondary articulators and their movements are referred to as nonmanuals (Herrmann & Steinbach, 2013; Sutton-Spence & Boyes Braem, 2001). The nearest analogue to sign nonmanuals in the speech modality is probably co-speech gesture, which is increasingly being analyzed as it relates to prosody (Esteve-Gibert & Prieto, 2013; Krivokapic, Tiede, & Tyrone, 2017; Loehr, 2007). Only a small number of studies have looked at the phonetics of nonmanuals (Udoff, 2014; Weast, 2008), or at the compensatory movements of the head and torso which facilitate contact with the hand during signing (Mauk & Tyrone, 2012; Tyrone & Mauk, 2016).

Another structural characteristic that is distinct to sign languages is the existence of fingerspelling systems. Fingerspelling is a mechanism that allows sign languages to borrow vocabulary from the written form of a spoken language. During fingerspelling, a signer produces a sequence of hand configurations or hand configurations plus locations which represent a sequence of written characters. Some sign languages use fingerspelling more than others, and not all fingerspelling systems take the same physical form. For example, even though ASL and British Sign Language (BSL) use fingerspelling to borrow words from written English, ASL fingerspelling is one-handed, while BSL fingerspelling is two-handed. Fingerspelling is interesting from the standpoint of phonetics, because fingerspelling movements are smaller and more rapid than the movements in lexical signs, and thus more prone to coarticulation.

Methods for Analyzing Sign Phonetics


The earliest sign phonetics research relied on citation forms of signs or on production data recorded with standard video. Since the 1990s, many studies have used instrumented measures of sign production, such as motion capture or glove-based systems. These allow considerably more measurement precision and experimental reliability. At the same time, the equipment is usually expensive and requires substantial technical support. In addition, motion capture systems are generally not portable, so they can only be used in a laboratory setting. This is likely one reason that only a few sign languages have been studied closely at a phonetic level. Also, when sign production data is collected in a lab context, there may be limitations to the naturalness of the signing represented. There has been a recent return to video-based analyses of sign phonetics, because video can be used in a broader range of settings, and because there has been an increase in video sampling rate and an improvement in measures of movement speed and displacement, based on quantitative voxel analyses (Karppa, Jantunen, Koskela, Laaksonen & Viitaniemi, 2011; Tkachman, Hall, Fuhrman & Aonuki, 2019).
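Kinematic measures of the kind described above reduce to simple geometry over tracked position data. As a rough illustration, the sketch below computes straight-line displacement, total path length, and peak speed from a sequence of video-tracked hand positions; the function name, the 60 fps frame rate, and the millimeter coordinates are illustrative assumptions, not details from any study cited here.

```python
# Hypothetical sketch: deriving kinematic measures from frame-by-frame
# hand positions (e.g., video- or motion-capture-tracked x/y coordinates).
# All names and values are illustrative assumptions.

def kinematics(positions, fps):
    """positions: list of (x, y) in mm, one per frame; fps: frames per second."""
    dt = 1.0 / fps
    speeds = []
    path_length = 0.0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        step = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        path_length += step              # total distance travelled (mm)
        speeds.append(step / dt)         # instantaneous speed (mm/s)
    x0, y0 = positions[0]
    x1, y1 = positions[-1]
    displacement = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return {"displacement": displacement,  # straight-line start-to-end (mm)
            "path_length": path_length,
            "peak_speed": max(speeds)}

# A hand moving 10 mm per frame along x, sampled at 60 fps:
track = [(10.0 * i, 0.0) for i in range(7)]
result = kinematics(track, fps=60)
# displacement and path length are 60 mm; peak speed is ~600 mm/s
```

Higher video sampling rates improve such estimates directly: the finer the frame spacing, the less the finite differences smear out rapid movements.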

A related area of research that has included instrumented measures of sign production is the interface between sign language and human-computer interaction, with a focus either on automated sign recognition and synthesis or on the improvement of communication technologies for the deaf (Lu & Huenerfauth, 2009; Muir & Richardson, 2005; Vogler & Metaxas, 2004). While these studies have collected very detailed phonetic data, their broader objective is to identify the potential for meaningful contrasts between signs, rather than to examine phonetic variation.

Another methodological approach is a qualitative analysis of the distribution of phonetic forms in the lexicon as an effect of articulatory or perceptual constraints. For example, Siple (1978) performed qualitative analyses of the distribution of different hand configurations in ASL in relation to signs’ location. She observed that during signing, interlocutors fixate on the signer’s face rather than tracking the movements of the signer’s hands. Siple suggested that this was the reason why hand configurations are more varied when a sign is produced on or near the face, given that those signs have greater visual salience for the perceiver. Ann (2005) and Eccarius (2008) carried out similar analyses of Taiwan Sign Language and American Sign Language, respectively. Both studies concluded that the frequency of specific handshapes in those languages was largely determined by their ease of articulation.


More recently, Sanders and Napoli (2016) carried out a cross-linguistic study of the distribution of certain sign movements in Italian Sign Language (LIS), Al-Sayyid Bedouin Sign Language (ABSL), and Sri Lankan Sign Language (SLSL). They examined the citation forms of signs with simultaneous or alternating bimanual movements. Their findings suggest that the movements that most often occur in these languages are the ones that do not biomechanically destabilize the torso’s position. This adds to the literature considerably, because most sign phonetics research has investigated either hand location or handshape, rather than hand movement.

In summary, the field of sign phonetics has benefitted from a range of different methodological approaches; however, these methods have not been applied consistently across different sign languages. Moreover, the different methods have their own advantages and disadvantages, in terms of measurement precision, portability, and ease of use. These factors must be considered in comparing one sign phonetics study to another.

Relationship Between Sign Structure and Anatomy/Physiology

Important early research in sign phonetics examined the anatomy of the human hand and forearm in order to determine the inherent constraints on the formational structure of signs. Mandel (1979, 1981) analyzed the handshapes used in ASL in light of the configuration and function of the tendons in the forearms. His analysis suggests that the selected finger in a sign’s hand configuration is unlikely to be only the middle finger or the ring finger, because it is difficult to control the movements of these fingers given that they share an extensor tendon with other fingers.

Similarly, Ann (1996) analyzed the anatomy of the hands, outlining the configuration of the bones, muscles, tendons, and ligaments that allow or prevent the fingers’ movement. Based on this analysis, she developed a scoring system for which sign language handshapes should be easiest or most difficult to produce. She then compared her calculated scores to the distribution of those handshapes in ASL and in Taiwan Sign Language. In both languages, the handshapes that she identified as more difficult to produce occurred less frequently in the lexicon.

Later research pursued the question of which joints are flexed/extended to produce a sign. In one study of articulatory differences related to language experience, Mirus, Rathmann, and Meier (2001) examined non-signers’ imitations of signs produced in isolation by a native signer, which were viewed on a video monitor. The researchers coded the data descriptively from video and found that non-signers tended to initiate movement from more proximal joints than the native signers who had served as models. Similarly, Brentari, Poizner, and Kegl (1995) found that signers with Parkinson’s disease used more distal joints to produce ASL signs, compared to the citation forms of those signs. More recently, Eccarius, Bour, and Scheidt (2012) developed a detailed procedure for calibrating data gloves in order to convert raw sensor data into useful joint angle information that could be interpreted and analyzed across productions and across signers. This method has enormous potential in allowing researchers to assess the reliability and generalizability of their findings related to the phonetics of handshape in ASL and in other sign languages.
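The general idea behind data glove calibration can be illustrated with a deliberately simplified two-pose scheme: record each bend sensor’s raw reading in a flat-hand pose and in a fist pose, then fit a per-sensor linear map from raw readings to joint angles. This is a hypothetical sketch of that idea only; the actual procedure of Eccarius, Bour, and Scheidt (2012) is considerably more involved.

```python
# Hypothetical two-pose linear calibration for a bend-sensor data glove.
# Raw readings in a flat-hand pose (0 degrees) and a fist pose (90 degrees)
# define a per-sensor linear map to joint angles. All values are invented.

def make_calibration(raw_flat, raw_fist, angle_flat=0.0, angle_fist=90.0):
    """Return a function mapping a raw sensor reading to a joint angle (deg)."""
    gain = (angle_fist - angle_flat) / (raw_fist - raw_flat)
    def to_angle(raw):
        return angle_flat + gain * (raw - raw_flat)
    return to_angle

# Calibrate one hypothetical sensor (e.g., the index-finger knuckle):
index_mcp = make_calibration(raw_flat=120, raw_fist=840)
index_mcp(480)  # a reading halfway between the two poses -> 45.0 degrees
```

Because each signer’s hand produces different raw readings for the same joint angle, a per-signer calibration of this kind is what makes joint-angle data comparable across productions and across signers.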

Many of the earliest studies to use instrumented data collection to analyze sign production investigated differences between typical signing and signing that was disrupted by neurological disorders such as stroke or Parkinson’s disease. Poizner, Klima, and Bellugi (1987) were among the first researchers to use motion capture to analyze production of ASL. They carried out a series of studies comparing the productions of ASL signers who had aphasia, apraxia, or right hemisphere damage as a result of stroke. The goal of their research was to determine whether aphasia would take a different articulatory form from deficits in the production of meaningful gestures (apraxia) or from visuospatial deficits caused by right hemisphere damage. Poizner, Bellugi, and Klima (1990) extended this line of research to include signers with Parkinson’s disease, a disorder that is primarily motoric rather than linguistic in nature. Similarly, Brentari et al. (1995) carried out an Optotrak study to compare an ASL signer with Parkinson’s disease to an ASL signer with aphasia. Using motion capture data, they were able to show that the signer with Parkinson’s disease preserved linguistic contrasts in production but showed a deficit in the coordination of handshape and movement. By contrast, the errors produced by the signer with aphasia were linguistic rather than motoric in nature. These findings illustrate that, like speech, sign language can break down at the level of motor control or sensory feedback, as well as at the level of linguistic function. Moreover, the nature of the breakdown is consistent with what would be predicted from analogous speech disorders (cf. Tyrone, 2014).

Some of the early sign language research on ASL, such as Stokoe (1960) and Klima and Bellugi (1979), strongly emphasized the similarities between sign and speech. This was an important first step in establishing sign language as a legitimate research topic in the field of linguistics. After that initial parallel was drawn between sign and speech, sign phoneticians were then able to explore the relationships between sign structure, limb biomechanics, motor control, and cognitive function.

Prominence, Coarticulation, and Undershoot/Reduction
Several studies in sign phonetics have examined the effects of linguistic prominence, phrase position, and phonetic environment on the realization of signs. Wilbur and Schick (1987) collected ASL data from video and analyzed the form of contrastive and emphatic stress in semi-spontaneous signing. Their results suggested that stress was realized primarily through the lengthening of sign duration and the raising of signs in the signing space. Wilbur (1990) used a WATSMART system to examine the realization of stress in ASL. (WATSMART was one of the earliest motion capture systems—its name is an acronym for Waterloo Spatial Motion Analysis and Recording Technique.) Using this technique, she showed that native signers modified the duration of the movement transition prior to a stressed sign, whereas non-native signers increased the hand’s displacement during the stressed sign. This distinction between native and non-native stress patterns would likely not be identifiable from descriptive analyses of video.

Multiple studies have investigated coarticulation and other effects of phonetic context in signing and fingerspelling. Some of these studies have directly compared sign production to speech or to sequences of non-linguistic limb movements, to tease apart the effects of linguistic structure and of articulator size and physiology. Cheek (2001) examined coarticulation of handshape in the production of ASL signs with the index finger extended (1-handshapes) or with all the fingers extended (5-handshapes). Target signs with each of those handshapes were embedded in carrier phrases so that they were preceded and followed by the other handshape. She found variation in handshape that was rate-dependent and consistent with models of coarticulation. In addition, she found both anticipatory and perseverative coarticulatory effects.

Grosvald and Corina (2012) used a motion capture system to examine linguistic and non-linguistic coarticulation in ASL sign production, which they compared to coarticulation in acoustic speech data in English. They examined not only the effects of adjacent signs on the realization of location, but also the effects of signs that precede or follow the target sign at a distance of up to three intervening signs. The researchers collected productions of schwa vowels in English, which were embedded in carrier phrases, such that the target vowel was at varying distances from the vowel /i/ or /a/. From these data, they measured F1 and F2 for the target vowels to look for coarticulatory effects. Similarly, for the signing data, they used a motion capture system to collect productions of ASL signs located in the neutral space in front of the body. Those target signs were embedded in carrier phrases which contained a sign located at the forehead or at the waist. The target neutral space sign was placed at varying distances from the high or low sign, and the hand’s vertical position was measured. To assess non-linguistic coarticulation, the researchers cued signing participants to flip a switch that was either above or below the middle of the signing space during a signing task.

Grosvald and Corina found that coarticulatory effects were weaker in the sign modality than in speech—in particular, distant speech segments had a stronger influence on vowel formants than distant signs did on sign location. In addition, linguistic coarticulation in sign patterned more like non-linguistic movement coarticulation than like coarticulation in speech.

Ormel, Crasborn, and van der Kooij (2013) used a data glove in conjunction with motion capture to investigate coarticulatory effects of hand height in Sign Language of the Netherlands (NGT). They found that hand height was influenced by the location of the preceding and following signs, but that certain sign locations and certain forms of contact were more subject to coarticulation than others. This research extends the earlier findings by Grosvald and Corina (2012) and by Tyrone and Mauk (2010) by considering sign locations that required contact with another articulator (such as the non-dominant hand).


Wilcox (1992) examined the production of ASL fingerspelling, using motion capture data from the WATSMART system. His findings demonstrated that there was a large amount of coarticulation in fingerspelling and that features from an individual letter in a fingerspelling sequence would carry over into subsequent letters in the sequence. Moreover, he proposed that the transitions between letters were important for comprehension of fingerspelling in ASL. More recently, Keane (2014) carried out a study on handshape coarticulation in ASL fingerspelling. He extended the earlier research by Wilcox and others by collecting a large data sample using video and motion capture, and also by analyzing coarticulation for what it reveals about the phonetics-phonology interface. Keane’s findings are consistent with the early research on anatomy and physiology, which suggested that extension of the pinky finger is easily realized because of the configuration of the bones and tendons of the hand. He built upon the initial studies by Mandel and Ann, given that he was analyzing production data rather than citation forms of signs.

Mauk (2003) used a Vicon motion capture system to examine articulatory undershoot of handshape and location in ASL (that is, when the articulators do not move far enough to achieve handshape or location targets). He found that undershoot occurred in both of these parameters as an effect of signing rate and phonetic environment. Similarly, Tyrone and Mauk (2012) collected a larger data sample in order to investigate phonetic reduction in the realization of location in ASL. Like the earlier study by Mauk (2003), they found that phonetic reduction in ASL occurred as an effect of factors that would be predicted from speech research. Their main result was that ASL signs with locations that are high in the signing space tended to be lowered at faster signing rates and when they were preceded or followed by a sign that was low in the signing space.

While Tyrone and Mauk (2012) examined the lowering of forehead-located signs in ASL in lab settings, sociolinguists of ASL examined the same phenomenon in more naturalistic contexts (Lucas, Bayley, Rose, & Wulf, 2002). Perhaps because of the difference in methodology, Lucas et al. (2002) found limited effects of phonetic environment, while Tyrone and Mauk found strong effects of phonetic environment. Russell, Wilkinson, and Janzen (2011) set out to investigate the seeming contradiction between findings from sociolinguistic field work and experimental phonetic research on the lowering of signs that are high in the signing space. In order to do so, they examined a set of videotaped corpus data and looked at productions of signs located at the forehead, head, or neck in order to determine whether these signs were lowered and to what extent. In addition, they investigated whether lowering occurred in a manner that was gradient or categorical.

Russell et al. used a more controlled data collection procedure than earlier sociolinguistic studies, but unlike the phonetic studies, their data still consisted of relatively naturalistic conversation. They used standard video to record the conversations but worked to maximize the consistency and precision of the measurements. For instance, they implemented a procedure for correcting the data for the position of the head and a procedure for normalizing the data to allow for comparisons across signers. Like the earlier field studies, Russell et al. found that the extent of lowering differed according to the grammatical category of the sign that was lowered. In addition, they found that the signs that occurred more often in the language, by their estimation, were lowered more often and to a greater extent.

This body of research indicates that sign languages exhibit phonetic effects of context and prominence, much as spoken languages do. Indeed, a few of these studies made direct comparisons between context effects in one modality as opposed to the other. Future research could probe this parallel further to try to explain the subtle differences in how context effects are manifested across modalities.

Sign Perception
Multiple studies of sign perception have investigated signers’ and non-signers’ perception of the phonological parameters identified by Stokoe (1960). Most of these studies investigated the possibility of categorical perception. Categorical perception is a phenomenon that was documented for some speech sounds, and it refers to the idea that individuals can only discriminate stimuli as well as they can identify or label them (Liberman, Harris, Hoffman, & Griffith, 1957). This was first identified for perception of F2 transitions and place of articulation in stop consonants. Listeners were incapable of discriminating different tokens that were within the same identification category, as determined by participants’ performance on an identification task. Moreover, listeners showed their best discrimination accuracy when two stimuli were on opposite sides of the identification boundary. Not all phonemic contrasts are perceived categorically in spoken language, but the possibility has been examined for multiple speech sound contrasts as well as multiple languages.
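The categorical-perception logic described above (discrimination accuracy should peak at the pair of continuum steps that straddles the identification boundary) can be sketched as a small analysis. The data values and helper names below are invented for illustration and do not come from any cited study.

```python
# Hypothetical sketch of the categorical-perception diagnostic:
# find the identification boundary, find the discrimination peak,
# and check whether the peak straddles the boundary.

def boundary_step(ident):
    """First continuum step where identification of category A drops below 50%."""
    for step, p_a in enumerate(ident):
        if p_a < 0.5:
            return step
    return None

def discrimination_peak(disc):
    """Index of the adjacent step pair (i, i+1) with highest accuracy."""
    return max(range(len(disc)), key=lambda i: disc[i])

# Invented data: proportion of "category A" responses at 7 continuum steps:
ident = [0.98, 0.95, 0.90, 0.62, 0.20, 0.06, 0.03]
# Invented data: discrimination accuracy for adjacent pairs (0-1, ..., 5-6):
disc = [0.52, 0.55, 0.61, 0.88, 0.60, 0.54]

b = boundary_step(ident)          # boundary falls between steps 3 and 4
peak = discrimination_peak(disc)  # accuracy peaks at the pair (3, 4)
categorical = (peak == b - 1)     # peak straddles the boundary
```

In the sign studies discussed below, it is the location of this discrimination peak relative to the identification boundary, compared across signer and non-signer groups, that serves as the evidence for or against categorical perception.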

Emmorey, McCullough, and Brentari (2003) examined categorical perception for location and handshape in deaf ASL signers and hearing non-signers. The stimuli were computer-generated still images of ASL signs. The two groups performed similarly on the identification task, but only deaf signers showed categorical perception for handshape, based on the position of the discrimination peak. Neither group showed categorical perception for sign location. Baker, Idsardi, Golinkoff, and Petitto (2005) carried out a similar study, examining only handshape, and they found categorical perception for signers but not non-signers, and greater within-category discrimination in non-signers.

Along similar lines, Best, Mathur, Miranda, and Lillo-Martin (2010) compared early and
late deaf ASL signers, late hearing signers, and hearing non-signers on identification and
discrimination of a handshape contrast. The stimuli were dynamic pseudosigns presented
on video. The researchers did not find categorical perception in any group, but language
experience affected participants’ ability to discriminate different handshapes.

Morford, Grieve-Smith, MacFarlane, Staley, and Waters (2008) used computer-generated dynamic sign stimuli and tested handshape contrasts as well as location contrasts. Their participants were deaf native signers, deaf non-native signers, and hearing non-native signers. They found that all groups performed similarly on identification, but only the deaf participants showed statistically significant categorical perception, and the effect was stronger in the native than in the non-native deaf signers.

A different type of perception study by Dye and Shih (2006) examined phonological priming in native and non-native deaf signers of BSL. Participants were asked to carry out a lexical decision task after viewing an image of a sign that was related to the target. The researchers found that phonological priming occurred for native signers when the prime included the same location and movement as the target sign. Handshape had no effect, and neither of the other two parameters had an effect in isolation. Non-native signers completed the lexical decision task quickly and accurately, but they showed no effect of priming. These findings are consistent with earlier research by Hildebrandt and Corina (2002), which suggested that signers and non-signers view signs or pseudosigns as more "similar" when they share a location and a movement than when they share other combinations of structural features.

To date, there has been very little research on the interaction between perception and production in the sign modality. The research thus far suggests that signers do not rely heavily (if at all) on visual feedback of their own signing. Emmorey, Gertsberg, Korpics, and Wright (2009) carried out a study in which they modified visual feedback during signing, so that signers’ view of the signing space was reduced or completely blocked. The researchers found that signers did not alter their signing as an effect of either of these conditions. They concluded that there is no sign equivalent to the Lombard effect, in which hearing people speak louder when they have reduced auditory feedback. In a related study, Emmorey, Bosworth, and Kraljic (2009) found that signers perform poorly at recognizing signs viewed as if the signer were producing them. In addition, they showed that signers were skilled at learning new signs from other sign languages, even when visual feedback was limited or blurred. These findings are perhaps not surprising, given that many signs are produced outside the central visual field of the signer, but spoken words are not usually produced outside the auditory perception of a hearing speaker. In addition, there may be a different role for kinesthetic feedback in the two modalities.

Sign Phonetics, Sign Phonology, and Units of Production/Perception
While spoken language phoneticians have debated whether it is the articulatory gestures or the acoustic correlates of speech that are the units of speech perception and production, no such debate has arisen in sign language research. No one has suggested that the objects of sign perception are inherently hidden or difficult to access. It has been taken as a given that signers directly perceive the articulatory gestures of signed language. Therein lies the challenge for sign phonetics and phonology: what are the underlying units in the sign modality and how do they differ (if at all) from what a human researcher can observe and annotate? The visual nature of the sign modality can make it difficult for us to appreciate the importance of non-contrastive variation in sign production, because it seems that sign gestures should be transparent to any observer. As a result, there has
Page 9 of 14

PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, LINGUISTICS (oxfordre.com/linguistics). (c) Oxford University
Press USA, 2020. All Rights Reserved. Personal use only; commercial use is strictly prohibited (for details see Privacy Policy
and Legal Notice).

Subscriber: OUP-Reference Gratis Access; date: 16 August 2020



been only limited quantitative research in sign phonetics, and not much consideration given to the interface (or distinction) between sign phonetics and sign phonology. (For a recent discussion of these questions, see Brentari, 2019.) The field of sign linguistics would benefit tremendously if sign phonetics and phonology were clearly delineated and could inform each other more explicitly.

Further Reading
Brentari, D. (2019). Sign language phonology. Cambridge: Cambridge University Press.

Tyrone, M. E. (2015). Instrumented measures of sign production and perception: Motion capture, movement analysis, eye tracking, and reaction times. In E. Orfanidou, B. Woll, & G. Morgan (Eds.), Research methods in sign language studies: A practical guide (pp. 89–104). Chichester, UK: Wiley Blackwell.

References
Ann, J. (1996). On the relation between the difficulty and the frequency of occurrence of
handshapes in two sign languages. Lingua, 98, 19–41.

Ann, J. (2005). A functional explanation of Taiwan Sign Language handshape frequency. Language and Linguistics Taipei, 6(2), 217–246.

Baker, S. A., Idsardi, W. J., Golinkoff, R. M., & Petitto, L. A. (2005). The perception of
handshapes in American Sign Language. Memory & Cognition, 33(5), 887–904.

Battison, R. (1978). Lexical borrowing in American Sign Language. Silver Spring, MD:
Linstok Press.

Best, C. T., Mathur, G., Miranda, K. A., & Lillo-Martin, D. (2010). Effects of sign language
experience on categorical perception of dynamic ASL pseudosigns. Attention, Perception,
& Psychophysics, 72(3), 747–762.

Brentari, D., Poizner, H., & Kegl, J. (1995). Aphasic and Parkinsonian signing: Differences
in phonological disruption. Brain and Language, 48, 69–105.

Cheek, D. A. (2001). The phonetics and phonology of handshape in American Sign Language (Unpublished doctoral dissertation). Department of Linguistics, University of Texas at Austin.

Crasborn, O. (2001). Phonetic implementation of phonological categories in Sign Language of the Netherlands. Utrecht, The Netherlands: LOT.

Dye, M. W. G. & Shih, S-I. (2006). Phonological priming in British Sign Language. In L.
Goldstein, D. H. Whalen, & C. T. Best (Eds.), Laboratory Phonology 8 (pp. 241–264).
Boston, MA: De Gruyter Mouton.


Eccarius, P. N. (2008). A constraint-based account of handshape contrast in sign languages (Unpublished doctoral dissertation). Purdue University.

Eccarius, P. N., Bour, R., & Scheidt, R. A. (2012). Dataglove measurement of joint angles
in sign language handshapes. Sign Language & Linguistics, 15(1), 39–72.

Emmorey, K., Bosworth, R., & Kraljic, T. (2009). Visual feedback and self-monitoring of
sign language. Journal of Memory and Language, 61(3), 398–411.

Emmorey, K., Gertsberg, N., Korpics, F., & Wright, C. E. (2009). The influence of visual
feedback and register changes on sign language production: A kinematic study with deaf
signers. Applied Psycholinguistics, 30, 187–203.

Emmorey, K., McCullough, S., & Brentari, D. (2003). Categorical perception in American Sign Language. Language and Cognitive Processes, 18(1), 21–45.

Emmorey, K., Thompson, R., & Colvin, R. (2008). Eye gaze during comprehension of
American Sign Language by native and beginning signers. Journal of Deaf Studies and
Deaf Education, 14(2), 237–243.

Esteve-Gibert, N. & Prieto, P. (2013). Prosodic structure shapes the temporal realization of intonation and manual gesture movements. Journal of Speech, Language, and Hearing Research, 56, 850–864.

Grosvald, M. & Corina, D. P. (2012). Exploring the movement dynamics of manual and
oral articulation: Evidence from coarticulation. Laboratory Phonology, 3(1), 37–60.

Herrmann, A. & Steinbach, M. (2013). Nonmanuals in sign language. Amsterdam/Philadelphia, PA: John Benjamins.

Hildebrandt, U. & Corina, D. (2002). Phonological similarity in American Sign Language. Language and Cognitive Processes, 17(6), 593–612.

Karppa, M., Jantunen, T., Koskela, M., Laaksonen, J., & Viitaniemi, V. (2011). Method for visualisation and analysis of hand and head movements in sign language video. In C. Kirchhof, Z. Malisz, & P. Wagner (Eds.), Proceedings of the 2nd Gesture and Speech in Interaction Conference (GESPIN 2011), Bielefeld, Germany.

Keane, J. (2014). Towards an articulatory model of handshape: What fingerspelling tells us about the phonetics and phonology of handshape in ASL (Unpublished doctoral dissertation). University of Chicago.

Klima, E. & Bellugi, U. (1979). The signs of language. Cambridge, MA: Harvard University
Press.

Krivokapic, J., Tiede, M. K., & Tyrone, M. E. (2017). A kinematic study of prosodic structure in articulatory and manual gestures: Results from a novel method of data collection. Laboratory Phonology, 8(1), 3.


Liberman, A. M., Harris, K. S., Hoffman, H. S., & Griffith, B. C. (1957). The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology, 54, 358–368.

Liddell, S. K. & Johnson, R. E. (1989). American Sign Language: The phonological base.
Sign Language Studies, 64, 195–278.

Loehr, D. P. (2007). Aspects of rhythm in gesture and speech. Gesture, 7, 179–214.

Lu, P. & Huenerfauth, M. (2009). Accessible motion-capture glove calibration protocol for
recording sign language data from Deaf subjects. Proceedings of the 11th International
ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2009), New York,
NY: Association for Computing Machinery.

Lucas, C., Bayley, R., Rose, M., & Wulf, A. (2002). Location variation in American Sign
Language. Sign Language Studies, 2, 407–440.

Mandel, M. (1979). Natural constraints in sign language phonology: Data from anatomy.
Sign Language Studies, 24, 215–229.

Mandel, M. (1981). Phonotactics and morphophonology in American Sign Language (Unpublished doctoral dissertation). University of California at Berkeley.

Mauk, C. E. (2003). Undershoot in two modalities: Evidence from fast speech and fast
signing (Unpublished doctoral dissertation). University of Texas at Austin.

Mauk, C. E. & Tyrone, M. E. (2012). Location in ASL: Insights from phonetic variation.
Sign Language & Linguistics, 15(1), 128–146.

Meier, R. P. (2002). Why different, why the same? Explaining effects and non-effects of
modality upon linguistic structure in sign and speech. In R. P. Meier, K. Cormier, & D.
Quinto-Pozos (Eds.), Modality and structure in signed and spoken language (pp. 1–26).
New York, NY: Cambridge University Press.

Mirus, G., Rathmann, C., & Meier, R. P. (2001). Proximalization and distalization of sign movement in adult learners. In V. Dively, M. Metzger, S. Taub, & A. M. Baer (Eds.), Signed languages: Discoveries from international research (pp. 103–119). Washington, DC: Gallaudet University Press.

Morford, J. P., Grieve-Smith, A. B., MacFarlane, J., Staley, J., & Waters, G. (2008). Effects
of sign language experience on the perception of American Sign Language. Cognition,
109, 41–53.

Muir, L. J. & Richardson, I. E. G. (2005). Perception of sign language and its application to
visual communications for deaf people. Journal of Deaf Studies and Deaf Education, 10(4),
390–401.


Ormel, E., Crasborn, O., & van der Kooij, E. (2013). Coarticulation of hand height in Sign
Language of the Netherlands is affected by contact type. Journal of Phonetics, 41(3–4),
156–171.

Poizner, H., Bellugi, U., & Klima, E. (1990). Biological foundations of language: Clues
from sign language. Annual Review of Neuroscience, 13, 283–307.

Poizner, H., Klima, E., & Bellugi, U. (1987). What the hands reveal about the brain. Cambridge, MA: MIT Press.

Russell, K., Wilkinson, E., & Janzen, T. (2011). ASL sign lowering as undershoot: A corpus study. Laboratory Phonology, 2, 403–422.

Sanders, N. & Napoli, D. J. (2016). Reactive effort as a factor that shapes sign language
lexicons. Language, 92(2), 275–297.

Sandler, W. (1993). A sonority cycle in American Sign Language. Phonology, 10, 243–279.

Siple, P. (1978). Visual constraints for sign language communication. Sign Language Studies, 19, 95–110.

Stokoe, W. C. (1960). Sign language structure: An outline of the visual communication systems of the American Deaf. The Journal of Deaf Studies and Deaf Education, 10(1), 3–37.

Sutton-Spence, R., & Boyes Braem, P. (2001). The hands are the head of the mouth: The
mouth as articulator in sign language. Hamburg, Germany: Signum-Verlag.

Tkachman, O., Hall, K. C., Fuhrman, R., & Aonuki, Y. (2019). Visible amplitude: Towards
quantifying prominence in sign language. Journal of Phonetics, 77, 100935.

Tyrone, M. E. (2014). Sign dysarthria: A speech disorder in signed language. In D. Quinto-Pozos (Ed.), Multilingual aspects of signed language communication and disorder (pp. 162–185). Bristol, UK: Multilingual Matters.

Tyrone, M. E. & Mauk, C. E. (2010). Sign lowering and phonetic reduction in American
Sign Language. Journal of Phonetics, 38, 317–328.

Tyrone, M. E. & Mauk, C. E. (2012). Phonetic reduction and variation in American Sign
Language: A quantitative study of sign lowering. Laboratory Phonology, 3, 431–459.

Tyrone, M. E. & Mauk, C. E. (2016). The phonetics of head and body movement in the realization of American Sign Language signs. Phonetica, 73, 120–140.

Tyrone, M. E., Kegl, J., & Poizner, H. (1999). Interarticulator co-ordination in Deaf signers with Parkinson’s disease. Neuropsychologia, 37(11), 1271–1283.


Tyrone, M. E. & Woll, B. (2008). Sign phonetics and the motor system: Implications from Parkinson’s disease. In J. Quer (Ed.), Signs of the time: Selected papers from TISLR 2004 (pp. 43–68). Seedorf, Germany: Signum.

Udoff, J. A. (2014). Mouthings in American Sign Language: Biomechanical and representational foundations (Unpublished doctoral dissertation). University of California at San Diego.

Vogler, C. & Metaxas, D. (2004). Handshapes and movements: Multiple-channel ASL recognition. Springer Lecture Notes in Artificial Intelligence, 2915, 247–258.

Weast, T. P. (2008). Questions in American Sign Language: A quantitative analysis of raised and lowered eyebrows (Unpublished doctoral dissertation). University of Texas at Arlington.

Wilbur, R. B. (1990). An experimental investigation of stressed sign production. International Journal of Sign Language, 1(1), 41–59.

Wilbur, R. B. & Schick, B. S. (1987). The effects of linguistic stress on ASL signs. Language and Speech, 30(4), 301–323.

Wilcox, S. (1992). The phonetics of fingerspelling. Amsterdam/Philadelphia, PA: John Benjamins.

Martha Tyrone

Long Island University
