Segmental Phonology
Phonology vs Phonetics
Phonology and phonetics can both be broadly described as the study of speech
sounds. Phonetics is the general study of the characteristics of speech sounds. It includes
articulatory phonetics, the study of how speech sounds are made, or articulated;
acoustic phonetics, which deals with the physical properties of speech as sound waves
in the air; and auditory phonetics (or perceptual phonetics), which deals with the
perception, via the ear, of speech sounds. In short, phonetics covers the mechanics of
how sounds are articulated.
Aspects of Phonology
Since phonology is the study of the sound system and patterns of a language and the set of
rules that govern the way they function, two aspects can be identified.
There are two ways in which we can transcribe speech: phonemic (phonological, or
broad) transcription and phonetic (or narrow) transcription. You are probably already
familiar with transcription: the conversion of recorded speech into text. Transcripts are used
to create subtitles, to make long recordings more searchable and accessible, or simply to
create another convenient format for the content at hand.
Phonetic and phonemic transcription, on the other hand, take this conversion one
step further, using a much more detailed writing system to represent the nuances of
pronunciation. This kind of transcription is used for academic research or for developing
speech technology. These transcription systems usually rely on the International Phonetic
technology. These transcription systems usually rely on the International Phonetic
Alphabet (IPA), an alphabet that resembles the Roman alphabet but is much more
rigorously designed. The IPA maps one sound to one symbol, and has dozens more “letters”
than the Roman alphabet as used in English.
This is what the International Phonetic Alphabet looks like:
Phonemic transcription
Phonemic transcription involves representing speech using a single symbol for each
phoneme of the language. Note that phonemic transcription is placed between /forward
slash brackets/. When we transcribe phonemically, we are representing not actual sounds
but abstract mental constructs: the categories of sound that speakers understand to be
‘sounds of their language’, i.e. how people interpret such sounds.
It shows none of the details of pronunciation that are predictable by phonological
rules, e.g. ‘fan’ /fæn/.
Phonetic transcription
The other way we can transcribe speech is using phonetic transcription, also sometimes
known as ‘narrow’ transcription. This involves representing additional details about the
contextual variations in pronunciation that occur in normal speech. Phonetic transcriptions
provide more details on how the actual sounds are pronounced. Note that phonetic
transcription is placed between [square brackets]. When we transcribe phonetically, we are
representing not abstract mental constructs, but rather the actual sounds in terms of their
acoustic and articulatory properties.
In this example, the vowel /æ/ surfaces as a nasalized allophone, which is why it carries
a diacritic (the nasalization tilde).
e.g. ‘fan’ [fæ̃n]
Example:
• spat, pat
  Broad transcription: /spæt/, /pæt/
  Narrow transcription: [spæt], [pʰæt]
Phonemic and phonetic transcription both have their purposes. The goal of a phonemic
transcription is to record the phonemes, as mental categories, that a speaker uses,
rather than the actual spoken variants of those phonemes that are produced in the
context of a particular word. Phonetic transcription, on the other hand, specifies the finer
details of how sounds are actually made. For many practical purposes, a phonemic
(broad) transcription, which ignores the predictable details of pronunciation, is sufficient.
Note: Many linguists prefer to use [ ], i.e. square brackets, even in fairly broad
transcriptions. On the other hand, / /, i.e. slashes, are often used loosely to indicate
a broad transcription without any specific commitment to the correctness of a particular
phonological analysis. Both kinds of transcription are very different from standard
orthographic transcription, which uses the alphabet or writing system of the
given language (for example the Roman alphabet, Chinese characters, or the Arabic abjad).
Phoneme vs allophone
If one segment of a word is substituted with another, the meaning of the word will
change. When two segments can be used to distinguish the meaning of words, they are said
to be two different phonemes.
A phoneme is the smallest distinctive segment in a language. The distinctiveness of
a phoneme means that these segments can distinguish words from one another: if in a
given word we replace one phoneme with another, we produce a different word, or a
meaningless combination of sounds.
Substituting one of the short vowels /ɪ, e, æ, ʌ, ɒ, ʊ/ for another in /p_t/ results in six
different words:
pit, pet, pat, putt, pot, put.
Contrastive Distribution
Two sounds are contrastive if interchanging them also changes the word, and with it
the meaning of what is being said.
Example:
[v] and [f] in English or the voiced and voiceless labiodental fricatives. They are contrastive
in English.
The word vase begins with a voiced labiodental fricative, and if we replace it with
the voiceless labiodental fricative [f], we get face, a completely different word with a
different meaning. A pair of words such as vase and face, which differ in only one
sound, is called a minimal pair, and it demonstrates that the pair of sounds is
contrastive.
Suppose you want to determine whether two sounds, X and Y, are contrastive. First, find
a word containing X and substitute Y for it. If the result is a different word, the two
words form a minimal pair and X and Y are contrastive.
These words are a minimal pair: both have three sounds, differ only in the initial sound,
and differ in meaning.
[læp] ‘lap’    [kʰæp] ‘cap’
The pair spat and pat, however, is not a minimal pair for [pʰ] and [p], because the
aspirated [pʰ] of pat and the unaspirated [p] of spat are not contrastive in English. The
distinction exists phonetically, but it does not serve to distinguish words in English. If we
interchange them, we get [pæt], which just sounds like a variant of the same word, not a
different word.
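The substitution test described above can be sketched as a small function. This is an illustrative sketch only: real minimal pairs also require both forms to be actual words of the language with different meanings, which a simple comparison cannot check, and the segment lists here are hand-made assumptions.

```python
# Minimal-pair sketch: two transcriptions are candidates for a minimal
# pair if they have the same number of segments and differ in exactly
# one position. Segments are given as lists of strings, so multi-character
# symbols like aspirated [kʰ] count as a single segment.
def is_minimal_pair(word_a, word_b):
    if len(word_a) != len(word_b):
        return False
    differences = sum(1 for a, b in zip(word_a, word_b) if a != b)
    return differences == 1

# 'lap' [læp] vs 'cap' [kʰæp]: same length, differ only initially.
print(is_minimal_pair(["l", "æ", "p"], ["kʰ", "æ", "p"]))   # True
# 'spat' [spæt] vs 'pat' [pʰæt]: different lengths, not a minimal pair.
print(is_minimal_pair(["s", "p", "æ", "t"], ["pʰ", "æ", "t"]))  # False
```

Note that the function treats [pʰ] and [p] as different segments; deciding whether such a difference is contrastive or merely allophonic still requires checking whether the two resulting forms are distinct words.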
Allophones
Example: [fæ̃n] ‘fan’, [hæ̃m] ‘ham’, but [fæt] ‘fat’ (no nasalization)
Therefore, a phoneme may have more than one realization. The different realizations of a
phoneme are called allophones; an allophone is a variant of a phoneme.
[æ̃] and [æ] are allophones of the phoneme /æ/.
In the example, the nasalized vowel [æ̃] comes before the sounds [n], [m],
and [ŋ]. It looks as if, whenever a vowel comes before one of these consonants, the
vowel is nasalized; otherwise it is not. So [æ̃] and [æ] are
in complementary distribution, because there is a particular phonetic environment which
determines when you get the nasal [æ̃] and when the oral [æ].
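The nasalization rule can be sketched as a function over segment lists. The vowel and nasal inventories below are illustrative assumptions drawn from the examples in this handout, not a complete analysis of English.

```python
NASALS = {"n", "m", "ŋ"}
VOWELS = {"ɪ", "e", "æ", "ʌ", "ɒ", "ʊ"}  # the short vowels used above

def nasalize(segments):
    """Allophonic rule sketch: a vowel is nasalized (marked with the
    combining tilde U+0303) when it immediately precedes a nasal."""
    out = []
    for i, seg in enumerate(segments):
        if seg in VOWELS and i + 1 < len(segments) and segments[i + 1] in NASALS:
            out.append(seg + "\u0303")  # æ -> æ̃
        else:
            out.append(seg)
    return out

# /fæn/ -> [fæ̃n]: the vowel is nasalized before [n].
print("".join(nasalize(["f", "æ", "n"])))
# /fæt/ -> [fæt]: no following nasal, so the vowel stays oral.
print("".join(nasalize(["f", "æ", "t"])))
```

Because the output is fully determined by the phonetic environment, the two variants never appear in the same position, which is exactly what complementary distribution means.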
Free variation
Are [p] and [p̚] (the unreleased stop) contrastive? No, they are not. Two sounds are
contrastive only if interchanging them produces a different meaning, but here the word
stays the same whether the stop is released or unreleased. So this is not a contrastive
distribution.
Therefore, these must be allophones of the same underlying phoneme /p/. Are they in
complementary distribution? No: you cannot identify a particular phonetic environment
which triggers a released or an unreleased stop; the choice is essentially random. This is
called free variation.
Neutralization
Example:
Unstable
Unbelievable
Unclear
In many cases, especially in faster speech, the /n/ actually becomes [m], a bilabial
nasal, and what you say is “umbelievable”, not unbelievable. Similarly with the word
unclear: are you really going to say it with the alveolar nasal? Often you are going to say
[ʌŋkʰliɹ]. The phoneme /n/ undergoes place assimilation: its place of articulation changes
to match the place of articulation of the following consonant.
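Place assimilation of /n/ can be sketched as a rule over segment lists. The place-of-articulation sets below are simplified assumptions for illustration; a fuller account would cover all places and consider speech rate and word boundaries.

```python
BILABIAL = {"p", "b", "m"}
VELAR = {"k", "g", "kʰ"}

def assimilate_nasal(segments):
    """Sketch of place assimilation: /n/ takes on the place of the
    following consonant -- [m] before bilabials, [ŋ] before velars."""
    out = list(segments)
    for i in range(len(out) - 1):
        if out[i] == "n":
            if out[i + 1] in BILABIAL:
                out[i] = "m"
            elif out[i + 1] in VELAR:
                out[i] = "ŋ"
    return out

# 'unbelievable': /n/ before bilabial /b/ surfaces as [m].
print(assimilate_nasal(["ʌ", "n", "b", "ə", "l", "i", "v"]))
# 'unclear': /n/ before velar /k/ surfaces as [ŋ].
print(assimilate_nasal(["ʌ", "n", "k", "l", "ɪ", "ɹ"]))
# 'unstable': /n/ before alveolar /s/ stays [n].
print(assimilate_nasal(["ʌ", "n", "s", "t", "e", "b", "l"]))
```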
In this example, the phoneme /n/ may actually be realized as the sound [m]. Applying
such a phonological rule can result in a loss of phonemic distinction, which means that a
contrast that exists in the language is not utilized to differentiate words, due to sound
change. Another example is final-obstruent devoicing.
English speakers can distinguish the sounds of these words. From German speakers,
however, they may be misheard as cap, seat, or dock. In German this phenomenon is
called Auslautverhärtung, or final devoicing of consonants. Germans find it difficult to
produce word-final voiced consonants in English, and they often say cap instead of cab,
seat instead of seed, and dock instead of dog. Linguists say that the opposition between
voiced and voiceless obstruents is neutralized in final position in German. Neutralizations
result from language-specific constraints on the combinability of phonemes; this will be
dealt with in the report on phonotactics.
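The neutralization just described can be sketched as a mapping from voiced obstruents to their voiceless counterparts in word-final position. The obstruent pairs listed are a partial, illustrative set, not the full German inventory.

```python
# Sketch of German-style final-obstruent devoicing: a voiced obstruent
# in word-final position is realized as its voiceless counterpart.
DEVOICE = {"b": "p", "d": "t", "g": "k", "v": "f", "z": "s"}

def final_devoice(segments):
    if segments and segments[-1] in DEVOICE:
        return segments[:-1] + [DEVOICE[segments[-1]]]
    return list(segments)

# 'cab' /kæb/ and 'cap' /kæp/ neutralize to the same surface form,
# so the voicing contrast cannot distinguish the two words finally.
print(final_devoice(["k", "æ", "b"]))
print(final_devoice(["k", "æ", "b"]) == final_devoice(["k", "æ", "p"]))  # True
```

The key point the sketch makes concrete: after the rule applies, two underlyingly distinct forms share one surface form, which is what "loss of phonemic distinction" means.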
Morphophonology
Example:
Hoof-hooves
Knife- knives
Thief – thieves
The morphophoneme {F} would then have morphoallophones [f] for the singular and [v]
for the plural of these words. Hence the need to emphasize their relationship.
Similarly, the regular English plural morpheme has three variants:
/ɪz/ after sibilants and affricates (/s, z, ʃ, ʒ, tʃ, dʒ/), as in kisses, buses, wishes,
garages, batches, badges, etc.
/s/ after voiceless consonants which are not sibilants or affricates, as in cats, caps,
cliffs.
/z/ after voiced sounds which are not sibilants or affricates, i.e. also after vowels, as in
cabs, cads, doves, cans.
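The three-way rule above can be sketched as a function that selects the plural allomorph from the stem-final sound. The voiceless-consonant set below is a partial, illustrative inventory; anything not listed is treated as voiced, which is an assumption of this sketch.

```python
SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}          # sibilants and affricates
VOICELESS = {"p", "t", "k", "f", "θ"}                  # non-sibilant voiceless

def plural_allomorph(final_segment):
    """Pick the regular English plural allomorph from the stem-final sound."""
    if final_segment in SIBILANTS:
        return "ɪz"
    if final_segment in VOICELESS:
        return "s"
    return "z"  # all other (voiced) sounds, including vowels

print(plural_allomorph("ʃ"))  # 'wish' -> wishes: ɪz
print(plural_allomorph("t"))  # 'cat'  -> cats:   s
print(plural_allomorph("b"))  # 'cab'  -> cabs:   z
```

Note the order of the checks matters: /s/ and /z/ are themselves voiceless and voiced respectively, so the sibilant case must be tested first.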