Shumyk Ivanna, COa15-19, Seminar 1


THEORETICAL PHONETICS OF ENGLISH
SHUMYK IVANNA
TABLE OF CONTENTS
01 Phonetics: the study of the sound system of language.
02 Branches of Phonetics.
03 Phonology: the functional aspect of the sound system of language.
04 Phonetics at the intersection of linguistic studies. Research in communicative phonetics and language teaching methodology.
05 Experimental phonetics and computational linguistics. New speech transmission technologies; machines with voice control operations; automatic machine translation.
06 Literature
PHONETICS: THE STUDY
OF THE SOUND SYSTEM
OF LANGUAGE.

Phonetics is a branch of linguistics that studies how humans
produce and perceive sounds, or in the case of sign languages,
the equivalent aspects of sign. Linguists who specialize in
studying the physical properties of speech are phoneticians. The
field of phonetics is traditionally divided into three sub-
disciplines based on the research questions involved such as
how humans plan and execute movements to produce speech
(articulatory phonetics), how various movements affect the
properties of the resulting sound (acoustic phonetics), or how
humans convert sound waves to linguistic information (auditory
phonetics). Traditionally, the minimal linguistic unit of phonetics
is the phone—a speech sound in a language which differs from
the phonological unit of phoneme; the phoneme is an abstract
categorization of phones.
Phonetics deals with two aspects of human speech: production
—the ways humans make sounds—and perception—the way
speech is understood. The communicative modality of a
language describes the method by which a language is produced
and perceived. Languages with oral-aural modalities
such as English produce speech orally (using the mouth) and
perceive speech aurally (using the ears). Sign languages, such as
Australian Sign Language (Auslan) and American Sign Language
(ASL), have a manual-visual modality, producing speech
manually (using the hands) and perceiving speech visually (using
the eyes). ASL and some other sign languages have in addition a
manual-manual dialect for use in tactile signing by deafblind
speakers where signs are produced with the hands and
perceived with the hands as well.

Language production consists of several interdependent processes which transform a non-linguistic message into a
spoken or signed linguistic signal. After identifying a message to be linguistically encoded, a speaker must select the
individual words—known as lexical items—to represent that message in a process called lexical selection. During
phonological encoding, the mental representations of the words are assigned their phonological content as a
sequence of phonemes to be produced. The phonemes are specified for articulatory features which denote particular
goals such as closed lips or the tongue in a particular location. These phonemes are then coordinated into a
sequence of muscle commands that can be sent to the muscles, and when these commands are executed properly
the intended sounds are produced.
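To make these production stages concrete, here is a toy sketch (written for this seminar, not taken from any source) in which lexical selection is a dictionary lookup and phonological encoding maps each selected word to a phoneme sequence; the lexicon entries and function names are invented for illustration.

# Toy sketch of lexical selection and phonological encoding (illustrative only).
# The lexicon, its transcriptions and the function names are hypothetical.
TOY_LEXICON = {
    "pill": ["p", "ɪ", "l"],
    "bill": ["b", "ɪ", "l"],
    "mill": ["m", "ɪ", "l"],
}

def lexical_selection(message_words):
    # Pick the lexical items that will carry the message (here: dictionary lookup).
    return [w for w in message_words if w in TOY_LEXICON]

def phonological_encoding(lexical_items):
    # Assign each selected word its phonological content as a phoneme sequence.
    return [TOY_LEXICON[w] for w in lexical_items]

words = lexical_selection(["pill", "mill"])
print(phonological_encoding(words))  # [['p', 'ɪ', 'l'], ['m', 'ɪ', 'l']]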
Language perception is the process by which a linguistic signal is
decoded and understood by a listener. In order to perceive speech
the continuous acoustic signal must be converted into discrete
linguistic units such as phonemes, morphemes, and words. In order
to correctly identify and categorize sounds, listeners prioritize
certain aspects of the signal that can reliably distinguish between
linguistic categories. While certain cues are prioritized over others,
many aspects of the signal can contribute to perception. For
example, though oral languages prioritize acoustic information, the
McGurk effect shows that visual information is used to distinguish
ambiguous information when the acoustic cues are unreliable.

Speech sounds are produced by movements of the articulators. These movements disrupt and modify an
airstream, which results in a sound wave. The
modification is done by the articulators, with
different places and manners of articulation
producing different acoustic results. For
example, the words tack and sack both begin
with alveolar sounds in English, but differ in how
far the tongue is from the alveolar ridge. This
difference has large effects on the air stream
and thus the sound that is produced. Similarly,
the direction and source of the airstream can
affect the sound. The most common airstream
mechanism is pulmonic—using the lungs—but
the glottis and tongue can also be used to
produce airstreams.
BRANCHES OF PHONETICS

Modern phonetics has three branches:

1. Articulatory phonetics, which addresses the way sounds are made with the articulators;
2. Acoustic phonetics, which addresses the acoustic results of different articulations;
3. Auditory phonetics, which addresses the way listeners perceive and understand linguistic signals.
ARTICULATORY PHONETICS

The field of articulatory phonetics is a subfield of phonetics
that studies articulation and ways that humans produce
speech. Articulatory phoneticians explain how humans
produce speech sounds via the interaction of different
physiological structures. Generally, articulatory phonetics is
concerned with the transformation of aerodynamic energy
into acoustic energy. Aerodynamic energy refers to the
airflow through the vocal tract. Its potential form is air
pressure; its kinetic form is the actual dynamic airflow.
Acoustic energy is variation in the air pressure that can be
represented as sound waves, which are then perceived by
the human auditory system as sound.

Respiratory sounds can be produced by expelling air from the lungs. However, to vary the sound quality in a way useful for
speaking, two speech organs normally move towards each other to contact each other to create an obstruction that shapes
the air in a particular fashion. The point of maximum obstruction is called the place of articulation, and the way the obstruction
forms and releases is the manner of articulation. For example, when making a p sound, the lips come together tightly, blocking
the air momentarily and causing a buildup of air pressure. The lips then release suddenly, causing a burst of sound. The place
of articulation of this sound is therefore called bilabial, and the manner is called stop (also known as a plosive).
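To make the place and manner terminology concrete, the following small lookup table (a sketch written for this handout, not a standard resource) records the place and manner of a few English consonants mentioned in this section.

# Place and manner of articulation for a few English consonants (illustrative sketch).
CONSONANTS = {
    "p": ("bilabial", "stop"),
    "b": ("bilabial", "stop"),
    "m": ("bilabial", "nasal"),
    "t": ("alveolar", "stop"),
    "s": ("alveolar", "fricative"),
}

for phone, (place, manner) in CONSONANTS.items():
    print(f"/{phone}/ is a {place} {manner}")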
ACOUSTIC PHONETICS

Acoustic phonetics is a subfield of phonetics which deals with the acoustic aspects of speech sounds. Acoustic phonetics investigates time-domain features such as the mean squared amplitude of a waveform, its duration, and its fundamental frequency; frequency-domain features such as the frequency spectrum; and combined spectrotemporal features, as well as the relationship of these properties to other branches of phonetics (e.g. articulatory or auditory phonetics) and to abstract linguistic concepts such as phonemes, phrases, or utterances.

The study of acoustic phonetics was greatly enhanced in the late 19th century by the invention of the Edison phonograph. The phonograph allowed the speech signal to be recorded and then later processed and analyzed. By replaying the same speech signal from the phonograph several times, filtering it each time with a different band-pass filter, a spectrogram of the speech utterance could be built up. A series of papers by Ludimar Hermann published in Pflügers Archiv in the last two decades of the 19th century investigated the spectral properties of vowels and consonants using the Edison phonograph, and it was in these papers that the term formant was first introduced. Hermann also played back vowel recordings made with the Edison phonograph at different speeds to distinguish between Willis' and Wheatstone's theories of vowel production.
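As a rough illustration of the time- and frequency-domain measures mentioned above, the following Python sketch computes mean squared amplitude, duration, and a magnitude spectrum for a waveform using numpy; the sampling rate and the synthetic test signal are assumptions made only for the example.

# Sketch: basic time- and frequency-domain measures of a waveform (numpy only).
import numpy as np

fs = 16000                                  # assumed sampling rate in Hz
t = np.arange(0, 0.5, 1 / fs)               # 0.5 seconds of synthetic signal
signal = 0.5 * np.sin(2 * np.pi * 120 * t)  # stand-in for a recorded vowel

mean_squared_amplitude = np.mean(signal ** 2)   # time-domain energy measure
duration = len(signal) / fs                     # duration in seconds
spectrum = np.abs(np.fft.rfft(signal))          # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peak_frequency = freqs[np.argmax(spectrum)]     # ~120 Hz for this pure tone

print(mean_squared_amplitude, duration, peak_frequency)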
AUDITORY
PHONETICS

Auditory phonetics is the branch of phonetics concerned with the hearing of
speech sounds and with speech perception. It thus entails the study of the
relationships between speech stimuli and a listener's responses to such stimuli as
mediated by mechanisms of the peripheral and central auditory systems,
including certain areas of the brain. It is said to constitute one of the three main
branches of phonetics along with acoustic and articulatory phonetics,[1][2]
though with overlapping methods and questions.
PHONOLOGY: THE FUNCTIONAL
ASPECT OF THE SOUND SYSTEM
OF LANGUAGE.

Phonology is the branch of linguistics that studies how
languages or dialects systematically organize their sounds
or, for sign languages, their constituent parts of signs. The
term can also refer specifically to the sound or sign system
of a particular language variety.
Sign languages have a phonological system equivalent to
the system of sounds in spoken languages. The building
blocks of signs are specifications for movement, location,
and handshape.[2] At first, a separate terminology was used
for the study of sign phonology ('chereme' instead of
'phoneme', etc.), but the concepts are now considered to
apply universally to all human languages.
At one time, the study of phonology related only to the study of the systems of phonemes in spoken languages, but it may now relate to any linguistic analysis either:

1. at a level beneath the word (including syllable, onset and rime, articulatory gestures, articulatory features, mora, etc.), or
2. at all levels of language in which sound or signs are structured to convey linguistic meaning.[1]
In connected speech a sound is generally modified by its phonetic environment (i.e. by the neighbouring sounds) and by the position it occupies in a word or an utterance; it is also modified by prosodic features such as stress, speech melody, and tempo of speech.

Compare /p/ in "pill" (in initial position), in "spill" (after /s/), in "slip" (in final position), and in "slipper" (between vowels). These various /p/ sounds differ in manner of articulation or in acoustic qualities, but they do not differ phonologically: if one of the various /p/ sounds is substituted for another, the meaning of the word will not change. That is why it is of no linguistic importance for English speakers to discriminate between the various /p/ sounds. But it is linguistically important for English speakers to discriminate between /p/ and /b/ (as in "pill" and "bill") or /p/ and /m/ (as in "pill" and "mill"), though the differences in their production might not be much more notable than the differences in the production of the various /p/ sounds.

Every language has a limited number of sound types which are shared by all the speakers of the language and are linguistically important because they distinguish words in the language. In English there are 20 vowel phonemes and 24 consonant phonemes.

All actual speech sounds are allophones (or variants) of the phonemes that exist in the language. Those sounds that distinguish words, when opposed to one another in the same phonetic position, are realizations of different phonemes. For example, /v/ and /w/ in English are realizations of two different phonemes because they distinguish such words as "vine" and "wine", "veal" and "wheel", etc.
INTEGRATING PHONETICS AND
PHONOLOGY IN THE STUDY OF
LINGUISTIC PROMINENCE

Since the earliest days of phonetics, researchers have
noticed that certain speech units stand out from their
environment. This is for example the case of a stressed
syllable within a word, or of an accented word within an
utterance. These prominent units are usually produced with
more effort and perceived with more ease, and they often
represent a cornerstone for larger and meaningful
structures. The notion of linguistic prominence, however,
has received a large number of contradictory or unspecific
definitions.
PAUL PASSY

Few concepts in phonetics and phonology research are as widely used and as vaguely defined as the notion of prominence. At the crossroads of signal and structure, of stress and accent, and of production and perception, the notion of prominence has received a large number of contradictory or unspecific definitions. Wagner et al. (2015) suggest that the most successful definitions are the vaguest, such as the one proposed by Terken and Hermes (2000): a linguistic entity is prosodically prominent when it stands out from its environment by virtue of its prosodic characteristics. This definition is sufficiently generic that it can encompass a wide range of phenomena and areas of research. For example, on the one hand, stressed syllables stand out from the other syllables in a word by virtue of loudness and durational cues. On the other hand, constituents in focus (as an information-structural notion) stand out from the other referents mentioned in an utterance by virtue of pitch accent choice and placement.

The first insight offered by a bare-bones definition of prominence is the emphasis on prominence as a relational property. In order to define a unit as prominent, it is necessary to consider both the unit itself and the other neighbouring units within the same domain. In his discussion of breath groups, Passy (1906:25) states that the parts that compose it "do not hit our ears with the same intensity" (our translation). Certain parts, notably certain syllables, are said to be perceived more clearly than the others. Here, Passy is defining a discrete linguistic entity of interest (the syllable), its environment (the other syllables in a breath group), and the mechanism for standing out (increased ease of perception).
EXPERIMENTAL PHONETICS AND COMPUTATIONAL
LINGUISTICS. NEW SPEECH TRANSMISSION
TECHNOLOGIES; MACHINES WITH A VOICE CONTROL
OPERATIONS; AUTOMATIC MACHINE TRANSLATION.
Experimental phonetics employs the methods of investigation
commonly used in other disciplines—e.g., physics, physiology, and
psychology—for measuring the physical and physiological dimensions
of speech sounds and their perceptual characteristics. The sound
spectrograph and speech synthesizers were mentioned in the section
on acoustic phonetics. Other techniques include the use of X-rays; air-
pressure and air-flow recording; palatography, a method of registering
the contacts between the tongue and the roof of the mouth; and
cinematography. All of these techniques have been used for studying
the actions of the vocal organs.
Much of the work in experimental phonetics has been directed toward
obtaining more accurate descriptions of the sounds that characterize
different languages. There have also been several studies aimed at
determining the relative importance of different features in signalling
contrasts between sounds. But experimental phoneticians are
probably most concerned with trying to discover the central cerebral
processes involved in speech.
MACHINES WITH VOICE CONTROL OPERATIONS

Voice controls have spread enormously in recent years. In particular, intelligent personal assistants (built into "smart speakers") are now available in many households. They require an internet connection and the processing of voice data in a cloud operated by the manufacturer.

However, the use of voice controls in an industrial environment poses special challenges. On the one hand, noisy ambient conditions and background noise must be taken into account. On the other hand, the use of cloud services is often ruled out for reasons of data protection, IT security, or lack of networking. In particular, the regulations of the DSGVO (the German name for the GDPR) make the recording of voice data by companies and the storage of this data in a cloud problematic.

An embedded speech control can offer the same flexibility and performance as cloud-based speech assistants. As an embedded voice control, vicCONTROL industrial is designed for ARMv7- and x86-compatible platforms; because these processors are widely used in devices and machine controls, it can be deployed directly without additional hardware. The dialog design tool vicSDC makes it easy to develop a customer-specific voice control application.
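vicCONTROL industrial's actual programming interface is not shown here; purely to illustrate the general idea of mapping recognized command phrases to machine operations, the sketch below uses plain Python with invented command phrases and handler names.

# Hypothetical sketch: dispatching recognized voice commands to machine operations.
# All phrases and handler names are invented; this is not the vicCONTROL/vicSDC API.
def start_conveyor():
    print("conveyor started")

def stop_conveyor():
    print("conveyor stopped")

COMMANDS = {
    "start conveyor": start_conveyor,
    "stop conveyor": stop_conveyor,
}

def dispatch(recognized_text):
    # Run the operation bound to the recognized phrase, if any.
    action = COMMANDS.get(recognized_text.strip().lower())
    if action is not None:
        action()
    else:
        print("unknown command:", recognized_text)

dispatch("Start conveyor")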
AUTOMATIC MACHINE TRANSLATION

The use of computers to translate text from one language to another has long been a dream of computer science. Nevertheless, it is only in the past 10 years that machine translation has become a viable productivity tool in more widespread use. Advances in natural language processing, artificial intelligence, and computing power all contribute to this increasingly useful technology.

Machine translation is the process of automatically translating content from
one language (the source) to another
(the target) without any human input.
Translation was one of the first
applications of computing power, starting
in the 1950s. Unfortunately, the
complexity of the task was far higher
than early computer scientists’ estimates,
requiring enormous data processing
power and storage far beyond the
capabilities of early machines.
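For a sense of how machine translation is invoked programmatically today, here is a minimal sketch using the Hugging Face transformers pipeline; the specific model name is an assumption for the example, and any comparable translation model or service could be substituted.

# Sketch: automatic machine translation with the `transformers` pipeline.
# Requires `pip install transformers`; the model choice is an assumption.
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")
result = translator("Phonetics studies how humans produce and perceive sounds.")
print(result[0]["translation_text"])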
LITERATURE

01 Abercrombie, D. (1967). Elements of General Phonetics. Edinburgh; Chicago: Aldine Pub. Co.
02 van der Hulst, H., in Encyclopedia of Language & Linguistics (Second Edition), 2006.
03 Jankovic, Joseph, in Bradley and Daroff's Neurology in Clinical Practice, 2022.
04 Yost, William (2003). "Audition". In Alice F. Healy; Robert W. Proctor (eds.), Handbook of Psychology: Experimental Psychology. John Wiley and Sons. p. 130. ISBN 978-0-471-39262-0.
05 Wiese, R., in Encyclopedia of Language & Linguistics (Second Edition), 2006.
06 Graffi, G., in Encyclopedia of Language & Linguistics (Second Edition), 2006.
07 International Phonetic Association (2015). International Phonetic Alphabet. International Phonetic Association.
08 Bizzi, E.; Hogan, N.; Mussa-Ivaldi, F.; Giszter, S. (1992). "Does the nervous system use equilibrium-point control to guide single and multiple joint movements?". Behavioral and Brain Sciences, 15 (4): 603–13. doi:10.1017/S0140525X00072538. PMID 23302290.
Thank you for your attention
