
Intro: People do not process linguistic information in a linear fashion; they do not move sequentially from one linguistic level to the next. Research shows that in most situations, listeners and readers use a great deal of information other than the actual language being produced to help them decipher the linguistic symbols they hear or see. Understanding speech is known to be an active rather than a passive process, in which hearers process and reconstruct the intended message on the basis of outline clues and their own expectations.

The minimum unit of conscious speech comprehension is the word. In reading, however, the operating units can be larger (word combinations, syntagms, statements, paragraphs, or other notionally complete textual fragments). At the level of meaning, speech comprehension involves the probabilistic prediction of both the semantic deployment of the text and its grammar.
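
A minimal sketch of this kind of probabilistic prediction, assuming a toy bigram model rather than anything proposed in the text (the corpus and counts below are invented for illustration):

    from collections import defaultdict

    # Toy corpus standing in for a listener's prior linguistic experience.
    corpus = "the dog chased the ball and the dog caught the ball".split()

    # Count how often each word follows each preceding word (bigram counts).
    bigram_counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram_counts[prev][nxt] += 1

    def predict_next(prev_word):
        """Return each candidate continuation with its estimated probability."""
        counts = bigram_counts[prev_word]
        total = sum(counts.values())
        return {word: n / total for word, n in counts.items()}

    # Given "the", the model assigns graded probabilities to "dog" and "ball",
    # much as a listener anticipates likely continuations before hearing them.
    print(predict_next("the"))   # e.g. {'dog': 0.5, 'ball': 0.5}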

Comprehension of Sounds:

In one experiment, a set of four sentences was played to a group of listeners, who were asked to write down the sixth word in each sentence.

In every case, the subjects heard eel as the key word in the sentence, but most of the subjects claimed they had heard a different word for each example: wheel for (1), heel for (2), peel for (3), and meal for (4). The insertion of a different missing sound (phoneme) to create a separate but contextually appropriate 'eel' word in each sentence is called the phoneme restoration effect. Listeners report what they expected to hear from the context rather than what they actually heard. This experiment illustrated that:

• People do not necessarily hear each of the words spoken to them.
• Comprehension is strongly influenced by even the slightest of changes in discourse.
• Comprehension is not a simple item-by-item analysis of words in a linear sequence. People process chunks of information and sometimes wait to make decisions on what is comprehended until much later in the sequence.
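
As a rough illustration of this top-down, context-driven processing, here is a minimal sketch (not part of the original experiment) in which a masked '*eel' word is restored by choosing the candidate that best fits a context cue; the cue words and association scores are invented for the example.

    # Candidate restorations for a masked "*eel" word, as in the experiment above.
    candidates = ["wheel", "heel", "peel", "meal"]

    # Hypothetical association strengths between a context cue word and each
    # candidate; a real comprehender draws on vast lexical and world knowledge.
    associations = {
        "axle":   {"wheel": 0.9, "heel": 0.1, "peel": 0.1, "meal": 0.1},
        "shoe":   {"wheel": 0.2, "heel": 0.9, "peel": 0.1, "meal": 0.1},
        "orange": {"wheel": 0.1, "heel": 0.1, "peel": 0.9, "meal": 0.3},
        "table":  {"wheel": 0.1, "heel": 0.1, "peel": 0.2, "meal": 0.9},
    }

    def restore(cue_word):
        """Pick the 'eel' candidate most strongly associated with the context cue."""
        scores = associations[cue_word]
        return max(candidates, key=lambda word: scores[word])

    # The same acoustic input "eel" is resolved differently depending on context.
    for cue in ["axle", "shoe", "orange", "table"]:
        print(cue, "->", restore(cue))   # wheel, heel, peel, meal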

Although we do not hear vowels and consonants as isolated sounds, we can, with the help of machines, measure acoustic information extremely precisely. Phoneticians have discovered that the main feature English speakers attend to, albeit unconsciously, is Voice Onset Time (VOT). The most significant acoustic difference between English consonants like /b/ and /p/ is the length of time between the initial puff of air that begins these sounds and the onset of voicing in the throat that initiates any vowel sound following the consonant.

Native speakers do not acquire all of this acoustic information from direct experience with language, and parents and caretakers do not provide explicit instruction on these matters. Humans are actually born with the ability to focus on VOT differences in the sounds they hear. Rather than perceiving VOT contrasts as a continuum, people tend to categorize these minute phonetic differences in a non-continuous, binary fashion.

All of this has been documented in experiments in which native English speakers listened to artificially created consonant sounds with gradually lengthening VOTs and were asked to judge whether the syllables they heard began with a voiced consonant (like /b/, which has a short VOT) or a voiceless one (like /p/, which has a VOT lag of about 50 milliseconds). When subjects heard sounds with a VOT of about 25 milliseconds, roughly halfway between a /b/ and a /p/, they rarely judged the sound to be 50% voiced and 50% voiceless; they classified it as one sound or the other. This phenomenon is called categorical perception, and it appears to qualify as one aspect of universal grammar (UG). These experiments with VOT perception in human infants are one of the few solid pieces of evidence we have that UG exists and that at least part of human language is modular, that is, that some parts of language reside in the mind or brain as an independent system or module. However, Thai children, whose language has three, not two, VOT contrasts for such consonants, grow up after years of exposure with the ability to make a three-way categorical split (between /b/, /p/, and /pʰ/).
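
As an illustrative sketch (not a model from the source), categorical perception can be caricatured as mapping a continuous VOT value onto discrete labels via sharp category boundaries. The boundary values below are rough, invented placements, with a two-way split for English and a three-way split for Thai.

    def categorize_vot(vot_ms, boundaries):
        """Map a continuous VOT value (in milliseconds) onto a discrete category.

        `boundaries` is a list of (upper_bound_ms, label) pairs in ascending
        order; the final label catches everything above the last boundary.
        """
        for upper, label in boundaries[:-1]:
            if vot_ms < upper:
                return label
        return boundaries[-1][1]

    # Rough, illustrative boundaries: English splits the continuum two ways,
    # Thai three ways (prevoiced /b/, unaspirated /p/, aspirated /pʰ/).
    english = [(25, "/b/"), (float("inf"), "/p/")]
    thai = [(0, "/b/"), (30, "/p/"), (float("inf"), "/pʰ/")]

    for vot in [-50, 5, 25, 40, 80]:
        print(f"VOT {vot:>4} ms -> English {categorize_vot(vot, english)}, "
              f"Thai {categorize_vot(vot, thai)}")
    # A stimulus near a boundary (e.g. 25 ms) is still heard as one category
    # or the other, not as a 50/50 blend: perception is categorical.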

The successful comprehension of speech sounds is, therefore, a combination of the innate ability to recognize fine distinctions between speech sounds, which all humans appear to possess, and the ability all learners have to adjust their acoustic categories to the parameters of the language, or languages, in which they have been immersed.

Comprehension of Words: The Parallel Distributed Processing (PDP) model suggests that we use several separate but simultaneous and parallel processes when we try to understand spoken or written language. One explanation for how we access the words stored in our mental lexicon is the logogen model of comprehension. When you hear a word in a conversation or see it on the printed page, you stimulate an individual logogen, or lexical detection device, for that word. Logogens can be likened to individual neurons in a gigantic neuronal network; when activated, they work in parallel and in concert with many other logogens (or nerve cells) to create comprehension. High-frequency words are represented by logogens with hair triggers: they are rapidly and frequently activated. Low-frequency words have very high thresholds of activation and take longer to be incorporated into a system of understanding.
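
To make the threshold idea concrete, here is a minimal sketch (an invented toy, not the logogen model's formal specification) in which each logogen accumulates activation from incoming evidence and fires once it crosses a threshold that is lower for high-frequency words; the frequencies and numbers are made up for illustration.

    class Logogen:
        """A toy lexical detection device with a frequency-sensitive threshold."""

        def __init__(self, word, frequency):
            self.word = word
            # Higher-frequency words get lower thresholds ("hair triggers"),
            # so they need less evidence before they fire.
            self.threshold = 1.0 / frequency
            self.activation = 0.0

        def receive_evidence(self, amount):
            """Accumulate perceptual evidence; report whether the logogen fires."""
            self.activation += amount
            return self.activation >= self.threshold

    # Invented relative frequencies: "the" is very common, "lexicon" is rare.
    logogens = [Logogen("the", frequency=100.0), Logogen("lexicon", frequency=2.0)]

    # Feed identical small increments of evidence to each logogen and record
    # how many increments it needs before it is recognized.
    for logogen in logogens:
        steps = 0
        while not logogen.receive_evidence(0.05):
            steps += 1
        print(f"{logogen.word!r} recognized after {steps + 1} evidence steps")
    # The high-frequency word fires almost immediately; the low-frequency word
    # needs many more increments, mirroring slower access to rare words.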
