Applied Linguistics


Mr. BAGHDADI Mounir

Applied linguistics is an interdisciplinary field of study that identifies, investigates, and offers solutions to language-related
real-life problems. Some of the academic fields related to applied linguistics are education, linguistics, psychology,
anthropology, and sociology.

Linguistics

Theoretical linguistics: Generative linguistics · Phonology · Morphology · Syntax · Lexis · Semantics · Pragmatics

Descriptive linguistics: Comparative linguistics · Etymology · Historical linguistics · Phonetics · Sociolinguistics

Applied linguistics: Cognitive linguistics · Computational linguistics · Forensic linguistics · Language acquisition ·
Language assessment · Language development · Language education · Linguistic prescription · Linguistic anthropology ·
Neurolinguistics · Psycholinguistics · Stylistics

Domain

Major branches of applied linguistics include bilingualism and multilingualism, computer-mediated communication (CMC),
conversation analysis, contrastive linguistics, language assessment, literacies, discourse analysis, language pedagogy, second
language acquisition, lexicography, language planning and policies, pragmatics, forensic linguistics, and translation.

Major journals of the field include "Annual Review of Applied Linguistics", "Applied Linguistics", "International Review of
Applied Linguistics", "International Journal of Applied Linguistics", "Issues in Applied Linguistics", and "Language
Learning".

History

The tradition of applied linguistics established itself in part as a response to the narrowing of focus in linguistics with the
advent in the late 1950s of generative linguistics, and has always maintained a socially accountable role, demonstrated by its
central interest in language problems.[1]

Although the field of applied linguistics originated in Europe and the United States, it rapidly flourished in the
international context.

United States

Although it is not clear when the field of applied linguistics began, the first issue of "Language Learning: A Journal of
Applied Linguistics" was published by the University of Michigan in 1948. Applied linguistics first concerned itself with
principles and practices based on linguistics. In its early days, applied linguistics was thought of, at least from outside
the field, as "linguistics applied." In the 1960s, however, applied linguistics expanded to include language assessment,
language policy, and second language acquisition. As early as the 1970s, applied linguistics had become a problem-driven
field in its own right rather than a mere application of theoretical linguistics, and it took on the solution of
language-related problems in the real world. By the 1990s, applied linguistics had broadened to include critical studies
and multilingualism, and its research shifted to "the theoretical and empirical investigation of real world problems in
which language is a central issue." [2]

United Kingdom

The British Association of Applied Linguistics (BAAL) was established in 1967. Its mission is "the advancement of
education by fostering and promoting, by any lawful charitable means, the study of language use, language acquisition and
language teaching and the fostering of interdisciplinary collaboration in this study [...]" [1]

Australia

Australian applied linguistics took as its target the applied linguistics of mother-tongue teaching and the teaching of
English to immigrants. The Australian tradition shows a strong influence of continental Europe and of the USA, rather than
of Britain [3]. The Applied Linguistics Association of Australia (ALAA) was established at a national congress of applied
linguists held in August 1976. [2]

Japan

In 1982, the Japan Association of Applied Linguistics (JAAL) was established within the Japan Association of College English
Teachers (JACET) in order to engage in activities on a more international scale. In 1984, JAAL became an affiliate of the
International Association of Applied Linguistics (AILA).[3]

****************************************************************************************************

Cognitive linguistics
In linguistics and cognitive science, cognitive linguistics (CL) refers to the school of linguistics that understands language
creation, learning, and usage as best explained by reference to human cognition in general. It is characterized by adherence
to three central positions. First, it denies that there is an autonomous linguistic faculty in the mind; second, it understands
grammar in terms of conceptualization; and third, it claims that knowledge of language arises out of language use.[1]

Cognitive linguists deny that the mind has any module for language-acquisition that is unique and autonomous. This stands in
contrast to the work done in the field of generative grammar. Although cognitive linguists do not necessarily deny that part of
the human linguistic ability is innate, they deny that it is separate from the rest of cognition. Thus, they argue that knowledge
of linguistic phenomena — i.e., phonemes, morphemes, and syntax — is essentially conceptual in nature. Moreover, they
argue that the storage and retrieval of linguistic data is not significantly different from the storage and retrieval of other
knowledge, and that the use of language in understanding employs cognitive abilities similar to those used in other non-linguistic tasks.

Departing from the tradition of truth-conditional semantics, cognitive linguists view meaning in terms of conceptualization.
Instead of viewing meaning in terms of models of the world, they view it in terms of mental spaces.

Finally, cognitive linguistics argues that language is both embodied and situated in a specific environment. This can be
considered a moderate offshoot of the Sapir-Whorf hypothesis, in that language and cognition mutually influence one
another, and are both embedded in the experiences and environments of their users.

Areas of study

Cognitive linguistics is divided into three main areas of study:

- Cognitive semantics, dealing mainly with lexical semantics.
- Cognitive approaches to grammar, dealing mainly with syntax, morphology and other traditionally more
  grammar-oriented areas.
- Cognitive phonology.

Aspects of cognition that are of interest to cognitive linguists include:

- Construction grammar and cognitive grammar.
- Conceptual metaphor and conceptual blending.
- Image schemas and force dynamics.
- Conceptual organization: categorization, metonymy, frame semantics, and iconicity.
- Construal and subjectivity.
- Gesture and sign language.
- Linguistic relativity.
- Cognitive neuroscience.

Related work that interfaces with many of the above themes:

- Computational models of metaphor and language acquisition.
- Psycholinguistics research.
- Conceptual semantics, pursued by generative linguist Ray Jackendoff, is related because of its active
  psychological realism and the incorporation of prototype structure and images.

Cognitive linguistics, more than generative linguistics, seeks to mesh together these findings into a coherent whole. A further
complication arises because the terminology of cognitive linguistics is not entirely stable, both because it is a relatively new
field and because it interfaces with a number of other disciplines.

Insights and developments from cognitive linguistics are becoming accepted ways of analysing literary texts, too. Cognitive
Poetics, as it has become known, has become an important part of modern stylistics. The best summary of the discipline as it
currently stands is Peter Stockwell's Cognitive Poetics.[2]

****************************************************************************************************

Computational linguistics
Computational linguistics is an interdisciplinary field dealing with the statistical and/or rule-based modeling of natural
language from a computational perspective. This modeling is not limited to any particular field of linguistics. Traditionally,
computational linguistics was usually performed by computer scientists who had specialized in the application of computers
to the processing of a natural language. Computational linguists often work as members of interdisciplinary teams, including
linguists (specifically trained in linguistics), language experts (persons with some level of ability in the languages relevant to
a given project), and computer scientists. In general, computational linguistics draws upon the involvement of linguists,
computer scientists, experts in artificial intelligence, mathematicians, logicians, cognitive scientists, cognitive psychologists,
psycholinguists, anthropologists and neuroscientists, amongst others.

Origins

Computational linguistics as a field predates artificial intelligence, a field under which it is often grouped. Computational
linguistics originated with efforts in the United States in the 1950s to use computers to automatically translate texts from
foreign languages, particularly Russian scientific journals, into English.[1] Since computers had proven their ability to do
arithmetic much faster and more accurately than humans, it was thought to be only a matter of time before the technical
details could be worked out to give them the same remarkable capacity to process language.

When machine translation (also known as mechanical translation) failed to yield accurate translations right away, automated
processing of human languages was recognized as far more complex than had originally been assumed. Computational
linguistics was born as the name of the new field of study devoted to developing algorithms and software for intelligently
processing language data. When artificial intelligence emerged as a field in the late 1950s and 1960s, computational
linguistics became the sub-division of artificial intelligence dealing with human-level comprehension and production of
natural languages.

In order to translate one language into another, it was observed that one had to understand the grammar of both languages,
including both morphology (the grammar of word forms) and syntax (the grammar of sentence structure). In order to
understand syntax, one had to also understand the semantics and the lexicon (or 'vocabulary'), and even to understand
something of the pragmatics of language use. Thus, what started as an effort to translate between languages evolved into an
entire discipline devoted to understanding how to represent and process natural languages using computers.

Subfields

Computational linguistics can be divided into major areas depending upon the medium of the language being processed,
whether spoken or textual; and upon the task being performed, whether analyzing language (recognition) or synthesizing
language (generation).

Speech recognition and speech synthesis deal with how spoken language can be understood or created using computers.
Parsing and generation are sub-divisions of computational linguistics dealing respectively with taking language apart and
putting it together. Machine translation remains the sub-division of computational linguistics dealing with having computers
translate between languages.

Some of the areas of research that are studied by computational linguistics include:

- Computational complexity of natural language, largely modeled on automata theory, with the application of
  context-sensitive grammars and linearly bounded Turing machines.
- Computational semantics, which comprises defining suitable logics for linguistic meaning representation,
  automatically constructing them, and reasoning with them.
- Computer-aided corpus linguistics.
- Design of parsers or chunkers for natural languages (see the sketch after this list).
- Design of taggers like POS-taggers (part-of-speech taggers).
- Machine translation, one of the earliest and least successful applications of computational linguistics, which
  draws on many subfields.
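
To make two of these research areas concrete, here is a minimal sketch using the open-source NLTK library (an
illustration added for this text, not part of the original article; it assumes NLTK is installed and its standard
tokenizer and tagger data have been downloaded). It tags one sentence for parts of speech, then parses another with a
toy context-free grammar; the grammar and sentences are invented.

    import nltk

    # Part-of-speech tagging: label each token with a tag such as NN (noun).
    # Assumes nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')
    # have been run once beforehand.
    tokens = nltk.word_tokenize("Computational linguistics draws on many subfields.")
    print(nltk.pos_tag(tokens))

    # Parsing with a toy context-free grammar: recover constituent structure.
    grammar = nltk.CFG.fromstring("""
        S  -> NP VP
        NP -> Det N
        VP -> V NP
        Det -> 'the'
        N  -> 'linguist' | 'parser'
        V  -> 'built'
    """)
    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("the linguist built the parser".split()):
        print(tree)   # prints the bracketed phrase structure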

The Association for Computational Linguistics defines computational linguistics as:

...the scientific study of language from a computational perspective. Computational linguists are interested in
providing computational models of various kinds of linguistic phenomena.

****************************************************************************************************

Language acquisition
Language acquisition is the study of the processes through which humans acquire language. By itself, language acquisition
refers to first language acquisition, which studies infants' acquisition of their native language, whereas second language
acquisition deals with acquisition of additional languages in both children and adults.

The process of language acquisition is among the leading aspects that distinguish humans from other organisms. While
many forms of animal language exist, production is often fixed and does not vary much across cultural groups, though
comprehension may be more flexible (primates may learn to pick up bird signals).[1] The complexity, referential richness,
and social contextual variation of human language are not exhibited by any other species.

Early Views on Language Acquisition

One of the complexities of acquiring language is that it is learned by infants from what appears to be very little input. This
has led to a long-standing debate on whether the child is born with some idea of meanings, or whether these are learned
based on social convention.

Plato felt that the word-meaning mapping in some form was innate. Sanskrit grammarians debated over twelve centuries
whether meaning was god-given (possibly innate) or was learned from older convention - e.g. a child learning the word for
cow by listening to trusted speakers talking about cows[2].

In modern times, empiricists like Hobbes and Locke argued that knowledge (and, for Locke, language) emerges ultimately
from abstracted sense impressions. This led to Carnap's Aufbau, an attempt to derive all knowledge from sense data, using
the notion of "remembered as similar" to bind these into clusters, which would eventually map to language.

Under behaviorism, it was argued that language may be learned through a form of operant conditioning. In Verbal
Behavior (1957), B.F. Skinner suggested that the successful use of a sign such as a word or lexical unit, given a certain
stimulus, reinforces its "momentary" or contextual probability.

Generative tradition and a return to nativism

This behaviourist idea was viciously attacked by Noam Chomsky in a 1959 review article that called it "largely mythology"
and a "serious delusion"[3]. Instead, Chomsky argued for a more theoretical approach, based on a study of syntax. Chomsky's
generative grammar ignored semantics and language use, focusing on the set of rules that would generate syntactically
correct strings. This led to a model of acquisition which attempted to discover a grammar from examples of well-formed
sentences, ignoring their semantics and context.

However, it turns out that infinitely many rule-sets or grammars can explain the data [4], so discovering one was very difficult.
Indeed, trained linguists working for decades have not been able to identify a grammar for any human language [5]. Also, the
input available to the child learner was deemed insufficient (the poverty of stimulus argument). These aspects led Chomsky,
Jerry Fodor, Eric Lenneberg and others to suggest that some form of grammar must be innate (the nativist position)[6].

What is innate was claimed to be a universal grammar, initially connected to an organ called the language acquisition device
(LAD)[7]. Subsequently, the word organ was replaced by the phrase "language faculty," and Chomsky suggested that what was
universal across all languages was a set of principles, modified for each particular language by a set of parameters.

Nativists hold that there are some "hidden assumptions" or biases[8] that allow children to quickly figure out what is and is
not possible in the grammar of their native language, and allow them to master that grammar by the age of three. [9]

Empiricist views (opposing nativism)

Since 1980, linguists studying children, such as Melissa Bowerman, and psychologists following Piaget, like Elizabeth Bates
and Jean Mandler, have come to suspect that there may be many learning processes involved in acquisition, and that
ignoring the role of learning may have been a mistake.

In recent years, opposition to the nativist position has multiplied. The debate has centered on whether the inborn capabilities
are language-specific or domain-general, such as those that enable the infant to visually make sense of the world in terms of
objects and actions. The anti-nativist view has many strands, but a frequent theme is that language emerges from usage in
social contexts, using learning mechanisms that are a part of a general cognitive learning apparatus (which is what is innate).
This position has been championed by Elizabeth Bates[10], Catherine Snow, Brian MacWhinney, Michael Tomasello[1],
William O'Grady[11], Michael Ramscar[12] and others. Philosophers, such as Fiona Cowie and Barbara Scholz (with Geoffrey
Pullum)[13] have also argued against certain nativist claims in support of empiricism.

Empiricist theories

Empiricist theories of language acquisition include statistical learning theories of language acquisition, Relational Frame
Theory, functionalist linguistics, usage-based language acquisition, social interactionism (see also social interactionist theory)
and others.

Statistical learning theories of language acquisition

Some language acquisition researchers, such as Elissa Newport, Richard Aslin, and Jenny Saffran, believe that language
acquisition is based primarily on general learning mechanisms, namely statistical learning. The development of connectionist
models that are able to successfully learn words and syntactical conventions [14] supports the predictions of statistical learning
theories of language acquisition, as do empirical studies of children's learning of words and syntax. [15]
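
The core computation behind these proposals can be illustrated with a short sketch (a toy example in the spirit of
Saffran, Aslin, and Newport's segmentation studies, not their actual procedure; the syllable stream and its three
"words" are invented): transitional probabilities are high between syllables inside a word and dip at word boundaries,
so a learner tracking them can begin to segment continuous speech.

    from collections import Counter

    # A continuous syllable stream built from three hypothetical "words":
    # "bi da ku", "pa do ti", and "go la bu".
    stream = ("bi da ku pa do ti go la bu bi da ku go la bu pa do ti "
              "bi da ku pa do ti go la bu").split()

    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])

    def transitional_probability(a, b):
        """P(b follows a) = count(a b) / count(a)."""
        return pair_counts[(a, b)] / first_counts[a]

    print(transitional_probability("bi", "da"))  # 1.0: within a word
    print(transitional_probability("ku", "pa"))  # about 0.67: across a word boundary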

Chunking theories of language acquisition

Chunking theories of language acquisition constitute a group of theories related to statistical learning theories in that they
assume that the input from the environment plays an essential role; however, they postulate different learning mechanisms.
The central idea of these theories is that language development occurs through the incremental acquisition of "chunks"
of elementary constituents, which can be words, phonemes, or syllables. Recently, this approach has been highly
successful in simulating several phenomena in the acquisition of syntactic categories [16] and the acquisition of phonological
knowledge [17]. The approach has several features that make it unique: the models are implemented as computer programs,
which enables clear-cut and quantitative predictions to be made; they learn from naturalistic input, made of actual child-
directed utterances; they produce actual utterances, which can be compared with children’s utterances; and they have
simulated phenomena in several languages, including English, Spanish, and German.
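
As an illustration of the chunking idea in its most generic form (this sketch is not a reimplementation of any specific
published model; the utterances are invented), a learner can repeatedly merge the most frequent adjacent pair of units
into a single stored chunk:

    from collections import Counter

    utterances = [["do", "you", "want", "more"],
                  ["do", "you", "see", "it"],
                  ["do", "you", "want", "it"]]

    def merge_best_pair(utterances):
        """Find the most frequent adjacent pair and fuse it into one chunk."""
        pairs = Counter(p for u in utterances for p in zip(u, u[1:]))
        best = pairs.most_common(1)[0][0]
        merged = []
        for u in utterances:
            out, i = [], 0
            while i < len(u):
                if i + 1 < len(u) and (u[i], u[i + 1]) == best:
                    out.append(u[i] + " " + u[i + 1])  # store the pair as one chunk
                    i += 2
                else:
                    out.append(u[i])
                    i += 1
            merged.append(out)
        return merged, best

    for _ in range(2):  # two incremental merges
        utterances, chunk = merge_best_pair(utterances)
        print("learned chunk:", chunk)  # "do you", then "do you want"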

Relational Frame Theory

Relational Frame Theory (Hayes, Barnes-Holmes, & Roche, 2001) provides a wholly selectionist/learning account of the origin
and development of language competence and complexity. Based upon the principles of Skinnerian behaviorism, RFT posits
that children acquire language purely through interacting with the environment. RFT theorists introduced the concept of
functional contextualism in language learning, which emphasizes the importance of predicting and influencing psychological
events, such as thoughts, feelings, and behaviors, by focusing on manipulable variables in their context. RFT distinguishes
itself from Skinner's work by identifying and defining a particular type of operant conditioning known as derived relational
responding, a learning process that to date appears to occur only in humans possessing a capacity for language. Empirical
studies supporting the predictions of RFT suggest that children learn language via a system of inherent reinforcements,
challenging the view that language acquisition is based upon innate, language-specific cognitive capacities. [18]

Emergentist theories

Emergentist theories, such as MacWhinney's Competition Model, posit that language acquisition is a cognitive process that
emerges from the interaction of biological pressures and the environment. According to these theories, neither nature nor
nurture alone is sufficient to trigger language learning; both of these influences must work together in order to allow children
to acquire a language. The proponents of these theories argue that general cognitive processes subserve language acquisition
and that the end result of these processes is language-specific phenomena, such as word learning and grammar acquisition.
The findings of many empirical studies support the predictions of these theories, suggesting that language acquisition is a
more complex process than many believe.[19]

Criticism of nativist theories

Many criticisms of the basic assumptions of generative theory have been put forth, with little response from its champions.
The concept of a Language Acquisition Device (LAD) is unsupported by evolutionary anthropology, which shows a gradual
adaptation of the human body to the use of language, rather than a sudden appearance of a complete set of binary parameters
(which are common to digital computers but not to neurological systems such as a human brain) delineating the whole
spectrum of possible grammars ever to have existed and ever to exist.

The theory has several hypothetical constructs, such as movement, empty categories, complex underlying structures, and
strict binary branching, that cannot possibly be acquired from any amount of input. Since the theory is, in essence,
unlearnably complex, its proponents conclude that it must be innate. A different theory of language, however, may yield different conclusions.
Examples of alternative theories that do not utilize movement and empty categories are head-driven phrase structure
grammar, lexical functional grammar, and several varieties of construction grammar. While all theories of language
acquisition posit some degree of innateness, a less convoluted theory might involve less innate structure and more learning.
Under such a theory of grammar, the input, combined with both general and language-specific learning capacities, might be
sufficient for acquisition.

Other evidence supporting the nativist position

Creolization

More support for the innateness of language comes from the deaf population of Nicaragua. Until approximately 1986,
Nicaragua had neither education for the deaf nor a formalized sign language. As Nicaraguans attempted to rectify the
situation, they discovered that children past a certain age had difficulty learning any language. Additionally, the adults
observed that the younger children were using gestures unknown to them to communicate with each other. They invited Judy
Kegl, an American linguist from MIT, to help unravel this mystery. Kegl discovered that these children had developed their
own, distinct, Nicaraguan Sign Language with its own rules of "sign-phonology" and syntax. She also discovered some 300
adults who, despite being raised in otherwise healthy environments, had never acquired language, and turned out to be
incapable of learning language in any meaningful sense. While it was possible to teach vocabulary, these individuals were
unable to learn syntax.[9]

Derek Bickerton's (1981) landmark work with Hawaiian pidgin speakers studied immigrant populations in which first-
generation parents spoke highly ungrammatical "pidgin English". Their children, Bickerton found, grew up speaking a
grammatically rich language that was neither English nor the syntax-less pidgin of their parents. Furthermore, the language
exhibited many of the underlying grammatical features of many other natural languages. The language became "creolized",
and is known as Hawaii Creole English. This was taken as powerful evidence for an innate grammar module in children.

Evolution of language

Debate within the nativist position now revolves around how language evolved. Derek Bickerton suggests a single mutation,
a "big bang", linked together previously evolved traits into full language.[20] Others like Steven Pinker argue for a slower
evolution over longer periods of time.[9]

****************************************************************************************************

Neurolinguistics
Neurolinguistics is the study of the neural mechanisms in the human brain that control the comprehension, production, and
acquisition of language. As an interdisciplinary field, neurolinguistics draws methodology and theory from fields such as
neuroscience, linguistics, cognitive science, neurobiology, communication disorders, neuropsychology, and computer
science. Researchers are drawn to the field from a variety of backgrounds, bringing along a variety of experimental
techniques as well as widely varying theoretical perspectives. Much work in neurolinguistics is informed by models in
psycholinguistics and theoretical linguistics, and is focused on investigating how the brain can implement the processes that
theoretical and psycholinguistics propose are necessary in producing and comprehending language. Neurolinguists study the
physiological mechanisms by which the brain processes information related to language, and evaluate linguistic and
psycholinguistic theories, using aphasiology, brain imaging, electrophysiology, and computer modeling.

History

Neurolinguistics is historically rooted in the development in the 19th century of aphasiology, the study of linguistic deficits
(aphasias) occurring as the result of brain damage.[1] Aphasiology attempts to correlate structure to function by analyzing the
effect of brain injuries on language processing.[2] One of the first people to draw a connection between a particular brain area
and language processing was Paul Broca,[1] a French surgeon who conducted autopsies on numerous individuals who had
speaking deficiencies, and found that most of them had brain damage (or lesions) on the left frontal lobe, in an area now
known as Broca's area. Phrenologists had made the claim in the early 19th century that different brain regions carried out
different functions and that language was mostly controlled by the frontal regions of the brain, but Broca's research was
possibly the first to offer empirical evidence for such a relationship,[3][4] and has been described as "epoch-making"[5] and
"pivotal"[3] to the fields of neurolinguistics and cognitive science. Later, Carl Wernicke, after whom Wernicke's area is
named, proposed that different areas of the brain were specialized for different linguistic tasks, with Broca's area handling the
motor production of speech, and Wernicke's area handling auditory speech comprehension. [1][2] The work of Broca and
Wernicke established the field of aphasiology and the idea that language can be studied through examining physical
characteristics of the brain.[4] Early work in aphasiology also benefited from the early twentieth-century work of Korbinian
Brodmann, who "mapped" the surface of the brain, dividing it up into numbered areas based on each area's cytoarchitecture
(cell structure) and function;[6] these areas, known as Brodmann areas, are still widely used in neuroscience today.[7]

The coining of the term "neurolinguistics" has been attributed to Harry Whitaker, who founded the Journal of
Neurolinguistics in 1985.[8][9]

Although aphasiology is the historical core of neurolinguistics, in recent years the field has broadened considerably, thanks in
part to the emergence of new brain imaging technologies (such as PET and fMRI) and time-sensitive electrophysiological
techniques (EEG and MEG), which can highlight patterns of brain activation as people engage in various language tasks;[1][10][11]
electrophysiological techniques, in particular, emerged as a viable method for the study of language in 1980 with the
discovery of the N400, a brain response shown to be sensitive to semantic issues in language comprehension.[12][13] The N400
was the first language-relevant brain response to be identified, and since its discovery EEG and MEG have become
increasingly widely used for conducting language research.[14]

Neurolinguistics as a discipline

Interaction with other fields

Neurolinguistics is closely related to the field of psycholinguistics, which seeks to elucidate the cognitive mechanisms of
language by employing the traditional techniques of experimental psychology; today, psycholinguistic and neurolinguistic
theories often inform one another, and there is much collaboration between the two fields. [13][15]

Much work in neurolinguistics involves testing and evaluating theories put forth by psycholinguists and theoretical linguists.
In general, theoretical linguists propose models to explain the structure of language and how language information is
organized, psycholinguists propose models and algorithms to explain how language information is processed in the mind, and
neurolinguists analyze brain activity to infer how biological structures (such as neurons) carry out those psycholinguistic
processing algorithms.[16] For example, experiments in sentence processing have used the ELAN, N400, and P600 brain
responses to examine how physiological brain responses reflect the different predictions of sentence processing models put
forth by psycholinguists, such as Janet Fodor and Lyn Frazier's "serial" model,[17] and Theo Vosse and Gerard Kempen's
"Unification model."[15] Neurolinguists can also make new predictions about the structure and organization of language based
on insights about the physiology of the brain, by "generalizing from the knowledge of neurological structures to language
structure."[18]

Neurolinguistics research is carried out in all the major areas of linguistics; the main linguistic subfields,
and how neurolinguistics addresses them, are given below.

Phonetics, the study of speech sounds: how the brain extracts speech sounds from an acoustic signal, and how the brain
separates speech sounds from background noise.

Phonology, the study of how sounds are organized in a language: how the phonological system of a particular language is
represented in the brain.

Morphology and lexicology, the study of how words are structured and stored in the mental lexicon: how the brain
accesses words that a person knows.

Syntax, the study of how multiple-word utterances are constructed, and semantics, the study of how meaning is encoded
in language: how the brain combines words into constituents and sentences, and how structural and semantic information
is used in understanding sentences.

Topics studied

Neurolinguistics research investigates several topics, including where language information is processed, how language
processing unfolds over time, how brain structures are related to language acquisition and learning, and how neurophysiology
can contribute to speech and language pathology.

Localizations of language processes

Much work in linguistics has, like Broca's and Wernicke's early studies, investigated the locations of specific language
"modules" within the brain. Research questions include what course language information follows through the brain as it is
processed,[19] whether or not particular areas specialize in processing particular sorts of information, [20] how different brain
regions interact with one another in language processing,[21] and how the locations of brain activation differ when a subject is
producing or perceiving a language other than his or her first language.[22][23][24]

Time course of language processes

Another area of neurolinguistics literature involves the use of electrophysiological techniques to analyze the rapid processing
of language in time.[1] The temporal ordering of specific peaks in brain activity may reflect discrete computational processes
that the brain undergoes during language processing; for example, one neurolinguistic theory of sentence parsing proposes
that three brain responses (the ELAN, N400, and P600) are products of three different steps in syntactic and semantic
processing.[25]

Language acquisition

Another topic is the relationship between brain structures and language acquisition.[26] Research in first language acquisition
has already established that infants from all linguistic environments go through similar and predictable stages (such as
babbling), and some neurolinguistics research attempts to find correlations between stages of language development and
stages of brain development,[27] while other research investigates the physical changes (known as neuroplasticity) that the
brain undergoes during second language acquisition, when adults learn a new language.[28]

Language pathology

Neurolinguistic techniques are also used to study disorders and breakdowns in language—such as aphasia and dyslexia—and
how they relate to physical characteristics of the brain.[23][27]

Brain imaging

Since one of the focuses of this field is the testing of linguistic and psycholinguistic models, the technology used for
experiments is highly relevant to the study of neurolinguistics. Modern brain imaging techniques have contributed greatly to
a growing understanding of the anatomical organization of linguistic functions.[1][23] Brain imaging methods used in
neurolinguistics may be classified into hemodynamic methods, electrophysiological methods, and methods that stimulate the
cortex directly.

Hemodynamic

Hemodynamic techniques take advantage of the fact that when an area of the brain works at a task, blood is sent to supply
that area with oxygen (in what is known as the Blood Oxygen Level-Dependent, or BOLD, response). [29] Such techniques
include PET and fMRI. These techniques provide high spatial resolution, allowing researchers to pinpoint the location of
activity within the brain;[1] temporal resolution (or information about the timing of brain activity), on the other hand, is poor,
since the BOLD response happens much more slowly than language processing.[11][30] In addition to demonstrating which
parts of the brain may subserve specific language tasks or computations,[20][25] hemodynamic methods have also been used to
demonstrate how the structure of the brain's language architecture and the distribution of language-related activation may
change over time, as a function of linguistic exposure.[22][28]

In addition to PET and fMRI, which show which areas of the brain are activated by certain tasks, researchers also use
diffusion tensor imaging (DTI), which shows the neural pathways that connect different brain areas, [31] thus providing insight
into how different areas interact.

Electrophysiological

Electrophysiological techniques take advantage of the fact that when a group of neurons in the brain fire together, they create
an electric dipole or current. The technique of EEG measures this electrical current using sensors on the scalp, while MEG
measures the magnetic fields that are generated by these currents. [32] These techniques are able to measure brain activity from
one millisecond to the next, providing excellent temporal resolution, which is important in studying processes that take place
as quickly as language comprehension and production.[32] On the other hand, the location of brain activity can be difficult to
identify in EEG;[30][33] consequently, this technique is used primarily to study how language processes are carried out, rather than
where. Research using EEG and MEG generally focuses on event-related potentials (ERPs),[30] which are distinct brain
responses (generally realized as negative or positive peaks on a graph of neural activity) elicited in response to a particular
stimulus. Studies using ERP may focus on each ERP's latency (how long after the stimulus the ERP begins or peaks),
amplitude (how high or low the peak is), or topography (where on the scalp the ERP response is picked up by sensors).[34]
Some important and common ERP components include the N400 (a negativity occurring at a latency of about 400
milliseconds),[30] the mismatch negativity,[35] the early left anterior negativity (a negativity occurring at an early latency and a
front-left topography),[36] the P600,[14][37] and the lateralized readiness potential.[38]
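
As a concrete illustration of these three measures, the following sketch (using Python and NumPy on simulated data;
the waveforms and sensor names are invented) finds the latency and amplitude of an N400-like negative peak and
compares amplitudes across two sensors as a crude stand-in for topography:

    import numpy as np

    times = np.arange(-100, 801)  # ms relative to stimulus onset

    def n400_like(scale):
        """A negative deflection peaking near 400 ms, of a given size in microvolts."""
        return -scale * np.exp(-((times - 400) ** 2) / (2 * 60.0 ** 2))

    waveforms = {"central sensor": n400_like(5.0),
                 "frontal sensor": n400_like(2.0)}

    for sensor, wave in waveforms.items():
        peak = np.argmin(wave)  # index of the most negative sample
        print(sensor, "latency:", times[peak], "ms,",
              "amplitude:", round(wave[peak], 1), "uV")
    # The effect is largest at the central sensor; comparing amplitudes across
    # sensors in this way is a rough proxy for scalp topography.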

Experimental design

Experimental techniques

Neurolinguists employ a variety of experimental techniques in order to use brain imaging to draw conclusions about how
language is represented and processed in the brain. These techniques include the mismatch design, violation-based studies,
various forms of priming, and direct stimulation of the brain.

Mismatch paradigm

The mismatch negativity (MMN) is a rigorously documented ERP component frequently used in neurolinguistic experiments.[35][39]
It is an electrophysiological response that occurs in the brain when a subject hears a "deviant" stimulus in a set of
perceptually identical "standards" (as in the sequence s s s s s s s d d s s s s s s d s s s s s d).[40][41] Since the MMN is elicited
only in response to a rare "oddball" stimulus in a set of other stimuli that are perceived to be the same, it has been used to test
how speakers perceive sounds and organize stimuli categorically.[42][43] For example, a landmark study by Colin Phillips and
colleagues used the mismatch negativity as evidence that subjects, when presented with a series of speech sounds with varying
acoustic parameters, perceived all the sounds as either /t/ or /d/ in spite of the acoustic variability, suggesting that the human
brain has representations of abstract phonemes—in other words, the subjects were "hearing" not the specific acoustic
features, but only the abstract phonemes.[40] In addition, the mismatch negativity has been used to study syntactic processing
and the recognition of word category.[35][39][44]
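
The standard analysis behind the MMN can be sketched as follows (simulated data; all numbers are invented): the averaged
response to standards is subtracted from the averaged response to deviants, and the mismatch negativity appears as a
negative deflection in the resulting difference wave.

    import numpy as np

    rng = np.random.default_rng(0)
    times = np.arange(0, 401)  # ms after sound onset

    def trial(deviant):
        """One simulated EEG trial: background noise, plus a negativity if deviant."""
        wave = rng.normal(0.0, 1.0, times.size)
        if deviant:
            wave -= 3.0 * np.exp(-((times - 170) ** 2) / (2 * 40.0 ** 2))
        return wave

    standards = np.mean([trial(False) for _ in range(200)], axis=0)
    deviants = np.mean([trial(True) for _ in range(50)], axis=0)

    difference = deviants - standards  # the MMN difference wave
    print("MMN peaks at about", times[np.argmin(difference)], "ms")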

Violation-based

Many studies in neurolinguistics take advantage of anomalies or violations of syntactic or semantic rules in experimental
stimuli, analyzing the brain responses elicited when a subject encounters these violations. For example, sentences
beginning with phrases such as *the garden was on the worked,[45] which violates an English phrase structure rule, often elicit
a brain response called the early left anterior negativity (ELAN).[36] Violation techniques have been in use since at least 1980,[36]
when Kutas and Hillyard first reported ERP evidence that semantic violations elicited an N400 effect.[46] Using similar
methods, in 1992, Lee Osterhout first reported the P600 response to syntactic anomalies.[47] Violation designs have also been
used for hemodynamic studies (fMRI and PET): Embick and colleagues, for example, used grammatical and spelling
violations to investigate the location of syntactic processing in the brain using fMRI. [20] Another common use of violation
designs is to combine two kinds of violations in the same sentence and thus make predictions about how different language
processes interact with one another; this type of crossing-violation study has been used extensively to investigate how
syntactic and semantic processes interact while people read or hear sentences.[48][49]

Priming

In psycholinguistics and neurolinguistics, priming refers to the phenomenon whereby a subject can recognize a word more
quickly if he or she has recently been presented with a word that is similar in meaning[50] or morphological makeup (i.e.,
composed of similar parts).[51] If a subject is presented with a "prime" word such as doctor and then a "target" word such as
nurse, and the subject has a faster-than-usual response time to nurse, the experimenter may assume that the word nurse had
already been partially accessed in the brain when the word doctor was accessed.[52] Priming is used to investigate a wide variety
of questions about how words are stored and retrieved in the brain[51][53] and how structurally complex sentences are processed.[54]
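
A minimal sketch of how such priming data are typically summarized (the reaction times below are invented for
illustration): the priming effect is the difference between mean response times to a target after an unrelated prime
and after a related one.

    related_rts = [512, 498, 530, 505, 520]    # ms: e.g., doctor -> nurse
    unrelated_rts = [580, 565, 601, 588, 572]  # ms: e.g., table -> nurse

    def mean(xs):
        return sum(xs) / len(xs)

    priming_effect = mean(unrelated_rts) - mean(related_rts)
    print("priming effect:", round(priming_effect), "ms")  # about 68 ms here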

Stimulation

Transcranial magnetic stimulation (TMS), a new noninvasive[55] technique for studying brain activity, uses powerful magnetic
fields that are applied to the brain from outside the head.[56] It is a method of exciting or interrupting brain activity in a
specific and controlled location, and thus is able to imitate aphasic symptoms while giving the researcher more control over
exactly which parts of the brain will be examined.[56] As such, it is a less invasive alternative to direct cortical stimulation,
which can be used for similar types of research but requires that the subject's cortex be surgically exposed, and is thus only used on
individuals who are already undergoing a major brain operation (such as individuals undergoing surgery for epilepsy).[57] The
logic behind TMS and direct cortical stimulation is similar to the logic behind aphasiology: if a particular language function
is impaired when a specific region of the brain is knocked out, then that region must be somehow implicated in that language
function. Few neurolinguistic studies to date have used TMS;[1] direct cortical stimulation and cortical recording (recording
brain activity using electrodes placed directly on the brain) have been used with macaque monkeys to make predictions about
the behavior of human brains.[58]

Subject tasks

In most neurolinguistics experiments, subjects do not simply sit and listen to or watch stimuli, but also are instructed to
perform some sort of task in response to the stimuli.[59] Subjects perform these tasks while recordings (electrophysiological or
hemodynamic) are being taken, usually in order to ensure that they are paying attention to the stimuli. [60] At least one study
has suggested that the task the subject does has an effect on the brain responses and the results of the experiment.[61]

Lexical decision

The lexical decision task involves subjects seeing or hearing an isolated word and answering whether or not it is a real word.
It is frequently used in priming studies, since subjects are known to make a lexical decision more quickly if a word has been
primed by a related word (as in "doctor" priming "nurse").[50][51][52]

Grammaticality judgment, acceptability judgment

Many studies, especially violation-based studies, have subjects make a decision about the "acceptability" (usually
grammatical acceptability or semantic acceptability) of stimuli.[61][62][63][64][65] Such a task is often used to "ensure that subjects
[are] reading the sentences attentively and that they [distinguish] acceptable from unacceptable sentences in the way [the
experimenter] expect[s] them to do."[63]

Experimental evidence has shown that the instructions given to subjects in an acceptability judgment task can influence the
subjects' brain responses to stimuli. One experiment showed that when subjects were instructed to judge the "acceptability" of
sentences they did not show an N400 brain response (a response commonly associated with semantic processing), but that
they did show that response when instructed to ignore grammatical acceptability and only judge whether or not the sentences
"made sense."[61]

Probe verification

Some studies use a "probe verification" task rather than an overt acceptability judgment; in this paradigm, each experimental
sentence is followed by a "probe word," and subjects must answer whether or not the probe word had appeared in the
sentence.[52][63] This task, like the acceptability judgment task, ensures that subjects are reading or listening attentively, but
may avoid some of the additional processing demands of acceptability judgments, and may be used no matter what type of
violation is being presented in the study.[52]

Truth-value judgment

Subjects may be instructed not to judge whether or not the sentence is grammatically acceptable or logical, but whether the
proposition expressed by the sentence is true or false. This task is commonly used in psycholinguistic studies of child
language.[66][67]

Active distraction and double-task

Some experiments give subjects a "distractor" task to ensure that subjects are not consciously paying attention to the
experimental stimuli; this may be done to test whether a certain computation in the brain is carried out automatically,
regardless of whether the subject devotes attentional resources to it. For example, one study had subjects listen to non-
linguistic tones (long beeps and buzzes) in one ear and speech in the other ear, and instructed subjects to press a button when
they perceived a change in the tone; this supposedly caused subjects not to pay explicit attention to grammatical violations in
the speech stimuli. The subjects showed a mismatch response (MMN) anyway, suggesting that the processing of the
grammatical errors was happening automatically, regardless of attention [35]—or at least that subjects were unable to
consciously separate their attention from the speech stimuli.

Another related form of experiment is the double-task experiment, in which a subject must perform an extra task (such as
sequential finger-tapping or articulating nonsense syllables) while responding to linguistic stimuli; this kind of experiment
has been used to investigate the use of working memory in language processing.[68]

****************************************************************************************************

Psycholinguistics
Psycholinguistics or psychology of language is the study of the psychological and neurobiological factors that enable
humans to acquire, use, comprehend and produce language. Initial forays into psycholinguistics were largely philosophical
ventures, due mainly to a lack of cohesive data on how the human brain functioned. Modern research makes use of biology,
neuroscience, cognitive science, and information theory to study how the brain processes language. There are a number of
subdisciplines; for example, as non-invasive techniques for studying the neurological workings of the brain become more and
more widespread, neurolinguistics has become a field in its own right.

Psycholinguistics covers the cognitive processes that make it possible to generate a grammatical and meaningful sentence out
of vocabulary and grammatical structures, as well as the processes that make it possible to understand utterances, words, text,
etc. Developmental psycholinguistics studies children's ability to learn language.

Areas of study

Psycholinguistics is interdisciplinary in nature and is studied by people in a variety of fields, such as psychology, cognitive
science, and linguistics. There are several subdivisions within psycholinguistics that are based on the components that make
up human language.

Linguistic-related areas:

- Phonetics and phonology are concerned with the study of speech sounds. Within psycholinguistics, research focuses
  on how the brain processes and understands these sounds.
- Morphology is the study of word structures, especially the relationships between related words (such as dog and
  dogs) and the formation of words based on rules (such as plural formation).
- Syntax is the study of the patterns which dictate how words are combined to form sentences.
- Semantics deals with the meaning of words and sentences. Where syntax is concerned with the formal structure of
  sentences, semantics deals with the actual meaning of sentences.
- Pragmatics is concerned with the role of context in the interpretation of meaning.

Psychology-related areas:

- The study of word recognition and reading examines the processes involved in the extraction of orthographic,
  morphological, phonological, and semantic information from patterns in printed text.
- Developmental psycholinguistics studies infants' and children's ability to learn language, usually with experimental
  or at least quantitative methods (as opposed to naturalistic observations such as those made by Jean Piaget in his
  research on the development of children).

Theories

Theories about how language works in the human mind attempt to account for, among other things, how we associate
meaning with the sounds (or signs) of language and how we use syntax—that is, how we manage to put words in the proper
order to produce and understand the strings of words we call "sentences." The first of these items—associating sound with
meaning—is the least controversial and is generally held to be an area in which animal and human communication have at
least some things in common (See animal communication). Syntax, on the other hand, is controversial, and is the focus of the
discussion that follows.

There are essentially two schools of thought as to how we manage to create syntactic sentences: (1) syntax is an evolutionary
product of increased human intelligence over time and social factors that encouraged the development of spoken language;
(2) language exists because humans possess an innate ability, an access to what has been called a "universal grammar." This
view holds that the human ability for syntax is "hard-wired" in the brain. This view claims, for example, that complex
syntactic features such as recursion are beyond even the potential abilities of the most intelligent and social non-humans.
(Recursion, for example, includes the use of relative pronouns to refer back to earlier parts of a sentence—"The girl whose
car is blocking my view of the tree that I planted last year is my friend.") The innate view claims that the ability to use syntax
like that would not exist without an innate concept that contains the underpinnings for the grammatical rules that produce
recursion. Children acquiring a language, thus, have a vast search space to explore among possible human grammars, settling,
logically, on the language(s) spoken or signed in their own community of speakers. Such syntax is, according to the second
point of view, what defines human language and makes it different from even the most sophisticated forms of animal
communication.
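
The recursion at issue can be made concrete with a short sketch (plain Python; the phrases are adapted from the example
sentence above): a single rule that can invoke itself generates unboundedly deep nesting from finite means, which is
exactly the property the innatist argument emphasizes.

    def noun_phrase(depth):
        """NP -> "the tree"  |  "the girl whose car is blocking my view of" NP"""
        if depth == 0:
            return "the tree"
        return "the girl whose car is blocking my view of " + noun_phrase(depth - 1)

    print(noun_phrase(1))  # one level of embedding
    print(noun_phrase(2))  # the same rule applied inside its own output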

The first view was prevalent until about 1960 and is well represented by the mentalistic theories of Jean Piaget and the
empiricist Rudolf Carnap. As well, the school of psychology known as behaviorism (see Verbal Behavior (1957) by B.F.
Skinner) puts forth the point of view that language is behavior shaped by conditioned response. The second point of view (the
"innate" one) can fairly be said to have begun with Noam Chomsky's highly critical review of Skinner's book in 1959 in the
pages of the journal Language.[1] That review started what has been termed "the cognitive revolution" in psychology.

The field of psycholinguistics since then has been defined by reactions to Chomsky, pro and con. The pro view still holds that
the human ability to use syntax is qualitatively different from any sort of animal communication. That ability might have
resulted from a favorable mutation (extremely unlikely) or (more likely) from an adaptation of skills evolved for other
purposes. That is, precise syntax might, indeed, serve group needs; better linguistic expression might produce more cohesion,
cooperation, and potential for survival; but precise syntax can only have developed from rudimentary (or no) syntax,
which would have had no survival value and, thus, would not have evolved at all. Thus, one looks for other skills, the
characteristics of which might have later been useful for syntax. In the terminology of modern evolutionary biology, these
skills would be said to be "pre-adapted" for syntax (see also exaptation). Just what those skills might have been is the focus of
recent research—or, at least, speculation.

The con view still holds that language—including syntax—is an outgrowth of hundreds of thousands of years of increasing
intelligence and tens of thousands of years of human interaction. From that view, syntax in language gradually increased
group cohesion and potential for survival. Language—syntax and all—is a cultural artifact. This view challenges the "innate"
view as scientifically unfalsifiable; that is to say, it can't be tested; the fact that a particular, conceivable syntactic structure
does not exist in any of the world's finite repertoire of languages is an interesting observation, but it is not proof of a genetic
constraint on possible forms, nor does it prove that such forms couldn't exist or couldn't be learned.

Contemporary theorists, besides Chomsky, working in the field of theories of psycholinguistics include George Lakoff and
Steven Pinker.

Methodologies

Much methodology in psycholinguistics takes the form of behavioral experiments incorporating a lexical decision task. In
these types of studies, subjects are presented with some form of linguistic input and asked to perform a task (e.g. make a
judgment, reproduce the stimulus, read a visually presented word aloud). Reaction times (usually on the order of
milliseconds) and proportion of correct responses are the most often employed measures of performance. Such experiments
often take advantage of priming effects, whereby a "priming" word or phrase appearing in the experiment can speed up the
lexical decision for a related "target" word later.[2]

Such tasks might include, for example, asking the subject to convert nouns into verbs; e.g., "book" suggests "to write,"
"water" suggests "to drink," and so on. Another experiment might present an active sentence such as "Bob threw the ball to
Bill" and a passive equivalent, "The ball was thrown to Bill by Bob" and then ask the question, "Who threw the ball?" We
might then conclude (as is the case) that active sentences are processed more easily (faster) than passive sentences. More
interestingly, we might also find out (as is the case) that some people are unable to understand passive sentences; we might
then make some tentative steps towards understanding certain types of language deficits (generally grouped under the broad
term, aphasia).[3]

Until the recent advent of non-invasive medical techniques, brain surgery was the preferred way for language researchers to
discover how language works in the brain. For example, severing the corpus callosum (the bundle of nerves that connects the
two hemispheres of the brain) was at one time a treatment for some forms of epilepsy. Researchers could then study the ways
in which the comprehension and production of language were affected by such drastic surgery. Where an illness made brain
surgery necessary, language researchers had an opportunity to pursue their research.

Newer, non-invasive techniques now include brain imaging by positron emission tomography (PET); functional magnetic
resonance imaging (fMRI); event-related potentials (ERPs) in electroencephalography (EEG) and magnetoencephalography
(MEG); and transcranial magnetic stimulation (TMS). Brain imaging techniques vary in their spatial and temporal resolutions
(fMRI has a resolution of a few thousand neurons per pixel, and ERP has millisecond accuracy). Each type of methodology
presents a set of advantages and disadvantages for studying a particular problem in psycholinguistics.

Computational modeling - e.g. the DRC model of reading and word recognition proposed by Coltheart and colleagues [4] - is
another methodology. It refers to the practice of setting up cognitive models in the form of executable computer programs.
Such programs are useful because they require theorists to be explicit in their hypotheses and because they can be used to
generate accurate predictions for theoretical models that are so complex that they render discursive analysis unreliable. One
example of computational modeling is McClelland and Elman's TRACE model of speech perception.[5]
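
As an illustration of what such an executable model can look like, here is a toy interactive-activation loop. It is
loosely inspired by models like TRACE but is not an implementation of TRACE or the DRC model; the lexicon, input, and
parameters are all invented. Candidate words gain activation from matching letters and inhibit one another until the
best-supported candidates dominate.

    words = ["cat", "cap", "cot"]   # a tiny invented lexicon
    activation = {w: 0.0 for w in words}
    evidence = "ca?"                # input with an ambiguous third letter

    for step in range(20):
        updated = {}
        for w in words:
            # Bottom-up support: one unit of evidence per letter matching the input.
            support = sum(1 for a, b in zip(evidence, w) if a == b)
            # Lateral inhibition: competing words suppress one another.
            inhibition = sum(activation[v] for v in words if v != w)
            updated[w] = max(activation[w] + 0.1 * support - 0.05 * inhibition, 0.0)
        activation = updated

    print(activation)  # "cat" and "cap" tie; "cot" is driven to zero

Because the model is a running program, its predictions (here, which candidates survive the competition) follow
mechanically from its stated assumptions, which is the advantage of explicit computational modeling noted above.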

More recently, eye tracking has been used to study online language processing. Beginning with Rayner (1978),[6] the
importance and informativeness of eye movements during reading was established. Tanenhaus et al.[7] have performed a
number of visual-world eye-tracking studies to study the cognitive processes related to spoken language. Since eye
movements are closely linked to the current focus of attention, language processing can be studied by monitoring eye
movements while a subject is presented with linguistic input.

Issues and areas of research

Psycholinguistics is concerned with the nature of the computations and processes that the brain undergoes to comprehend and
produce language. For example, the cohort model seeks to describe how words are retrieved from the mental lexicon when an
individual hears or sees linguistic input.[8][2]

Recent research using new non-invasive imaging techniques seeks to shed light on just where certain language processes
occur in the brain.

There are a number of unanswered questions in psycholinguistics, such as whether the human ability to use syntax is based
on innate mental structures or emerges from interaction with other humans, and whether some animals can be taught the
syntax of human language.

Two other major subfields of psycholinguistics investigate first language acquisition, the process by which infants acquire
language, and second language acquisition. It is much more difficult for adults to acquire second languages than
it is for infants to learn their first language (bilingual infants are able to learn both of their native languages easily). Thus,
sensitive periods may exist during which language can be learned readily.[9] A great deal of research in psycholinguistics
focuses on how this ability develops and diminishes over time. It also seems to be the case that the more languages one
knows, the easier it is to learn more.[10]

The field of aphasiology deals with language deficits that arise because of brain damage. Studies in aphasiology can offer both
advances in therapy for individuals suffering from aphasia and further insight into how the brain processes language.

****************************************************************************************************

Language education
Language education includes the teaching and learning of a language. It can include improving a learner's native language;
however, it is more commonly used with regard to second language acquisition, that is, the learning of a foreign or second
language, and that is the meaning that is treated in this article. As such, language education is a branch of applied linguistics.

History of foreign language education

Ancient to medieval period

Although the need to learn foreign languages is almost as old as human history itself, the origins of modern language
education lie in the study and teaching of Latin. Five hundred years ago, Latin was the dominant language of education,
commerce, religion and government in much of the Western world.

However, by the end of the 16th century, French, Italian and English displaced Latin as the languages of spoken and written
communication. John Amos Comenius tried to reverse this trend, by composing a complete course for learning Latin,
covering the entire school curriculum, culminating in his Opera Didactica Omnia, 1657. In this work, Comenius also outlined
his theory of language acquisition. He is one of the first theorists to write systematically about how languages are learned,
and about pedagogical methodology for language acquisition. He held that language acquisition must be allied with sensation
and experience. Teaching must be oral. The schoolroom should have models of things and, failing that, pictures of them. His
theory led him to publish the world's first illustrated children's book, the Orbis Sensualium Pictus. The study of Latin
diminished from the study of a living language to be used in the real world to a subject in the school curriculum. This decline
brought about a new justification for its study: it was claimed that the study of Latin developed intellectual abilities, and the
study of Latin grammar became an end in and of itself.

"Grammar schools" from the 16th to 18th centuries focused on teaching the grammatical aspects of Classical Latin.
Advanced students continued grammar study with the addition of rhetoric.[1]

18th century

The study of modern languages did not become part of the curriculum of European schools until the 18th century. Modeled on
the purely academic study of Latin, modern language teaching had students do much the same exercises, studying grammatical
rules and translating abstract sentences. Oral work was minimal; instead students were required to memorise grammatical
rules and apply these to decode written texts in the target language. This tradition-inspired method became known as the
'Grammar-Translation Method'.[1]

19th-20th century

Innovation in foreign language teaching began in the 19th century and accelerated very rapidly in the 20th century, leading to a
number of different methodologies, sometimes conflicting, each claiming to be a major improvement over the last or over other
contemporary methods. The earliest applied linguists, such as Jean Manesca, Heinrich Gottfried Ollendorff (1803-1865),
Henry Sweet (1845-1912), Otto Jespersen (1860-1943) and Harold Palmer (1877-1949), worked on setting principles and
approaches based on linguistic and psychological theories, although they left many of the specific practical details for others
to devise.[1] Unfortunately, those looking at the history of foreign language education in the 20th century and its methods of
teaching (such as those related below) might be tempted to think that it is a history of failure. Very few students who major in
foreign languages at U.S. universities manage to reach "minimum professional proficiency", and even the "reading knowledge"
required for a PhD degree is comparable only to what second-year language students read. In addition, very few American
researchers can read and assess information written in languages other than English, and even a number of famous linguists
are monolingual.[2]

However, anecdotal evidence of successful second or foreign language learning is easy to find, and the discrepancy between
these cases and the failure of most language programs helps make second language acquisition research emotionally charged.
Older methods and approaches such as the grammar translation method or the direct method are dismissed and even ridiculed
as newer methods and approaches are invented and promoted as the only and complete solution to the problem of the high
failure rates of foreign language students. Most books on language teaching list the various methods that have been used in
the past, often ending with the author's new method. These new methods usually seem to spring fully formed from their
authors' minds, as they generally give no account of what was done before or of how the new method relates to it. For
example, descriptive linguists seem to claim unhesitatingly that before their work, which led to the audio-lingual method
developed for the U.S. Army in World War II, there were no scientifically based language teaching methods. However, there
is significant evidence to the contrary. It is also often inferred or even stated that older methods were completely ineffective
or have died out entirely, when even the oldest methods are still in use (e.g. the Berlitz version of the direct method). Much of
the reason for this is that proponents of new methods have been so sure that their ideas were so new and so correct that they
could not conceive that the older ones had enough validity to cause controversy; in addition, the emphasis on new scientific
advances has tended to blind researchers to precedents in older work (p. 5).[2]

The development of foreign language teaching has not been linear. There have been two major branches in the field, the
empirical and the theoretical, which have almost completely separate histories, with each gaining ground over the other at one
point in time or another. Examples of researchers on the empiricist side are Jespersen, Palmer and Leonard Bloomfield, who
promoted mimicry and memorization with pattern drills. These methods follow from the basic empiricist position that
language acquisition results largely from habits formed by conditioning and drilling. In its most extreme form, language
learning is seen as basically the same as any other learning in any other species, human language being essentially the same
as communication behaviors seen in other species. On the other side are François Gouin, M. D. Berlitz and Emile de Sauzé,
whose rationalist theories of language acquisition dovetail with linguistic work done by Noam Chomsky and others. These
have led to a wider variety of teaching methods, ranging from the grammar-translation method to Gouin's "series method"
and the direct methods of Berlitz and de Sauzé. With these methods, students generate original and meaningful sentences to
gain a functional knowledge of the rules of grammar. This follows from the rationalist position that humans are born to think
and that language use is a uniquely human trait impossible in other species. Given that human languages share many common
traits, the idea is that humans share a universal grammar which is built into our brain structure. This allows us to create
sentences that we have never heard before but that can still be immediately understood by anyone who understands the
specific language being spoken. The rivalry between the two camps is intense, with little communication or cooperation
between them.[2]

Methods of teaching foreign languages

Language education may take place as a general school subject or in a specialized language school. There are many methods
of teaching languages. Some have fallen into relative obscurity and others are widely used; still others have a small
following, but offer useful insights.

While sometimes confused, the terms "approach", "method" and "technique" are hierarchical concepts. An approach is a set
of correlative assumptions about the nature of language and language learning, but does not involve procedure or provide any
details about how such assumptions should translate into the classroom setting. An approach is often related to second
language acquisition theory.

There are three principal views at this level:

1. The structural view treats language as a system of structurally related elements to code meaning (e.g. grammar).
2. The functional view sees language as a vehicle to express or accomplish a certain function, such as requesting
something.

3. The interactive view sees language as a vehicle for the creation and maintenance of social relations, focusing on
patterns of moves, acts, negotiation and interaction found in conversational exchanges. This view has been fairly
dominant since the 1980s.[1]

A method is a plan for presenting the language material to be learned and should be based upon a selected approach. In order
for an approach to be translated into a method, an instructional system must be designed considering the objectives of the
teaching/learning, how the content is to be selected and organized, the types of tasks to be performed, the roles of students
and the roles of teachers. A technique is a very specific, concrete stratagem or trick designed to accomplish an immediate
objective. Techniques are derived from the controlling method and, less directly, from the approach.[1]

The grammar translation method

The grammar translation method instructs students in grammar, and provides vocabulary with direct translations to
memorize. It was the predominant method in Europe in the 19th century. Most instructors now acknowledge that this method
is ineffective by itself[citation needed]. It is now most commonly used in the traditional instruction of the classical languages.

At school, the teaching of grammar consists of a process of training in the rules of a language which should make it possible
for all students to express their opinions correctly, to understand the remarks addressed to them and to analyze the texts they
read. The objective is that, by the time they leave school, pupils will command the tools of the language, namely vocabulary,
grammar and orthography, and be able to read, understand and write texts in various contexts. The teaching of grammar
examines texts, and develops awareness that language constitutes a system which can be analyzed. This knowledge is
acquired gradually, by working through the facts of the language and its syntactic mechanisms, going from the simplest to the
most complex. Exercises following the program of the course must be practiced untiringly to allow the assimilation of the
rules stated in the course.[citation needed] That supposes that the teacher corrects the exercises. The pupil can follow his progress in
practicing the language by comparing his results, and can thus adapt the grammatical rules and, little by little, master the
internal logic of the syntactic system. The grammatical analysis of sentences constitutes the objective of the teaching of
grammar at school. Its practice makes it possible to recognize a text as a coherent whole and is a precondition for the learning
of a foreign language. Grammatical terminology serves this objective. Grammar makes it possible for everyone to understand
how the mother tongue functions, and so gives them the capacity to communicate their thoughts.

The direct method

The direct method, sometimes also called the natural method, is a method that refrains from using the learners' native
language and uses only the target language. It was established in Germany and France around 1900 and is best represented by
the methods devised by Berlitz and de Sauzé, although neither claimed originality, and it has been re-invented under other
names.[2] The direct method operates on the idea that second language learning must be an imitation of first language learning,
as this is the natural way humans learn any language: a child never relies on another language to learn its first language, and
thus the mother tongue is not necessary for learning a foreign language. This method places great stress on correct
pronunciation and on the target language from the outset. It advocates the teaching of oral skills at the expense of every
traditional aim of language teaching. Such methods rely on directly representing an experience in a linguistic construct rather
than relying on abstractions like mimicry, translation and memorizing grammar rules and vocabulary.[2]

According to this method, printed language and text must be kept away from the second-language learner for as long as
possible, just as a first-language learner does not use the printed word until he has a good grasp of speech. Learning of writing
and spelling should be delayed until after the printed word has been introduced, and grammar and translation should likewise
be avoided because they involve the application of the learner's first language. All of the above are to be avoided because they
hinder the acquisition of good oral proficiency.

The method relies on a step-by-step progression based on question-and-answer sessions which begin with naming common
objects such as doors, pencils, floors, etc. It provides a motivating start as the learner begins using a foreign language almost
immediately. Lessons progress to verb forms and other grammatical structures with the goal of learning about thirty new
words per lesson.[2]

The series method

In the 19th century, François Gouin went to Hamburg to learn German. Based on his experience as a Latin teacher, he thought
the best way to do this would be to memorize a German grammar book and a table of its 248 irregular verbs. However, when
he went to the academy to test his new language skills, he was disappointed to find that he could not understand anything.
Trying again, he similarly memorized the 800 root words of the language as well as re-memorizing the grammar and verb
forms. However, the results were the same. During this time, he had isolated himself from the people around him, so he next
tried to learn by listening, imitating and conversing with the Germans around him, but found that his carefully constructed
sentences often caused native German speakers to laugh. Again he tried a more classical approach, translation, and even
memorized the entire dictionary, but had no better luck.[2]
When he returned home, he found that his three-year-old nephew had learned to speak French. He noticed the boy was very
curious and upon his first visit to a mill, he wanted to see everything and be told the name of everything. After digesting the
experience silently, he then reenacted his experiences in play, talking about what he learned to whoever would listen or to
himself. Gouin decided that language learning was a matter of transforming perceptions into conceptions, using language to
represent what one experiences. Language is not an arbitrary set of conventions but a way of thinking and representing the
world to oneself. It is not a conditioning process, but one in which the learner actively organizes his perceptions into
linguistic concepts.[2]

Variation of direct method

The series method is a variety of the direct method (above) in that experiences are directly connected to the target language.
Gouin felt that such direct "translation" of experience into words makes for a "living language" (p. 59). Gouin also noticed
that children organize concepts in succession in time, relating a sequence of concepts in the same order. Gouin's method is
based on arranging concepts in series. Gouin suggested that students learn a language more quickly and retain it better if it is
presented through a chronological sequence of events. Students learn sentences based on an action, such as leaving a house, in
the order in which the actions would be performed. Gouin found that if the series of sentences is shuffled, memorization
becomes nearly impossible. In this, Gouin anticipated the psycholinguistic theory of the 20th century. He found that people
memorize events in a logical sequence, even if they are not presented in that order. He also discovered a second insight into
memory called "incubation": linguistic concepts take time to settle in the memory. The learner must use new concepts
frequently after presentation, either by thinking or by speaking, in order to master them. His last crucial observation was that
language is learned in sentences, with the verb as the most crucial component. Gouin would write a series in two columns:
one with the complete sentences and the other with only the verb. With only the verb elements visible, he would have
students recite the sequence of actions in full sentences of no more than twenty-five sentences. Another exercise involved
having the teacher solicit a sequence of sentences by asking the student what he or she would do next. While Gouin believed
that language was rule-governed, he did not believe it should be explicitly taught.[2]

His course was organized around elements of human society and the natural world. He estimated that a language could be
learned with 800 to 900 hours of instruction over a series of 4000 exercises, with no homework. The idea was that each of the
exercises would force the student to think about the vocabulary in terms of its relationship with the natural world. While there
is evidence that the method can work extremely well, it has some serious flaws. One is the teaching of subjective language,
where the students must make judgments about what is experienced in the world (e.g. "bad" and "good"), as such words do
not relate easily to one single common experience. However, the real weakness is that the method is based entirely on one
experience of a single three-year-old. Gouin did not observe the child's earlier language development, such as naming (where
only nouns are learned) or the role that stories have in human language development. What distinguishes the series method
from the direct method is that vocabulary must be learned by translation from the native language, at least in the beginning.[2]

The oral approach/Situational language teaching

This approach was developed from the 1930s to the 1960s by British applied linguists such as Harold Palmer and A.S.
Hornsby. They were familiar with the Direct method as well as the work of 19th-century applied linguists such as Otto
Jespersen and Daniel Jones, but attempted to develop a more scientifically founded approach to teaching English than was
evidenced by the Direct Method.[1]

A number of large-scale investigations about language learning and the increased emphasis on reading skills in the 1920s led
to the notion of "vocabulary control". It was discovered that languages have a core basic vocabulary of about 2,000 words
that occur frequently in written texts, and it was assumed that mastery of these would greatly aid reading comprehension.
Parallel to this was the notion of "grammar control", emphasizing the sentence patterns most commonly found in spoken
conversation. Such patterns were incorporated into dictionaries and handbooks for students. The principal difference between
the oral approach and the direct method was that methods devised under this approach would have theoretical principles
guiding the selection of content, gradation of difficulty of exercises and the presentation of such material and exercises. The
main proposed benefit was that such theoretically-based organization of content would result in a less-confusing sequence of
learning events with better contextualization of the vocabulary and grammatical patterns presented. [1] Last but not least, all
language points were to be presented in "situations". Emphasis on this point led to the approach's second name. Such learning
in situ would lead to students' acquiring good habits to be repeated in their corresponding situations. Teaching methods stress
PPP (presentation (introduction of new material in context), practice (a controlled practice phase) and production (activities
designed for less-controlled practice)).[1]
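
The "vocabulary control" idea described above is, at bottom, a word-frequency computation. A minimal Python sketch, with a placeholder corpus and a tiny cutoff standing in for the real 2,000-word lists, might look like this:

# Minimal sketch of "vocabulary control": rank the words of a corpus by
# frequency to identify a core teaching vocabulary. Corpus and cutoff
# are placeholders for illustration.
from collections import Counter

corpus = """the cat sat on the mat and the dog sat on the rug
the cat and the dog ran to the door""".split()

CORE_SIZE = 5   # historical word lists used roughly the 2,000 most frequent words

frequencies = Counter(corpus)
core_vocabulary = [word for word, _ in frequencies.most_common(CORE_SIZE)]
print(core_vocabulary)   # e.g. ['the', 'cat', 'sat', 'on', 'and']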

Although this approach is all but unknown among language teachers today, elements of it have had long-lasting effects on
language teaching, being the basis of many widely used English as a Second/Foreign Language textbooks as late as the 1980s,
and elements of it still appear in current texts.[1] Many of the structural elements of this approach were called into question in
the 1960s, causing modifications of this method that led to communicative language teaching. However, its emphasis on
oral practice, grammar and sentence patterns still finds widespread support among language teachers and remains popular in
countries where foreign language syllabuses are still heavily based on grammar. [1]

The audio-lingual method

The audio-lingual method was developed due to the U.S.'s entry into World War II. The government suddenly needed people
who could carry on conversations fluently in a variety of languages such as German, French, Italian, Chinese, Malay, etc.,
and could work as interpreters, code-room assistants, and translators. However, since foreign language instruction in that
country was heavily focused on reading instruction, no textbooks, other materials or courses existed at the time, so new
methods and materials had to be devised. The Army Specialized Training Program created intensive programs based on the
techniques Leonard Bloomfield and other linguists devised for Native American languages, where students interacted
intensively with native speakers and a linguist in guided conversations designed to decode the language's basic grammar and
learn its vocabulary. This "informant method" had great success with its small class sizes and motivated learners.[1]

The Army Specialized Training Program only lasted a few years, but it gained a lot of attention from the popular press and
the academic community. Charles Fries set up the first English Language Institute at the University of Michigan, to train
English as a second or foreign language teachers. Similar programs were created later at Georgetown University and the
University of Texas, among others, based on the methods and techniques used by the military. The developing method had
much in common with the British oral approach, although the two developed independently. The main difference was the
developing audio-lingual method's allegiance to structural linguistics, focusing on grammar and contrastive analysis to find differences
between the student's native language and the target language in order to prepare specific materials to address potential
problems. These materials strongly emphasized drill as a way to avoid or eliminate these problems. [1]

This first version of the method was originally called the oral method, the aural-oral method or the structural approach. The
audio-lingual method truly began to take shape near the end of the 1950s, this time due to government pressure resulting from
the space race. Courses and techniques were redesigned to add insights from behaviorist psychology to the structural
linguistics and contrastive analysis already being used. Under this method, students listen to or view recordings of language
models acting in situations. Students practice with a variety of drills, and the instructor emphasizes the use of the target
language at all times. The idea is that by reinforcing 'correct' behaviors, students will make them into habits. [1]

Due to weaknesses in performance[3], and more importantly because of Noam Chomsky's theoretical attack on language
learning as a set of habits, audio-lingual methods are rarely the primary method of instruction today. However, elements of
the method still survive in many textbooks.[1]

Communicative language teaching

Communicative language teaching (CLT) is an approach to the teaching of languages that emphasizes interaction as both the
means and the ultimate goal of learning a language. Despite a number of criticisms [4] it continues to be popular, particularly in
Europe, where constructivist views on language learning and education in general dominate academic discourse.

In recent years, Task-based language learning (TBLL), also known as task-based language teaching (TBLT) or task-based
instruction (TBI), has grown steadily in popularity. TBLL is a further refinement of the CLT approach, emphasizing the
successful completion of tasks as both the organizing feature and the basis for assessment of language instruction. Dogme
language teaching shares a philosophy with TBL, although it differs in approach.[5] Dogme is a communicative approach to
language teaching that encourages teaching without published textbooks, focusing instead on conversational communication
among the learners and the teacher.[6]

Language immersion

Language immersion puts students in a situation where they must use a foreign language, whether or not they know it. This
creates fluency, but not accuracy of usage. French-language immersion programs are common in Canada in the provincial
school systems, as part of the drive towards bilingualism.

Minimalist/methodist

Paul Rowe's minimalist/methodist approach is underpinned by Paul Nation's three actions of successful
ESL teachers.[citation needed] Initially it was written specifically for unqualified, inexperienced people teaching in EFL situations.
However, experienced language teachers are also responding positively to its simplicity. Language items are usually provided
using flashcards. There is a focus on language-in-context and multi-functional practices.

Directed practice

Directed practice has students repeat phrases. This method is used by U.S. diplomatic courses. It can quickly provide a
phrasebook-type knowledge of the language. Within these limits, the student's usage is accurate and precise. However, the
student's choice of what to say is not flexible.

Learning by teaching (LdL)

Learning by teaching is a widespread method in Germany, developed by Jean-Pol Martin. The students take the teacher's role
and teach their peers.

Proprioceptive language learning method

The Proprioceptive language learning method (commonly called the Feedback training method) emphasizes the simultaneous
development of cognitive, motor, neurological, and auditory functions as all being parts of a comprehensive language learning process.
Lesson development is as concerned with the training of the motor and neurological functions of speech as it is with
cognitive (memory) functions. It further emphasizes that training of each part of the speech process must be simultaneous.
The Proprioceptive Method, therefore, emphasizes spoken language training, and is primarily used by those wanting to
perfect their speaking ability in a target language.

The Proprioceptive Method virtually stands alone as a Second Language Acquisition (SLA) method in that it bases its
methodology on a speech pathology model. It stresses that mere knowledge (in the form of vocabulary and grammar
memory) is not the sole requirement for spoken language fluency, but that the mind receives real-time feedback from both
hearing and neurological receptors of the mouth and related organs in order to constantly regulate the store of vocabulary and
grammar memory in the mind during speech.

For optimum effectiveness, it maintains that each of the components of second language acquisition must be encountered
simultaneously. It therefore advocates that all memory functions, all motor functions and their neurological receptors, and all
feedback from both the mouth and ears must occur at exactly the same moment in time of the instruction. Thus, according to
the Proprioceptive Method, all student participation must be done at full speaking volume. Further, in order to train memory,
after initial acquaintance with the sentences being repeated, all verbal language drills must be done as a response to the
narrated sentences which the student must repeat (or answer) entirely apart from reading a text.[7]

Silent Way

The Silent Way[8] is a discovery learning approach, invented by Caleb Gattegno in the 1950s. It is often considered to be one
of the humanistic approaches. It is called The Silent Way because the teacher is usually silent, leaving room for the students
to talk and explore the language.

Pimsleur method

Pimsleur language learning system is based on the research of and model programs developed by American language teacher
Paul Pimsleur. It involves recorded 30-minute lessons to be done daily, with each lesson typically featuring a dialog, revision,
and new material. Students are asked to translate phrases into the target language, and occasionally to respond in the target
language to lines spoken in the target language. The instruction starts in the student's language but gradually changes to the
target language. Several all-audio programs now exist to teach various languages using the Pimsleur Method. The syllabus is
the same in all languages.

Michel Thomas Method

Michel Thomas Method is an audio-based teaching system developed by Michel Thomas, a language teacher in the USA. It
was originally done in person, although since his death it is done via recorded lessons. The instruction is done entirely in the
student's own language, although the student's responses are always expected to be in the target language. The method
focuses on constructing long sentences with correct grammar and building student confidence. There is no listening practice,
and there is no reading or writing. The syllabus is ordered around the easiest and most useful features of the language, and as
such is different for each language.[9]

Other

Several methodologies that emphasise understanding language in order to learn, rather than producing it, exist as varieties of
the comprehension approach. These include Total Physical Response and the natural approach of Stephen Krashen and Tracy
D. Terrell.

A great deal of language-learning software exists that uses the multimedia capabilities of computers.

Learning strategies

Code switching

Code switching, that is, changing between languages at some point in a sentence or utterance, is a commonly used
communication strategy among language learners and bilinguals. While traditional methods of formal instruction often
discourage code switching, students, especially those placed in a language immersion situation, often use it. If viewed as a
learning strategy, wherein the student uses the target language as much as possible but reverts to their native language for any
element of an utterance that they are unable to produce in the target language, then it has the advantage of encouraging
fluency development, motivation and a sense of accomplishment, by enabling the student to discuss topics of interest to
him or her early in the learning process, before the requisite vocabulary has been memorized. It is particularly effective for
students whose native language is English, due to the high probability of a simple English word or short phrase being
understood by the conversational partner.

Blended learning

Blended learning combines face-to-face teaching with distance education, frequently electronic, either computer-based or
web-based. It has been a major growth point in the ELT (English Language Teaching) industry over the last ten years.

Some people, though, use the phrase 'Blended Learning' to refer to learning taking place while the focus is on other activities.
For example, playing a card game that requires calling for cards may allow blended learning of numbers (1 to 10).

Skills teaching

When talking about language skills, the four basic ones are: listening, speaking, reading and writing. However, other, more
socially-based skills have been identified more recently such as summarizing, describing, narrating etc. In addition, more
general learning skills such as study skills and knowing how one learns have been applied to language classrooms. [10]

In the 1970s and 1980s the four basic skills were generally taught in isolation in a very rigid order, such as listening before
speaking. However, since then, it has been recognized that we generally use more than one skill at a time, leading to more
integrated exercises.[10] Speaking is a skill that is often underrepresented in the traditional classroom. This may be because it
is considered a less academic skill than writing, and because it is transient and improvised (and thus harder to assess and
teach through rote imitation).

More recent textbooks stress the importance of students working with other students in pairs and groups, and sometimes as an
entire class. Pair and group work give opportunities for more students to participate more actively. However, supervision of
pairs and groups is important to make sure everyone participates as equally as possible. Such activities also provide
opportunities for peer teaching, where weaker learners can find support from stronger classmates.[10]

Language education by region

Europe

Foreign language education

The European Commission's 1995 White Paper "Teaching and learning – Towards the learning society" stated that "upon
completing initial training, everyone should be proficient in two Community foreign languages". The Lisbon Summit of 2000
defined languages as one of the five key skills.

In fact, even in 1974, at least one foreign language was compulsory in all but two European member states (Ireland and the
United Kingdom, apart from Scotland). By 1998 nearly all pupils in Europe studied at least one foreign language as part of
their compulsory education, the only exception being the Republic of Ireland, where primary and secondary schoolchildren
learn both Irish and English, but neither is considered a foreign language although a third European language is also taught.
Pupils in upper secondary education learn at least two foreign languages in Belgium's Flemish community, Denmark,
Netherlands, Germany, Luxembourg, Finland, Sweden, Switzerland, Greece, Cyprus, Estonia, Latvia, Lithuania, Poland,
Romania, Serbia, Slovenia and Slovakia.

On average in Europe, at the start of foreign language teaching, pupils have lessons for three to four hours a week.
Compulsory lessons in a foreign language normally start at the end of primary school or the start of secondary school. In
Luxembourg, Norway, Italy and Malta, however, the first foreign language starts at age six, and in Belgium's Flemish
community at age 10. About half of the EU's primary school pupils learn a foreign language.

English is the language taught most often at lower secondary level in the EU. 93% of children there learn English. At upper
secondary level, English is even more widely taught. French is taught at lower secondary level in all EU countries except
Slovenia. A total of 33% of European Union pupils learn French at this level. At upper secondary level the figure drops
slightly to 28%. German is taught in nearly all EU countries. A total of 13% of pupils in the European Union learn German in
lower secondary education, and 20% learn it at an upper secondary level.

Despite the high rate of foreign language teaching in schools, the number of adults claiming to speak a foreign language is
generally lower than might be expected. This is particularly true of native English speakers: in 2004 a British survey showed
that only one in 10 UK workers could speak a foreign language. Less than 5% could count to 20 in a second language, for
example. 80% said they could work abroad anyway, because "everyone speaks English." In 2001, a European Commission
survey found that 65.9% of people in the UK spoke only their native tongue.

Since the 1990s, the Common European Framework of Reference for Languages has tried to standardize the learning of
languages across Europe (one of the first results being UNIcert).

Bilingual education

In some countries, learners have lessons taken entirely in a foreign language: for example, more than half of European
countries with a minority or regional language community use partial immersion to teach both the minority and the state
language.

In the 1960s and 1970s, some central and eastern European countries created a system of bilingual schools for well-
performing pupils. Subjects other than languages were taught in a foreign language. In the 1990s this system was opened to
all pupils in general education, although some countries still make candidates sit an entrance exam. At the same time,
Belgium's French community, France, the Netherlands, Austria and Finland also started bilingual schooling schemes.
Germany meanwhile had established some bilingual schools in the late 1960s.

United States

In most school systems, foreign language is taken in high school, with many schools requiring one to three years of foreign
language in order to graduate. In some school systems, foreign language is also taught during middle school, and recently,
many elementary schools have begun teaching foreign languages as well.

The most popular language is Spanish, due to the large number of recent Spanish-speaking immigrants to the United States
(see Spanish in the United States). Other popular languages are French, German, Italian, and Japanese. Latin used to be more
common, but has fallen from favor somewhat. During the Cold War, the United States government pushed for Russian
education, and some schools still maintain their Russian programs [1]. Other languages recently gaining popularity are
Chinese (especially Mandarin) and Arabic.

Australia

Prior to European colonisation, there were hundreds of Aboriginal languages, taught in a traditional way. The arrival of a
substantial number of Irish in the first English convict ships meant that European Australia was not ever truly monolingual.
When the goldrushes of the 1850s trebled the white population, they brought many more Welsh speakers, who had their own
language newspapers through to the 1870s, but the absence of language education meant that these Celtic languages never
flourished.

Waves of European migration after World War II brought "community languages," sometimes with schools. However, from
1788 until modern times it was generally expected that immigrants would learn English and abandon their first language
(Clyne, 1997). The wave of multicultural policies since the 1970s has softened aspects of these attitudes.

In 1982 a bipartisan committee of Australian parliamentarians was appointed; it identified a number of guiding principles
that would support a National Policy on Languages (NPL). Its thrust was towards bilingualism for all Australians, for reasons
of fairness, diversity and economics.

In the 1990s the Australian Languages and Literacy Policy (ALLP) was introduced, building on the NPL, with extra attention
being given to the economic motivations of second language learning. A distinction was drawn between priority
languages and community languages. The ten priority languages identified were Mandarin, French, German, Modern Greek,
Indonesian, Japanese, Italian, Korean, Spanish and Aboriginal languages.

However, Australia's federal system meant that the NPL and ALLP direction was really an overall policy from above, without
much engagement from the states and territories. The NALSAS (National Asian Languages and Studies in Australian Schools)
strategy united Australian Government policy with that of the states and territories. It focused on four targeted languages:
Mandarin, Indonesian, Japanese and Korean, to be integrated into studies of Society and Environment, English and the Arts.
By 2000, the top ten languages enrolled in the final high school year were, in descending order: Japanese, French, German,
Chinese, Indonesian, Italian, Greek, Vietnamese, Spanish and Arabic. In 2002, only about 10% of Year 12 students included at
least one Language Other Than English (LOTE) among their course choices.

Language study holidays

An increasing number of people now combine holidays with language study in a country where the language is spoken natively. This enables the
student to experience the target culture by meeting local people. Such a holiday often combines formal lessons, cultural
excursions, leisure activities, and a homestay, perhaps with time to travel in the country afterwards. Language study holidays
are popular across Europe and Asia due to the ease of transportation and the variety of nearby countries. These holidays have
become increasingly popular in South America, in countries such as Ecuador and Peru.

With the increasing prevalence of international business transactions, it is now important to have multiple languages at one's
disposal. This is also evident in businesses outsourcing their departments to Eastern Europe.

Language education on the Internet

The Internet has emerged as a powerful medium to teach and learn foreign languages. Websites that provide language
education on the Internet may be broadly classified under three categories:

1. Language exchange websites
2. Language portals
3. Virtual online schools

Language exchange websites

Language exchange facilitates language learning by placing users with complementary language skills in contact with each
other. For instance, User A is a native Spanish speaker and wants to learn English; User B is a native English speaker and
wants to learn Spanish. Language exchange websites essentially treat knowledge of a language as a commodity, and provide
a market-like environment for the commodity to be exchanged. Users typically contact each other via text chat, voice-over-
IP, or email.
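
The matching logic such sites depend on can be illustrated with a short sketch. The user records and field names below are hypothetical; the point is only the complementary-pair test.

# Hedged sketch: pairing users with complementary language skills, as a
# language exchange site might. All user records are invented.

users = [
    {"name": "Ana",  "native": "Spanish",  "learning": "English"},
    {"name": "Ben",  "native": "English",  "learning": "Spanish"},
    {"name": "Chen", "native": "Mandarin", "learning": "English"},
    {"name": "Dana", "native": "English",  "learning": "Mandarin"},
]

def complementary_pairs(users):
    """Yield name pairs where each speaks the language the other is learning."""
    for i, a in enumerate(users):
        for b in users[i + 1:]:
            if a["native"] == b["learning"] and b["native"] == a["learning"]:
                yield a["name"], b["name"]

print(list(complementary_pairs(users)))   # [('Ana', 'Ben'), ('Chen', 'Dana')]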

Language exchanges have also been viewed as a helpful tool to aid language learning at language schools. Language
exchanges tend to benefit oral proficiency, fluency, colloquial vocabulary acquisition, and vernacular usage, rather than
formal grammar or writing skills.

Portals that provide language content

There are a number of Internet portals that offer language content, some in interactive form. Content typically includes
phrases with translations in multiple languages, text-to-speech (TTS) engines, and learning activities such as quizzes or
puzzles based on language concepts. While some of this content is free, a large fraction of what is on offer is available for a
fee, especially where the content is tailored to the needs of language tests such as the TOEFL for the United States.

In general, language education on the Internet provides a good supplement to real world language schooling. However, the
commercial nature of the Internet, including pop-up and occasionally irrelevant text or banner ads might be seen as a
distraction from a good learning experience.

Virtual World-based Language Schools

These are schools operating online in MMOs and virtual worlds. Unlike other forms of language education on the Internet,
virtual-world schools are usually designed as an alternative to physical schools. In 2005, the virtual world Second Life started
to be used for foreign language tuition.[11][12]

English as a foreign language has gained an online presence, with several schools, including Languagelab.com, operating
entirely online, and with the British Council focusing on the Teen Grid. Spain's language and cultural institute, the Instituto
Cervantes, has an "island" on Second Life. A list of educational projects (including some language schools) in Second Life
can be found on the SimTeach site.

Acronyms and abbreviations

See also: English language learning and teaching for information on language teaching acronyms and abbreviations which
are specific to English.

 CALL: Computer-assisted language learning
 CLIL: Content and Language Integrated Learning
 CLL: Community language learning
 DELF: Diplôme d'études en langue française
 EFL: English as a foreign language
 ELT: English language teaching
 FLL: Foreign language learning
 FLT: Foreign language teaching
 L1: First language, native language, mother tongue
 L2: Second language (or any additional language)
 LDL: Lernen durch Lehren (Learning by teaching)
 SLA: Second language acquisition
 TELL: Technology-enhanced language learning
 TEFL: Teaching English as a Foreign Language
 TEFLA: Teaching English as a Foreign Language to Adults
 TPR: Total Physical Response
 TPRS: Total Physical Response Storytelling
 UNIcert: a European language education system of many universities based on the Common European Framework
of Reference for Languages

****************************************************************************************************

Language development
Since language development is a crucial part of human cognition, understanding it is an important step towards understanding
the foundations of linguistics and its various components. As to universality, the cognitive aspects of communication through
language are understood to be similar among primates, non-primates, and humans in some respects, and to differ in others, in
terms of:

 Predisposed communication
 Holophrastic utterances
 Language acquisition
 Telegraphic utterances
 Morphosyntactic components
 Pragmatic components

Language development is a process starting early in human life, when a person begins to acquire language by learning it as it
is spoken and by mimicry. Children's language development moves from simple to complex[citation needed]. Infants start without
language. Yet by four months of age, babies can read lips and discriminate speech sounds. The early vocalizations that infants
produce are called babbling.

Usually, language starts off as recall of simple words without associated meaning, but as children grow, words acquire
meaning, with connections between words formed. In time, sentences start as words are joined together to create logical
meaning. As a person gets older, new meanings and new associations are created and vocabulary increases as more words are
learned.

Infants use their bodies, vocal cries and other preverbal vocalizations to communicate their wants, needs and dispositions.
Even though most children begin to vocalize and eventually verbalize at various ages and at different rates, they learn their
first language without conscious instruction from parents or caretakers. In fact, research has shown that the earliest learning
begins in utero when the fetus can recognize the sounds and speech patterns of its mother's voice.

Biological preconditions

Linguists do not agree on the biological factors contributing to language development; however, most do agree that the ability
to acquire such a complicated system is unique to the human species. Furthermore, many believe that our ability to learn
spoken language may have been developed through the evolutionary process and that the foundation for language may be
passed down genetically. The ability to speak and understand human language requires a specific vocal apparatus as well as a
nervous system with certain capabilities.
Some evidence that language is biological includes:

 there are proven areas of the brain that are responsible for language production and comprehension (Broca's Area
and Wernicke's Area)
 during brain lateralization, there seems to be a sensitive period for speech production
 linguist Noam Chomsky (1957) proposed that humans are biologically prewired to learn language at a certain time
and in a certain way; he argued that children are born with a Language Acquisition Device (LAD)[1]

Environmental influences

"The behavioral view of language development is no longer considered a viable explanation of how children acquire
language, yet a great deal of research describes ways in which a children's environmental experiences influence their
language skills. Michael Tomasello (2003, 2006; Tomasello & Carpenter, 2007) stresses that young children are intensely
interested in their social world and that early in their development they can understand that intentions of other people." [1]

"One component of the young child's linguistic environment is (child-directed speech)also known as baby talk or motherese,
which is language spoken in a higher pitch than normal with simple words and sentences. Athough the importance of its role
in developing language has been debated many linguists argue it to have the important function of capturing the infant's
attention and maintaining communication. Adults use strategies other than child-directed speech like recasting, expanding,
and labeling:" Recasting is rephrasing something the child has said, perhaps turning it into a question or restating the child's
immature utterance in the form of a fully grammatical sentence. Expanding is the restating, in a linguistically sophisticated
form, what a child has said. Labeling is identifying the names of objects[1]

Social preconditions

It is crucial that children are allowed to socially interact with other people who can vocalize and respond to questions. For
language acquisition to develop successfully, children must be in an environment that allows them to communicate socially
in that language.

There are a few different theories as to why and how children develop language. The most popular, yet heavily debated,
explanation is that language is acquired through imitation. The two most accepted theories in language
development are psychological and functional. Psychological explanations focus on the mental processes involved in
childhood language learning. Functional explanations look at the social processes involved in learning the first language.

There are four main components of language:

 Phonology involves the rules about the structure and sequence of speech sounds.
 Semantics consists of vocabulary and how concepts are expressed through words.
 Grammar involves two parts. The first, syntax, is the rules by which words are arranged into sentences. The
second, morphology, is the use of grammatical markers (indicating tense, active or passive voice, etc.).
 Pragmatics involves the rules for appropriate and effective communication. Pragmatics involves three skills:
o using language for functions such as greeting and demanding
o adapting language to the person one is talking to
o following rules such as turn taking and staying on topic

Each component has its own appropriate developmental periods.

Phonological development

From shortly after birth to around one year, the baby starts to make speech sounds. At around two months, the baby engages
in cooing, which mostly consists of vowel sounds. At around four months, cooing turns into babbling, which consists of
repetitive consonant-vowel combinations. Babies understand more than they are able to say.

From 1–2 years, babies can recognize the correct pronunciation of familiar words. Babies will also use phonological
strategies to simplify word pronunciation. Some strategies include repeating the first consonant-vowel in a multisyllable word
('TV'--> 'didi') or deleting unstressed syllables in a multisyllable word ('banana'-->'nana'). By 3–5 years, phonological
awareness continues to improve as well as pronunciation.

By 6–10 years, children can master syllable stress patterns, which helps them distinguish slight differences between similar words.

Semantic development

From birth to one year, comprehension (the language we understand) develops before production (the language we use).
There is about a five-month lag between the two. Babies have an innate preference to listen to their mother's voice. Babies
can recognize familiar words and use preverbal gestures.

From 1–2 years, vocabulary grows to several hundred words. There is a vocabulary spurt between 18–24 months, which
includes fast mapping, the ability of babies to learn new words quickly from very little exposure. The majority of the babies'
new vocabulary consists of object words (nouns) and action words (verbs). By 3–5 years, children still often have difficulty
using words correctly. Children experience problems such as underextension, taking a general word and applying it too
specifically (for example, using 'blankie' only for one particular blanket), and overextension, taking a specific word and
applying it too generally (for example, 'car' for 'van'). However, children coin words to fill in for words not yet learned (for
example, calling someone a cooker rather than a chef because the child does not yet know the word chef). Children can also
understand metaphors.

From 6–10 years, children can understand meanings of words based on their definitions. They also are able to appreciate the
multiple meanings of words and use words precisely through metaphors and puns. Fast mapping continues.

Grammatical development

From 1–2 years, children start using telegraphic speech: two-word combinations, for example 'wet diaper'. Brown
(1973) observed that 75% of children's two-word utterances could be summarised by a set of 11 semantic relations:

Eleven important early semantic relations and examples, based on Brown (1973):

 Attributive: 'big house'
 Agent-Action: 'Daddy hit'
 Action-Object: 'hit ball'
 Agent-Object: 'Daddy ball'
 Nominative: 'that ball'
 Demonstrative: 'there ball'
 Recurrence: 'more ball'
 Non-existence: 'all-gone ball'
 Possessive: 'Daddy chair'
 Entity + Locative: 'book table'
 Action + Locative: 'go store'

At around 3 years, children begin producing simple sentences, typically of three words. Simple sentences follow adult rules and
get refined gradually. Grammatical morphemes are added as these simple sentences start to emerge. By 3–5 years, children
continue to add grammatical morphemes and gradually produce complex grammatical structures. By 6–10 years, children
refine the complex grammatical structures such as passive voice.

Pragmatics development

From birth to one year, babies can engage in joint attention (sharing the attention of something with someone else). Babies
also can engage in turn taking activities. By 1–2 years, they can engage in conversational turn taking and topic maintenance.
At ages 3–5, children can master illocutionary intent (knowing what a speaker meant to say even though it might not have
been said) and turnabout (turning the conversation over to another person).

By age 6-10, shading occurs, which is changing the conversation topic gradually. Children are able to communicate
effectively in demanding settings, such as on the telephone.

Theoretical frameworks of language development

There are four major theories of language development.

The behaviorist theory, proposed by B. F. Skinner (the father of behaviorism), says that language is learned through operant
conditioning (reinforcement and imitation). This perspective sides with the nurture side of the nature-nurture debate. The
perspective is not widely accepted today because of several criticisms: it is too narrow, it can encourage incorrect phrases,
and it is not practically feasible, since parents would have to engage in intensive tutoring in order for language to be taught
properly.

The nativist theory, proposed by Noam Chomsky, says that language is a unique human accomplishment. Chomsky says
that all children have what is called an LAD, an innate language acquisition device that allows children to produce consistent
sentences once vocabulary is learned. He also says that grammar is universal. While there is much evidence supporting this
theory (language areas in the brain, a sensitive period for language development, children's ability to invent new language
systems), not all researchers accept it.

The empiricist theory argues that there is enough information in the linguistic input that children receive, and therefore there
is no need to assume an innate language acquisition device (see above). This approach is characterized by the construction of
computational models that learn aspects of language and/or that simulate the type of linguistic output produced by children.
The most influential models within this approach are statistical learning theories such as connectionist models and chunking
theories such as CHREST.
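
To make the statistical-learning idea concrete, the following is a minimal Python sketch, not any published model: the syllable corpus and the 0.75 threshold are invented for illustration. It segments a continuous syllable stream by placing word boundaries wherever the transitional probability between adjacent syllables drops.

    # Minimal sketch of statistical learning: segment a syllable stream
    # by transitional probabilities (corpus and threshold are invented).
    from collections import Counter

    # An unsegmented stream built from three made-up "words":
    # bi-da-ku, pa-do-ti, go-la-bu.
    stream = ("bi da ku pa do ti go la bu bi da ku go la bu "
              "pa do ti pa do ti bi da ku go la bu").split()

    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])

    def transitional_probability(a, b):
        """Estimate P(b follows a) from the stream."""
        return pair_counts[(a, b)] / first_counts[a]

    # Insert a boundary wherever the transition is unreliable.
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if transitional_probability(a, b) < 0.75:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))

    print(words)  # e.g. ['bidaku', 'padoti', 'golabu', 'bidaku', ...]

In this toy stream, within-word transitions always have probability 1 while between-word transitions are lower, so the dips recover the word boundaries; detecting such dips is the core intuition behind statistical-learning accounts of speech segmentation.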

The last theory, the interactionist perspective, consists of two components and combines elements of both the nativist and
behaviorist theories. The first component, the information-processing theories, is typically tested through connectionist
models that learn from statistical regularities; this work suggests that the brain is excellent at detecting patterns.
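
As a toy illustration of this pattern-detection claim, the sketch below trains an ordinary perceptron, one of the simplest connectionist units; the data and encoding are invented for the example, not drawn from any study cited here. The statistical regularity ends up encoded in the learned weights.

    # A single perceptron "detects" which feature predicts the label;
    # the regularity ends up stored in the learned weights.
    def train_perceptron(samples, epochs=10, lr=0.1):
        """samples: list of (features, label), features a tuple of 0/1."""
        n = len(samples[0][0])
        weights, bias = [0.0] * n, 0.0
        for _ in range(epochs):
            for features, label in samples:
                activation = sum(w * x for w, x in zip(weights, features)) + bias
                error = label - (1 if activation > 0 else 0)
                weights = [w + lr * error * x for w, x in zip(weights, features)]
                bias += lr * error
        return weights, bias

    # The label depends only on the first feature.
    data = [((1, 0), 1), ((1, 1), 1), ((0, 1), 0), ((0, 0), 0)]
    weights, bias = train_perceptron(data)
    print(weights, bias)  # the first weight dominates after training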

The second component comprises the social-interactionist theories, which hold that children have an innate desire to
understand others and to be understood by others.

Theoretical stages of morpheme acquisition in language development

Much of the research on language acquisition borrows heavily from the dominant paradigm of first-language English
acquisition and focuses on the circumstances under which such linguistic structures are acquired. Many studies, for example, have
examined the acquisition of morphological features that are in place in native speakers. Among these, a notable study is
Brown's longitudinal study of Adam, Eve, and Sarah, which produced “Brown’s fourteen morphemes”, an acquisition order
consisting of: present progressive; the prepositions in and on (counted as two items); plural; irregular past tense; possessive;
non-contractible copula; article; regular past tense; third-person present singular regular; third-person singular irregular;
non-contractible auxiliary; contractible copula; and contractible auxiliary.

This theoretical and empirical work in language acquisition serves as the basis for understanding what the acquisition
of morphology means within the patterns of universal grammar. While long-standing theories describe language acquisition
through an innate language acquisition device, an alternative approach that is gaining ground holds that linguistic
structures have adapted to the human brain, rather than vice versa.[2] On this account, language universals may reflect
non-linguistic cognitive constraints on the learning and processing of sequential structure, rather than constraints
prescribed by an innate universal grammar. Researchers differ in how they delimit the domain, however: some define it
narrowly around grammatical rules, others around the abilities involved in accomplishing cognitive tasks, and still others
around the social and communicative aspects of language.

With respect to the parallels between syntactic and morphological acquisition, the mechanisms of UG and their role in
language acquisition are understood as a highly structured and restrictive system of principles with certain open
parameters that the learner must fix.[3]

As these parameters are fixed, a grammar is determined, which may in turn be termed SVO, SOV, or VSO. In this theory, the role
of the principles, i.e. the linguistically invariant properties of syntax common to all languages, is to facilitate acquisition
by constraining learners' grammars, reducing the learner's hypothesis space from an infinite number of logical possibilities
to the set of possible human languages—the UG.
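
As a toy illustration only, not a serious model of UG, the sketch below treats basic word order as a single parameter whose fixed value determines how subject, verb, and object are linearized; the names and function are invented for the example.

    # Toy "parameter setting": fixing one word-order parameter fixes
    # how subject (S), verb (V), and object (O) are linearized.
    WORD_ORDERS = {
        "SVO": ("S", "V", "O"),  # e.g. English
        "SOV": ("S", "O", "V"),  # e.g. Japanese
        "VSO": ("V", "S", "O"),  # e.g. Irish
    }

    def linearize(order, subject, verb, obj):
        """Arrange the three constituents according to the parameter."""
        slots = {"S": subject, "V": verb, "O": obj}
        return " ".join(slots[role] for role in WORD_ORDERS[order])

    print(linearize("SVO", "the child", "reads", "the book"))
    # -> the child reads the book
    print(linearize("SOV", "the child", "reads", "the book"))
    # -> the child the book reads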

This framework thus provides an important context for investigating the acquisition of morphology, as both a general
cognitive process and a specifically linguistic one, in relation to a language's parameter settings (for example, SVO
order). The picture differs, however, between first language (L1) acquisition in children and second language (L2)
acquisition in adults, since adult learners bring capacities to the language-learning process that are both similar to and
different from those of children. The role of the parameters, which express the highly restricted respects in which
languages can differ morphosyntactically, is to account for cross-linguistic syntactic variation: the principles admit only
a limited number of ways in which they can be instantiated, namely those allowed by the parameters, which also specify the
possible variation in the order of morphological acquisition.

In addition to the acquisition pattern of “Brown’s fourteen morphemes”, it is generally agreed that, across languages,
morphological awareness and learning begin with the overgeneralization of various lexical entries. The causative factor is
understood as a universal, innate pattern in children, with cross-linguistic variation appearing in the sequence in which
morphology is acquired. In this approach, children's knowledge of tense morphology has been examined using elicitation and
grammaticality-judgement tasks to predict variability in the morphophonological expression and knowledge of tense in
developing grammars. It has long been noted that the acquisition of tense-marking morphology is a vulnerable domain for
language learners across languages and acquisition contexts. For example, English-speaking children typically pass through
three gradual stages before producing tense morphemes accurately: applying morphemes correctly but without underlying
knowledge (stage 1), applying analogy (stage 2), and understanding the morphological differences (stage 3).
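
A toy sketch of stages 2 and 3 (the verb list and rules are invented for illustration) shows how over-applying the regular -ed rule by analogy produces overregularized forms such as 'goed', which disappear once stored irregular forms are checked before the rule applies.

    # Stage 2 vs. stage 3 in past-tense production (toy example).
    IRREGULAR_PAST = {"go": "went", "eat": "ate", "run": "ran"}

    def past_tense_stage2(verb):
        """Stage 2: apply the regular -ed rule by analogy to every verb."""
        return verb + "ed"

    def past_tense_stage3(verb):
        """Stage 3: check irregular forms before falling back on the rule."""
        return IRREGULAR_PAST.get(verb, verb + "ed")

    for verb in ["walk", "go", "eat"]:
        print(verb, "->", past_tense_stage2(verb), "/", past_tense_stage3(verb))
    # walk -> walked / walked
    # go   -> goed   / went
    # eat  -> eated  / ate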

Other factors also show similarities in morphological acquisition across languages: on one side, the linguistic markedness
of syntactic operations not involving tense-marked forms, such as optional infinitives; on the other, the markedness of
syntactic operations involving tense-marked forms, such as negation, modality, passives, and coordination.
