
ENGLISH LANGUAGE AND TRANSLATION

What is English?
Once you know something very well, it becomes difficult to describe, because you can't pinpoint what it is.
There are things we take for granted when we consider something with which we are really familiar.
There are also dynamics we take for granted, one of which is who is doing the writing.
Our idea of what English is is very often based on common ground, on things taken for granted. For
example, it is perfectly normal for Joyce to pass an English exam even if he writes without punctuation, but
that is not the case for an anonymous writer.

Any definition of English is bound to be approximate. To try to grasp what a language is, we have two
possible approaches: the diachronic one (I am the daughter of…) or the synchronic one (I am X's friend).
Hence, one way to describe English is to describe its history.

Generally speaking, English developed in Britain (the Midlands and Southern areas, 450 AD to the late 19th
century); it is spoken in the territories once controlled by Great Britain (the British Empire) and in the
United States, and it is a language of international communication (Crystal 1997).
English refers to the language, or to the nationality of England, while British refers to the nationality of
the UK, which comprises Great Britain (England, Scotland, Wales) and Northern Ireland; British people are
also called Britons.

Diachronically speaking, the English language developed over four periods:


 Old English (450-1100 AD)
 Middle English (1100-1500)
 Modern English
o Early Modern English (1500 to 1650)
o Late Modern English (mid-17th to late 19th century)
 Contemporary English (20th century onwards)
The language develops through these four stages, whose boundaries are arbitrary. The periods are based
on the characteristics that texts share.

Standard is the term you start to notice when scholars try to outline this progress of the language. At
the beginning the language was highly varied; as it progressed, a sort of standard can be identified.
At a certain point people start to talk about a standard rather than just about English. To make sense of a
phenomenon this complex, you need to settle on certain criteria for analysing it. One of these criteria is
that it is easier to consider the written language than the oral language, both because it survives through
time and because it tends to be more regular than the oral or spoken variety. Speakers tend to consider
their own way of pronouncing the language as correct; speech tends to be idiosyncratic. Text is different
because it needs to travel through space and time.
The oral, improvised form is more problematic, while the written form, being thought out, planned, and
revised, is more linear.

Other features of the written mode are:


 Conceptualization of the language: graphic/visual representation (spelling, inflection, derivation,
etc.). When we speak there is no such segmentation: there are no spaces between words. Writing is
thus much easier to analyse than oral language.
 Semantic autonomy (independence): less context-dependence, semantically self-standing.

 Prestige of the written (literary) text: written texts = quality in aesthetic terms (fictional texts) and
pragmatic terms (non-fictional texts: law, science, education, etc.). In an oral text what we want to
do is create a sort of impact; you want your interlocutor to listen to what you are saying. To be
impactful you need to disrupt some convention, whereas written texts are easily processed as long
as they are linear. Sometimes it is not what is written, but the way in which it is written. This can
also be used to measure the utility of a text (constitutions are considered important because they
are written in a logical way).
 Disambiguating orthography: spelling disambiguates homophones (right, write, rite, wright or met,
meet, meat, mete, etc.). This is very relevant for children learning English and French, because
spelling helps differentiate words with the same pronunciation.
If we used the idea of the standard as the defining feature of the language, we would exclude some
varieties of English from the definition.

We can also rephrase synchronic as cross-cultural, comparing the English spoken in Great Britain with the
English spoken elsewhere. This has to do with the parting of British English from American English in the
late 18th century. After some decades, the language used in the American colonies started to be
"corrupted", because its users were interacting with people who used different languages, and so it was
bastardized (contaminated) by other forms. At first, people in the colonies were aware of the divergence
of their language and tried to mend it, but later the divergence became distinctive of their use of the
language; it became identifying. From that moment on, the idea of English started to be problematic: if
there are two different ways of speaking English, there is no longer a single definition of English.

For anglophone people it had been normal to think of the language as something above and independent
of its speakers: language can be used as we wish, as long as it allows us to communicate. Once you break
that unity down into two parts, you start to assume it can be divided into more parts. This was the
starting point of the idea of a language not limited to one country or one culture.

Another possible observation is that America could have come up with its own language, but every
attempt to create a new language ended in failure. American English was so successful that today the
tendency leans more toward American English than British English. That is because the USA was, even
back then, a leading economic and political power.
An example of this political and cultural power can be seen in movies produced after World War Two.
They often featured characters speaking American English, which was left untranslated, so audiences
were in a way forced to become familiar with it. That is no longer the case: movies dubbed into Italian
are now fully translated, with no characters left speaking American English. It was not intended as such,
but it was a way of getting the Italian audience familiar with the language. Oddly, the same did not
happen for British English.

Another element that contributed to the spread of American English in other countries and cultures is the
openness of UK and European culture to US culture. After World War Two, when you think about music or
movies, they all came from the US. There was no real resistance in Europe, so we imported them, and we
still do. For example, there are Italian groups making rock and roll, a genre that does not belong to Italian
culture.
This openness to other cultures can also be seen on TV: every single thing Joe Biden does makes the
news, even though it is hardly relevant to us; yet we perceive it as important. Hence, mass media and
computer-mediated communication spread the use of American English.

The fourth reason by which we can justify the spread of American English is simply that Anglo-American
speakers outnumber British speakers: comparing the two countries, there is a huge disproportion.
American English also has the advantage of fewer dialectal differences or, more precisely, its regional
dialects differ far less than those of Great Britain.

Anglophone people have a more open approach to the language and to its variation: if a form is in use,
they consider it standard; if a form falls out of use, it is no longer considered standard, even if a rule once
prescribed it. In general, British speakers tend to be more conservative than Americans.

Can we say that American scholars tend to be more open?


Yes, they are. Varieties of English originated in the colonial era (i.e., Indian English, Philippine English,
Singapore English, African English, etc.) and have been used since the 20th century for international
communication. Instead of saying that such language is not really English, scholars came up with the idea
that these are variations of English. American scholars claimed the right to say that their own language
was a variation of English.

From a certain point of view, the New Englishes could be considered full of errors, but these are no
longer treated as mistakes: they are variations of the language. The English spoken in Scotland and the
English spoken in London are not mutually transparent, yet each is still considered a variety of English.
Therefore, it is equally correct to call the English used in India a variety of English.

In order to make sense of them, there was an attempt to organize the different varieties of English.
According to Kachru's model (1985), the varieties can be organised in concentric circles:
 The Inner Circle: English as a primary language (i.e. UK, USA, Anglophone Canada, Australia, New
Zealand). The inner circle is norm-providing.
 The Outer Circle: English as the language of institutions, used in former colonies and as a second
language in multilingual settings. Speakers in this circle already have another language, but they
also know English very well or as a second language; English, however, is not used in domestic
contexts there. These varieties are norm-developing, which means that they institutionalize their
own characteristic features.
 The Expanding Circle: the outermost circle. It represents English used as an international language
in countries where English has no institutional status (i.e. Italy, China, Israel, Japan, the former
USSR, etc.). These varieties are norm-dependent, which means that, to be sure we are speaking
correctly, we have to check the rules found in grammar books, which are usually published in
anglophone countries (the inner circle).
This organization is based on the quality of the language: the core is the best possible representation of
the language, and the further we move away from it, the "worse" the quality or prestige we find.
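As a rough sketch (the country lists below are partial and purely illustrative, and the names `KACHRU_CIRCLES` and `circle_of` are invented for this example), Kachru's three circles and their norm orientations can be modelled as a simple lookup table:

```python
# Kachru's (1985) concentric circles as a lookup table.
# Country lists are illustrative, not exhaustive.
KACHRU_CIRCLES = {
    "inner": {"norm": "norm-providing",
              "examples": ["UK", "USA", "Canada", "Australia", "New Zealand"]},
    "outer": {"norm": "norm-developing",
              "examples": ["India", "Nigeria", "Singapore"]},
    "expanding": {"norm": "norm-dependent",
                  "examples": ["Italy", "China", "Japan"]},
}

def circle_of(country: str) -> str:
    """Return the circle a country belongs to, or 'unknown' if unlisted."""
    for circle, info in KACHRU_CIRCLES.items():
        if country in info["examples"]:
            return circle
    return "unknown"

print(circle_of("Italy"))   # expanding
print(circle_of("India"))   # outer
```

The point of the sketch is that the model is a partition: each country sits in exactly one circle, with its norm orientation attached.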

From the 1990s onwards, classification in terms of quality was abandoned, because language is a tool;
classification came to be based instead on the intelligibility of the language. Hence, Modiano's model
(1999) classified the varieties of spoken English in terms of their intelligibility around the globe.
 The core is English as an International Language (EIL). It refers to the language spoken in Kachru's
expanding circle, precisely because those varieties are norm-dependent.
 The second circle corresponds to the outer circle in Kachru's model: varieties containing (common)
features that are easily recognisable.
 The outer area contains varieties with idiosyncratic features not shared by speakers of other
varieties (i.e. British English, American English, Canadian English, Australian English, New Zealand
English). We do not perceive British and American idiosyncrasies because we study them and treat
them as the norm. An idiosyncrasy is, for example, the -s of the third person.
There has thus been a shift from a purely linguistic point of view to one of effectiveness.

Another model is Görlach's (1998), which introduced taxonomies and acronyms that are still used and
taken for granted.
 English as a Native Language (ENL, roughly the Inner Circle): the native or dominant language for
all types of communication (England and the USA, but also Jamaica, South Africa, etc.). "Dominant"
here is relative to other idiosyncratic varieties of the same language: not dominant over another
language, but dominant over another variety.
 English as a Second Language (ESL, roughly the English used in the Outer Circle): institutionalised
non-native varieties of the language for international and intra-national purposes (law,
administration, media communication, schools, etc.). It is spoken in former colonies and used as an
international lingua franca (India, Nigeria, etc.).
 English as a Foreign Language (EFL, roughly corresponding to the Expanding Circle): English that is
learned rather than acquired (a taught language is learned through rules, consciously; it is not
assimilated, which happens by hearing other people use it and imitating them), for academic and
specialized communication (most European countries, China, etc.). We learn a dialect through
assimilation, not through learning.

English as an International Language is considered a register for professional and academic purposes. EIL
is an alternative to Standard English (Modiano 2001): culturally, politically, and socially neutral
(understood by L1 and L2 speakers).
The first scholar to use the term English as a Global Language was Widdowson in 1997, referring to the
English used by people of any ethnicity in international settings (more or less ELF). Toolan, also in 1997,
argued that the language used for such all-encompassing communication was no longer English and could
only be called Global.

Another attempt to classify the English language is the idea of English as a Lingua Franca, which is much
more workable than English as a Global Language because it concerns communication rather than
geography. A lingua franca is a contact language used by people who do not share the same dialect: a
third language accessible to both parties that helps overcome problems of intelligibility. In the Vatican, for
instance, priests used Latin as a lingua franca because they all knew it.
Strictly by definition, a language is a lingua franca only when used by non-native speakers. This, however,
seems a merely theoretical problem: even a native speaker of English will adjust his or her way of
speaking to the level of the interlocutor.
The lingua franca was historically a trade language, while today English is used for specific and/or
practical purposes that are very professionally based.

In reality ELF or, more properly, Lingua Franca English (LFE) is a contact language used by native and
non-native speakers alike, but one that functions as an independent system and as such has no real native
speakers. It shows a high degree of variation, both subject-related and user-related (being used by
speakers with varying levels of competence).
ELF is a simplified and reduced language:

 Reduction: fewer linguistic options for lexis, morphology, syntax, style and register; compare pidgin
and creole languages.
 Admixture: more problematic in terms of offering tools to classify the language. It is due to
mother-tongue transfer and depends on the native tongue of the speaker.
 Simplification: for example, regularization (i.e. no irregular past tenses), loss of redundancy (no
third-person -s), and lexical and morphological transparency (i.e. eye doctor for optometrist).
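Two of these simplification tendencies can be mimicked in a toy sketch (the function names are invented for this illustration, and real ELF usage is far less mechanical): regularization treats every verb as regular, and loss of redundancy simply drops the third-person -s:

```python
# Toy illustration of two ELF simplification tendencies.
# The rules are deliberately naive: they apply to every verb without exception.

def regular_past(verb: str) -> str:
    """Regularization: form the past tense as if no irregular verbs existed."""
    return verb + "d" if verb.endswith("e") else verb + "ed"

def third_person_singular(verb: str) -> str:
    """Loss of redundancy: the third-person form carries no extra -s."""
    return verb

print(regular_past("go"))              # 'goed' rather than irregular 'went'
print(regular_past("love"))            # 'loved'
print(third_person_singular("speak"))  # 'speak' ('she speak', not 'she speaks')
```

The sketch makes the linguistic point concrete: a simplified system has fewer rules and no exception lists, which is exactly what makes it easier for non-native speakers to use.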

Standard English
What we mean by the term English is terribly complicated, because there are plenty of labels. Given this
complexity, the only possible way out of the mess is the idea of Standard English. Standard English is
supposed to be a sort of overarching (or umbrella) term covering the different dialects or varieties of the
language that share some recognisable features. Hence, Standard English is the most transparent way to
refer to English in an easily recognisable way. In reality, there is no such thing as Standard English; it is
only a different angle of approach.

The idea of the standard was introduced at the beginning of the 20th century (1909) by George Krapp in
the book Modern English: Its Growth and Present Use. As the title suggests, it is about the present use of
the language and how it came to be that way; the book has a descriptive approach.
According to Krapp, Modern English is an idea based on the usage we make of the language: not a
paradigm, but the actual, contextual use people make of it. Therefore, the rules have little or no impact. A
second distinctive feature of this Modern English is that the idea of appropriateness is more relevant than
the idea of correctness (context vs paradigm). It is better to focus on spoken language than on written
language, because the latter is a refined use of the language and thus not the most unfiltered one.

Given these starting points for describing the language, it is no longer English as we used to know it, so
Krapp introduced this new label. By the label standard, Krapp referred to the conscious, legalised use of
the language. This does not mean it is always grammatically correct: it can be non-grammatical but
contextually acceptable. A standard can be continuously updated and refreshed by the creation of new
forms considered "good English" or effective English; those new entries can be non-grammatical yet
effective. Hence, this is a completely new way of considering the language.

The idea of the standard was later used much more consciously in different studies. The following two
studies show different approaches to defining the standard language.
The first is Sterling Leonard (1932), who studied current English usage, dividing it into literary English,
standard English and naïve English. According to him, standard English is cultivated colloquial English,
used neither in literature nor by common or uneducated people. Literary language does not need to be
contextually correct, while standard language does. Naïve English is the English spoken by people who
picked up the language through imitation; it contains errors such as the double negative, and its speakers
are not aware of grammatical rules.

The second is Charles Fries (1940), who focuses on American English grammar. He divides the language
into Standard English, Common English, and Vulgar English. Standard English is spoken by college
graduates, scholars, newspaper editors, educated professionals, etc. Common English is spoken by those
who are neither professionals nor unskilled labourers (businessmen, army officers, etc.). Vulgar English is
spoken by uneducated and unskilled labourers. The difference between Standard and Common English is
that the latter is task-based while the former is discourse-based: language simply adjusts to the purpose
for which it is used. The standard, then, is the linguistic common ground people share in order to
communicate.

Plenty of studies have tried to define or explain what the standard is. Some definitions: "it is the grammar
of educated English current in the second half of the twentieth century in the world's major
English-speaking communities"; "it is the common core that is shared by standard British English and
standard American English"; "it is a variety of English […] characterized by the absence of socially
stigmatised forms such as double negatives". By socially stigmatised forms we mean idiosyncrasies or
mistakes. Stigmatised forms affect the interpretative process: you notice the mistake, or whatever is
non-standard, and you fail to process the meaning.

Other definitions: "it is a dialect [without an] associated accent; […] it is a social rather than a
geographical dialect […] more widespread than any other dialect of English"; "the most prestigious variety
of the language, based on the spoken or written norms adopted by educated native speakers"; "the kind
of English [1] written in published work and generally [2] taught in English-speaking schools. It is [3]
spoken in national news broadcasts [RP] and, where published writing is most influential, in [4] educated
contexts. [It is] the variety [5] described in grammar reference books".

When general Standard English is taught, it contains expressions from both British English and American
English; which variety to teach is an arbitrary decision. There are two formulations:
 Standard Southern English (SSE) is the variety of British English that appears in many course books
and exams for learners of English. Hence, it is not simply British English.
 General American (GA) is used in teaching and examining all over the world. Therefore, it is not
simply American English.
Some people are uncomfortable with the idea of a standard because it implies the idea of the abnormal,
which carries a negative connotation. Hence, scholars have proposed variants:
 'General English': not associated with British or American English, but internationally used (i.e. less
restrictive than Standard English).
 'Literate English': not in the sense of literary/learned, but as the opposite of
'illiterate/unsophisticated'. It is a written variety and a more 'transnational' English: not restricted
to immediate utilitarian contexts (ELF), it serves general and global needs.
So, even if you do not like the idea of the standard, you have to come up with something.

Standard or model
The model is usually considered the most prestigious form; scholars using the term standard, by contrast,
did not mean to provide a hierarchy of preferable and non-preferable forms, but to describe what was
common and what was not. The model is close to utopia and the ideal, whereas the standard is the idea
of the average. The model is clear-cut and definite, the standard vague and general. The model is
removed from time and place, the standard is current. The model is very often fabricated or established
ex post, whereas the standard is abstracted from actual usage. A model admits no exceptions: either
something is a model or it is not. The standard is flexible, workable, and constantly updated, so it may
include exceptions that might in future be accepted as part of it; it changes on the basis of how the
language is actually used.

What is Standard English?

1) It is a dialect, which is any kind of usage of the language that is characterised by fairly permanent
characteristics
2) It is established on the basis of:
a. Organic/unconscious reasons: “in all language communities one dialect is singled out for
further development […] (often as an accident of fate, such as its use in the capital city and in
centres of education and government)”
b. Arbitrary/agreed upon reasons: “sometimes standardization is undertaken deliberately, for
example as part of a country’s process of attaining nationhood”.

Arbitrariness can be read according to two levels:


a) in terms of power, authority, and control
b) in terms of practical reasons, workability, and feasibility.
a) In the first case the language is fixed by users in a position of power [academics, publishers, writers,
educators, and others] and widely accepted. Since these are discourse-based domains, certain topics or
subjects can be referred to only if they are talked about in a definite way.

b) Agreed-upon reasons contribute to the realisation of the standard language in terms of word choice,
word order, punctuation, syntax, and spelling. In some cases, people deliberately decide to come up with
a set of norms and rules for reasons of:
 prestige: if you manage to systematise your language in the same way as classical languages or
prestigious foreign languages are systematised, this implies that your own language is as
prestigious as they are. If English can be systematised by the same rules as Latin or French, it is as
prestigious as they are; systematisation creates analogies between languages.
 cognitive reasons: a standard is needed for a language to be easily understood. One way of
ensuring this is to emphasise homogeneous traits and avoid idiosyncrasies: local variants, usually
at work in stigmatised forms, have a distracting impact and hinder interpretation. Relatedly,
although only a relatively small percentage of the British population speaks Standard English
(possibly as few as 10 per cent), virtually everyone is aware of the standard variety because it is so
widely used in the written mode and in national broadcasting. Hence the standard is not used for
prestige reasons alone.
 practical purposes: a standard language simplifies the understanding of language mechanisms for
learners (pedagogical purposes) and the transfer into another language (translation purposes, for
both translators and machines).

Problems with the standard


A standard is not a selection, but at the end of the day you need to decide what is more understandable.
People consider the standard more logical, regular, and beautiful, but in reality it has none of these
qualities. The standard is said to be preferable to the nonstandard because it is more regular, standing
upon rules; those rules, however, are arbitrary. Moreover, the concept of logic has to do with cause and
consequence, so the standard cannot be considered logical or illogical. Finally, there is no objective
criterion for 'beauty'.
Standard English is used by the minority of people who belong to the upper middle class, and for this
reason some people oppose its use. Some scholars claim that Standard English is a myth, which is true
insofar as it does not correspond to the language used every day. However, this is not a problem, because
we need something that can be easily recognised: being a myth does not disqualify it.

Standard language ideology


In countries in which only one language is spoken, everything is formalised in that language. This is called
monoglot ideology: a monoglot mindset regards standard-language monolingualism as normal. It is not a
problem in monolingual parts of the world, but it is in countries with more than one language.
The ideology of development or harmonisation consists in favouring the standard: in colonised countries,
for example, people need to accustom themselves to the standard language in order to climb the social
ladder. The other ideology is that of decolonisation or differentiation, found in ex-colonies that decided
to develop their own language.

Once we have the idea of the standard and consider it good, everything that diverges from it is considered
bad. Hence, there is a negative judgement of people who speak differently from the standard.
By promoting a standard, you promote an idea of what is correct and respectable, and consequently of
what is right and what is wrong; in this way you influence people. The idea that Standard English can
control common sense may be there, but it is not a superimposition on the language, because we merely
need to align with differences. The idea of political correctness is one such control of common sense.

According to Curzan (2014) there are four types of prescriptivism:


 Standardising prescriptivism: enforcing 'standard' usage (commonality vs idiosyncrasies)
 Stylistic prescriptivism: distinguishing between points of style (better vs plainer)
 Restorative prescriptivism: restoring earlier usages (vs decay) (more vs less prestigious)
 Politically responsive prescriptivism: promoting inclusive, non-discriminatory, politically correct
forms (right vs wrong; good vs bad)
According to Mooney and Evans, the first three can have negative effects on people (hindering social
mobility), while the fourth can have a positive effect on society. However, these considerations are based
only on common sense; there is no proof.

Political correctness came into use because certain terms had acquired connotations so negative that
they attached those negative features directly to their referent. Political correctness is usually based on
what a group of scholars considers correct or not. It is not political correctness itself that is wrong, but
the way it is used.

The observational approach usually looks at things in a more utilitarian and practical way. An example is
the way medicine deals with diseases: even though they are bad, you need to study them to know how to
handle them. This is science: the only possible thing to do with things you do not like is to learn about
them and learn how to either bypass or counter them, even in a "violent" way (surgery).
Philosophy, on the other hand, is pure speculation, though very stimulating, because every single idea
and concept raises possibilities of interpretation. Hence there are many philosophical approaches to
reality, and the schools of thought can be completely opposed to one another: utterly different
approaches to the same phenomenon, which is not the case in medicine, which only builds along a given
line.
Very often, therefore, there are essays discussing the representational justice of the language, and they
can go on forever, because the matter is incredibly subjective. People are excluded, but that is the
domain of linguistics.

Why standard English? Benefits


1. The standard is important because its idea overrules grammaticality. The standard is therefore
always considered grammatically correct, even though this is not always the case, since there is no
grammatical check; perfectly grammatical language is hardly ever used in spoken contexts. Rather,
Standard English is used to enlarge competence. The main difference between vulgar and standard
language is that the former is essentially poverty-stricken, with relatively few options, whereas the
latter uses the rich and varied resources of the language. The obvious implication is that schools
should provide exposure to, and practice in, using the full range of options of the English language.
2. A standard language helps with comprehensibility, making it easy to understand people who use it.
According to the National University of Singapore, Standard English should be understood by people
all over the world. This does not necessarily mean that whatever is non-standard should be
stigmatised: after all, we do not communicate to celebrate the form in which we communicate, but
to convey information.
This debate on standards really applies only to English, since standard Italian is spoken only in Italy and
not all around the world.

GRAMMAR OF ENGLISH
Unlike morphology, which is self-explanatory, the notion of grammar is not so transparent. We have the
perception that grammar is a sort of imposition, but that is the wrong way of conceiving it. The
anglophone approach to the language is very different from the Italian one.

There are four different concepts that can describe what grammar is: rules, usage, standard, and context
dependency.
1. When the focus is on rules, we have prescriptive grammar and normative grammar.
2. When we focus on a set of generalisations that describe sentence structure, we have descriptive
grammar and structural grammar.
3. When we consider the unconscious knowledge of language, we have generative grammar and
transformational grammar.
4. When we consider the ability to adjust to context and real-time limitations in producing language,
we have performative grammar, rhetorical grammar, and discourse analysis.

Example: “mancano cinque minuti” (Italian: “there are five minutes left”)


Prescriptive approach: “There are about five minutes left” is acceptable, while “There’s about five minutes left”
is considered wrong because there is no agreement between subject and verb. In this perspective
a sentence is either right or wrong. It is a simpler approach to the language, and the one used in
language teaching, even though it is restrictive. From this point of view, grammar seems to be an
external imposition. Nonetheless, in both cases the language is intelligible.
Descriptive approach: it simply describes the language; it describes things as more formal or informal.
“There are about five minutes left” is considered more formal, while “There’s about five minutes left” is
much more informal.
Generative approach: It explains why it is said “There’s/are about five minutes left” rather than
“There’s/are left about minutes five”.
Performance/pragmatic approach: in what context is “There are about five minutes left” more effective or
appropriate than “There’s about five minutes left”?

Grammar is just a series of regularities, principles that are common and frequently found. You
learn to do something the way other people do the same thing. Hence, whatever you do is
understandable because it rests on something that is commonly shared or understood. That is the
idea of grammar, etymologically speaking. Grammar is then almost exclusively applied to language,
so it appears to us as a set of rules.

Grammar, if you just look at the definition in grammar books, is a series of regularities applied to
syntax, morphology, and the form-function relationship (traditional, form-based). You might also have some
specifications that involve:
 typological and comparative concern (descriptive, register-based)

 notional categories besides formal ones, how to express doubts, hesitation, certainties, etc, it is not
just how to communicate the content (cognitive)
 communicative situation, it deals with the purpose of a particular exchange because according to it
there are different forms, register, expressions, etc. (pragmatics)
 varieties of pluri-centric languages, when you study a language and its varieties (ethnography,
sociolinguistics).
In coursebooks, different aspects can be emphasised according to the type of audience. For
example, lawyers will have to address judges, so for them a pragmatic approach to grammar is
more useful.

Appearance of grammar
There are some external conditions that motivated the birth of grammar:
 the real function of grammar, according to the first approaches to it, was to preserve and prescribe a
given state in the development of a language for cultural purposes. Grammarians did not want to lose the
prestige the language enjoyed at that time, as happened with classical Latin.
 To demonstrate the maturity and value of a nation’s mother tongue in terms of prestige.
 For language policies of standardisation, in terms of usability (making the language more widely usable).
This can involve, for example, spelling.
 Foreign language teaching, mother tongue teaching and translation. It is very relevant today most
of all for practical purposes and transfer.

There are also practical needs or aims of grammar:


 To train “scribes” to effectively “transcribe” texts produced by politicians (practical/utilitarian aim).
The Egyptians and Phoenicians, for example, were the first to understand that it was important to
have something written down and codified, so as not to depend on someone’s shared memory,
which could be corrupted.
 To understand the functioning of the mind (scholastic philosophers): categories of grammar
influences categories of logic, epistemology, and metaphysics (speculative aim). We have the form
of the future in the past because we have the idea of it. The third conditional (impossibility) can be
used because we have the capability of thinking about what could have happened but didn’t.
Moreover, we tend to structuralize our reality on the basis of our mind and language.
 Foreign language learners, translators, interpreters (practitioners): consolidation of linguistic
awareness for interlinguistic purposes. It is a way of giving foreign learners the basics of the language.
Learners want a prescriptive use of the language because they want to know which form of
the language to use, not why.

It is not possible to provide a definition of grammar because it is too complex to be defined. The definition
needs to rely on some quasi-theoretical (in reality only ‘traditional’, traditionally established) concepts
about the nature of language. These have become suitable, appropriate, comprehensible, and usable.

Historical perspective
The Graeco-Latin tradition introduced the alphabetic writing system, which connects each sound with a sign
or a group of signs. It also introduced the concept of “parts of speech” (Plato: noun, verb;
Aristotle: conjunction (linking particle); Dionysius Thrax: article, pronoun, preposition, participle, adverb,
plus gender, number, case, mood, voice, person, tense). There was a philosophical debate between nature and
convention (whether language is imitative or arbitrary) and between analogy and anomaly.
The consequence of this period is a systematic approach in which language change is conceived as language
decay, i.e. as mistake.

The Middle Ages (500-1500) saw a further development of the Latin tradition, since the same rules
were applied. Latin was considered the model language (for language learning, literature, church services and
administration). Aelfric’s Latin Grammar (c. 1000) is a grammar of Latin that was applied to English
in order to fix notions useful for understanding English too. Therefore, there is a tendency to base
English grammar on Latin models.
The consequence of this period is the importance of the model and the rule, even though we know it is
arbitrary.

During the Renaissance (1500-1650) scholars realised the importance of going back to the original sources rather
than just trusting reports of events. Studies of other classical languages (Hebrew, Chinese)
and of vernacular languages (not literary or learned) took place. Vernaculars came to be used for scholarly
and scientific publication (grammars of Spanish, French, Italian, and Polish). With the advent of the printing
press there was a spread of literacy and a demand for education. Moreover, texts, grammars, and dictionaries
were created for foreign-language education.
The consequences of this period were the need to establish linguistic standards to be reproduced (printed)
and recognised, and especially the attempt to elevate the status of the vernacular languages
(standardisation).

In Rationalist grammar (Port-Royal school, 1637-61), the Port-Royal grammarians attempted to write a
grammar containing all the properties common to all languages known at the time: a universal grammar
accounting for language universals and language-specific variation.
The main assumption is that if language’s function is to communicate thoughts, then speech must
reflect the structure of the thoughts being expressed (rationalist approach). Language simply mirrors
the mind.

In sum, with very few exceptions the study of language was still largely pre-scientific
(tradition > theory). English was approached as descending from Latin (and Latin from Greek). There was no
understanding of the process of language change (change was equated with corruption and decay), and no
understanding of the relation between written language and speech: speech was seen as an imperfect
representation of writing.

After the Renaissance came the period of the Civil War and Restoration (1650-1700). Historically,
1642-1649 saw the Civil War under King Charles I, 1649 to 1660 Puritanism (Oliver Cromwell), and
1660 to 1685 the Restoration. As for language, the Puritan period brought a desire for
regulation and control, the idea of the “morality” of language, and the notion that the past was an era of
linguistic purity. The idea was that if you are able to control language, you can control thought, which is
rather similar to the concept of “political correctness”.

A further development is represented by the Age of Reason (1700-1750), a sort of reaction to what
had gone before. Language is considered an element linked to prestige and is therefore worth
studying. In fact, during this period several authors wanted to promote an institution (an academy)
which could do something to promote the language. They wanted an institution that could
standardise the language, reducing it to a series of rules like Latin. However, these are
not really rules but regularities; since they are regular we interpret them as if they were rules, but
they are not. The idea of rules clearly comes from the approach to Latin. The second purpose of
those institutions was the need to refine English, removing its flaws or imperfections and returning to an
earlier, ‘purer’ state. They wanted a model more than a standard language. For some, the model should

be Chaucer’s language, for others Shakespeare’s, and for still others the pre-Restoration language. The last
aim of those institutions was to ascertain English: consolidating, fixing, and establishing the language, minimising
its mutability. This was considered necessary for English people to regard English as a fully-fledged language.

In the late 18th century the first systematisations of the language began to be designed: the first grammar
books in the contemporary sense. This is the starting point of grammars and of the different approaches
towards grammar. From this point the idea of grammar in anglophone countries divides.
One of the major steps in the conceptualisation of the language was made by Robert Lowth in 1762 in A Short
Introduction to English Grammar, where he introduces a prescriptive, deductive approach according to
which the rule leads to good usage. This approach was so popular that it became the approach through
which English people studied their own language, so much so that different approaches, though
allowed, were considered eccentricities. Priestley and Campbell, for example, used a descriptive, inductive
approach to the language: they inferred rules from the observation of good usage or
writing. Campbell wrote The Philosophy of Rhetoric, which was considered a book about rhetoric rather than
about language.
In this period writers began to align with one of these two approaches: descriptive or prescriptive.

According to Robert Lowth the prescriptive approach was better because English people were inclined to
be conservative and traditional, so much so that they still drink tea and punch. The distinctive
trait stated by Lowth is that grammar is a set of rules illustrating what is right and what is wrong,
based on personal judgement and intuition. Lowth was a scholar and knew what he was talking
about, but still, there is an element of subjectivity in language.
Robert Lowth’s idea of rules was taken from the approach to Latin and its status as a model, despite
its structural differences from English.

From a descriptive point of view, good usage should be national and reputable, which means
that there should be different types of language according to the field in question, and present,
which means that it is mutable, in the sense that it can be subject to change. Since the language
is mutable, changes are welcome; they are not instances of decay, as the prescriptive
approach considers them.
Pedagogically, the prescriptive approach tends to be preferred to the descriptive one
because it simplifies things in the classroom, giving a clear picture of the language.

In the US there was a clear political claim that the type of English spoken there was
different from the one spoken in Britain. Since American English had no long history, it was regarded as the
language of the present, difficult to confine within a series of rules; therefore the descriptive
approach was used. In the US a variety is an alternative, while in the UK a variety is considered an error.
The descriptive approach ascertains present practice, that is, it observes and reports national
practice. As an independent nation, the US did not want to use the language of Great Britain as a model,
because that would frame American English as a decayed language; hence there is prescription only about
what not to do when using American English.

Noah Webster in An American Dictionary of the English Language (1828) established that no double
consonants should be used in continuous forms (AmE traveling vs BrE travelling), the -or spelling in forms
like behavior (BrE behaviour), and the -ize spelling in forms like materialize (BrE
materialise).
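Webster's three conventions are regular enough to sketch mechanically. The following is a deliberately naive illustrative sketch (the function name and the regular-expression rules are my own simplification, not from the source; real spelling variation has many exceptions this toy version ignores):

```python
import re

def americanize(word: str) -> str:
    """Naively apply Webster's three spelling conventions (BrE -> AmE)."""
    # 1. No doubled consonant before -ing/-ed/-er: travelling -> traveling
    word = re.sub(r"ll(ing|ed|er)$", r"l\1", word)
    # 2. -our -> -or at the end of a word: behaviour -> behavior
    word = re.sub(r"(?<=[a-z])our$", "or", word)
    # 3. -ise -> -ize: materialise -> materialize
    word = re.sub(r"ise$", "ize", word)
    return word

print(americanize("travelling"))   # traveling
print(americanize("behaviour"))    # behavior
print(americanize("materialise"))  # materialize
```

The rules are applied blindly, so a word such as fulfilled would also lose a consonant; a real converter would need an exception list, which is exactly why dictionaries like Webster's, rather than rules alone, fixed the standard.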

Unlike in Britain, in America the descriptive approach prevailed, even though some scholars pushed for
a prescriptive approach; they were not very popular. They based their studies on what Lowth had done,
trying to apply his approach to American English. However, American English had no tradition and so was
not so conservative; therefore they needed to think of something else.

Grammatical descriptions
On the one hand, scholars focused on language prescription, which is grammar and the job of
grammarians. On the other hand, scholars focused on description, which is the domain of linguistics
and the work of linguists. When it comes to linguistics, there are plenty of approaches
to the language because there are different angles from which you can look at it. That is why linguistics is
very articulated and will always be there: it observes what is going on in the language. This is
especially true for foreign-language teachers in the US, who observe the language in order to keep
themselves updated on the continuous changes taking place in it.

The historical-comparative approach is the bridge that made other approaches to the language seem
valid. Scholars started collecting and studying authentic material. While the prescriptive
approach could only be based on grammarians’ intuition, the descriptive approach required
collecting linguistic data, which is much easier within a comparative mechanism (more
frequent vs less frequent). For these descriptive approaches, abandoning Latin as a model was not a
rebellious move but a realisation that different languages had different origins.

The first descriptive approach is that of George P. Krapp (1909), who starts considering the possibility of
a descriptive approach and, in order to pursue it, the need for adequate terms to describe the
language.

Structuralism
De Saussure (1916) did not write about English, but his work is the starting point of modern linguistics. In his
Course in General Linguistics he separates diachronic aspects from synchronic aspects, which
was not common in the past. De Saussure considered the two approaches equally
relevant, because diachronic linguistics describes the stages of a language, while synchronic linguistics
considers the language as a self-contained system. According to him there are two levels of language:
langue (the paradigm, the abstract knowledge of the language) and parole (the contextual realisation of the
langue). Chomsky would call them the core and the surface.

American Structuralism
We are here in the domain of linguistics. Bloomfield (1933) considers language
not in itself but as a ‘habit’, a response to a stimulus through imitation, memory, and association. Human
behaviour consists in receiving a stimulus from an external source, which elicits a certain
response; on this basis, language can be analysed as behaviour. According to Bloomfield
we use language through imitation and reinforcement: hearing people use
certain words in certain contexts, you start to use them yourself and they become language to you.
These habits are stored in your mind, so you recognise stimuli by association and respond to
similar stimuli in a certain way. A possible consequence is that the mind is therefore not entirely
unstudiable.

A possible counterclaim is that repeating what we hear means that we do not understand what we say but
only replicate it. The idea of habit comes from animals, which tend to behave on the basis of what the
grown-ups do. A script is a sort of generalisation of a certain behaviour; it is therefore not mere imitation but a
framework containing certain routines triggered by a certain stimulus. It has more to do with
memory than with imitation. An example of imitation is walking: we do not walk simply because we have two legs but
because we saw everyone do it. In fact, people brought up by monkeys walk and vocalise like
monkeys, and it is difficult for them to learn to speak or eat properly because it is an external
imposition.
Hence, there is something correct in Bloomfield’s theory.

Linguistic relativism
Linguistic relativism is associated with Edward Sapir, who thought that language is a mental reality and not a product.
The Sapir-Whorf Hypothesis holds that language (syntax, morphology, word structure) influences
non-linguistic activity and consequently a person’s view of reality. The more articulate the language, the
more complex the concepts that can be expressed. It is the opposite of Structuralism.

Prescriptivism vs descriptivism
Descriptivism is the description of how language works or might work. There is one theory which, instead
of focusing on the patterns we tend to follow, focuses on rules, like prescriptivism does; yet in any
grammar or linguistics book even Universal Grammar is considered descriptive. That is because
most descriptive approaches focus on performance, on usage, except Chomsky’s.
Chomsky is concerned with competence, the abstract mechanism (or rules) on the
basis of which we intuitively use language. This is what makes Universal Grammar a descriptive approach
rather than a prescriptive one. In the prescriptive approach rules are not used intuitively: someone
tells you what is right and what is wrong. In Chomsky’s approach you follow certain rules because
they are there; he says our brain is pre-programmed for language, so our brain has some
stored rules, but you do not know they are rules. According to him, we use language naturally
by respecting those innate rules.
Because of the focus on rules this may look like a prescriptive approach, but since the rules are internal
rather than imposed by someone else, it is a description of how language functions rather than a
prescription.

Generative grammar
The main assumption is that language functions on an innate system of rules (a grammar, a predisposition), and
the focus is on which structures are allowed, i.e. which your brain considers acceptable, and which are discarded
(because the brain considers them unacceptable). It focuses on sentence units or parts, such as phrases, and
their combination into clauses. According to Chomsky, “grammar is autonomous and independent of
meaning”; therefore you could come up with perfectly grammatical phrases which nonetheless make no sense.

Chomsky reviewed Bloomfield’s theory and built his own system in opposition to it.
Consider, for example, “Colourless green ideas sleep furiously”. The sentence does not necessarily lack meaning,
because whether it can be evocative depends on context. It is certainly not informative, but it can be read
as a kind of poetry and therefore as evocative.

Grammar is keyed to function: the function of the parts and the time at which the actions take
place do not depend on meaning. Yet grammar is not entirely independent of meaning, because it has
some function in relation to it. Using grammar to disambiguate sentences is not something we normally do;
we only do that with very ambiguous phrases, when our brain uses grammar to work out the meaning
of a sentence.

The decline of generative-transformational grammar was due to its focus on the sentence in isolation.
Moreover, it did not intend to analyse performance but competence, and it was a ‘formally’ valid theory
yet far from psychological reality: if the Standard Theory were psychologically realistic it would entail
processing complexity, so that a sentence built with no or few transformations should be decoded faster
than one with many transformations, which is hardly the case.
Syntax is not fully independent; it is a structure that may help us understand the meaning of a
sentence. For example, “The author wrote the novel was likely to be a best-seller” seems to have
something missing, but in reality a “that” between wrote and the has been omitted, as English rules allow.
Language is based on semantics rather than syntax.

According to generative grammar, we have an innate system of rules residing in an area of our
brain. Knowing how the brain works, we now know that there is no such place in the brain where
this core resides. The first problem is that generative grammar is unrelated to meaning. The second
problem is logical complexity: core sentences, according to Chomsky, are constructed in linear
ways (I gave a pen to Mary) and then get transformed on the way to the surface (I gave Mary a pen).
According to the theory, this second sentence should be more elaborate to process than
the first one.
A noun phrase is a complex umbrella term which includes everything relevant to a certain noun. On the
other hand, a verb phrase includes everything related to the verb. This kind of terminology comes from the
Generative grammar.

Systemic functional linguistics


Another very relevant approach to the language is systemic functional linguistics by Michael Halliday, who
managed to combine different ways of thinking about language, not only in terms of regularities and
rules but also in terms of function. Studying this approach is relevant for two purposes: for teachers, to
teach the language; and, more broadly, to gain a different grasp of the language as used in different contexts,
not only to learn how to use it correctly.

The main assumptions of this approach are: a focus on appropriateness (measurable only with
respect to a given context), which is the sum of purpose and context (effectiveness); and a focus on the text
rather than on the clause (noun phrase, verb phrase, etc.), because according to Halliday communication
involves not only clauses but also how they are interconnected, through resources and cohesive devices
that link parts (reference, substitution, ellipsis, conjunction, lexical cohesion, pauses, etc.).

In the first edition of his Grammar (1985), Halliday says there are two reasons why we use language:
a transactional function (to get things done) and a social function (to establish
a relation with the interlocutor). If you start analysing language with that in mind, there are
plenty of rules to accommodate any possible communicative intent.

In the clause three different meta-functions are combined:


 Experiential / ideational / representational: the expression of content, ideas, our interpretation of
the world (FIELD)
 Interpersonal: the relationship between participants and attitude towards the content (TENOR,
polite, formal, informal, assertive, etc.). You present yourself to the interlocutor and you establish a
relation with him. Moreover, you convey the way you think about what happened, your interpretation
of things rather than an objective account. You present an element of commitment.
 Textual: how language is used to organize the text (MODE, written/oral, more concise, clear,
convoluted, etc.). It has to deal with the conventions through which we tend to organize certain
meanings. Redundancy is sometimes considered problematic, but it is a matter of convention, and
the same applies to linearity.

According to the theory, there are three different taxonomies according to which we can analyse clauses.
If we analyse a clause according to the experiential function, we have transitivity (representing processes and
participants), through categories like:
 Agent/actor: carrying out the action
 Recipient: receiving the ‘goods’ or ‘information’ (the indirect object; Italian complemento di termine)
 Affected / goal: the ‘goods’ or information produced / transferred through the action/ process
 Process (action)
 Attribute
 Circumstance: locative, temporal, conditional, concessive, causal, resultant.
Example: Bill will give me the bill tomorrow. Bill is the agent, will give is the action, me is the recipient, the
bill is the affected and tomorrow the circumstance.

The interpersonal function operates according to mood (clause type: indicative [declarative vs.
interrogative] or imperative; polarity: positive vs. negative):
 Subject: the doer, before the verb
 Predicator: carrying the meaning of the verb
 Finite operator: indicates time reference through tense, attitude through modality, interrogation
through inversion, etc.
 Complement: object (direct or indirect)
 Adjunct / adverbial (time, place, etc.). They are called adverbial because in theory they can be
substituted by adverbs.
Example: Bill will give me the bill tomorrow. Bill is the subject, will is the finite operator, give is the
predicator, me is the indirect object, the bill is the direct object and tomorrow is the adjunct.

The textual function operates according to information structure, i.e. what information is placed first
in the clause. The terms used in this taxonomy come from the Prague School.
 Theme is about the initial informative elements, information setting a specific scenario. The
information in the theme is something which is already known by the interlocutor.
 Rheme is the new information which completes the meaning of the theme.
Example: Bill will give me the bill tomorrow. Bill is the theme while will give me the bill tomorrow is the
rheme.
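The three parallel analyses of the same example clause can be laid out side by side as a small data structure. This is only an illustrative arrangement of the labels given above (the variable names and the dict layout are my own, not a standard notation):

```python
# Three parallel Hallidayan analyses of one clause, as plain dicts.
# The labels follow the three metafunctions described in the text.
clause = "Bill will give me the bill tomorrow"

experiential = {  # transitivity analysis (FIELD)
    "Bill": "agent",
    "will give": "process",
    "me": "recipient",
    "the bill": "affected/goal",
    "tomorrow": "circumstance",
}

interpersonal = {  # mood analysis (TENOR)
    "Bill": "subject",
    "will": "finite operator",
    "give": "predicator",
    "me": "complement (indirect object)",
    "the bill": "complement (direct object)",
    "tomorrow": "adjunct",
}

textual = {  # information structure (MODE)
    "theme": "Bill",
    "rheme": "will give me the bill tomorrow",
}
```

Laying the analyses out like this makes the point of the theory visible: the same string of words receives three different, simultaneous segmentations, one per metafunction.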

The possible drawback of this theory is that the three taxonomies, instead of clarifying, may
end up being confusing: there are many terms for defining, say, the subject, and having so many
possibilities can be disorienting.

Performance models
Sociolinguistics (Labov) and discourse analysis (Harris, Stubbs) focus on the interpersonal function
because they study language in practical contexts, where people talk on the basis of a specific role
they are playing, and some roles require a specific register and other features. According
to these models, language is a tool of persuasion.
Functional approaches
Emergent grammar revises what Chomsky said: we are not pre-programmed for language
but become aware of it while we talk; by using the language we resort to specific
forms that come to be considered normal, i.e. grammar. Sinclair introduced pattern grammar, saying that
when we speak we do not follow a series of rules stored in our mind but resort to given
patterns or chunks of language that are available to us. Language is a contextual product,
produced in communication through a collection of formulaic or routine constructions, or frequently occurring
grammatical patterns, drawn from previous experience in similar contexts.

According to Bloomfield we speak as a reaction to a given stimulus, a reaction which is overlearned,
in the sense that we replicate in a similar way a reaction we saw in a given context.

The fact that there are different approaches affects the way English has been approached and
analysed in general. There are different angles, each with its own taxonomy, and if you do not know
which model a taxonomy belongs to, it can be confusing.
The prevailing attitude seems to be that more or less anything goes in English, since the language is
based on experience rather than rules. It is worth noting that Google is used as a source of information
even by scholars, who use it to build corpora in order to find the most recurrent patterns in the language.
The problem is that not everyone who uses English on Google is a native speaker.

Morphology
Morphology is the study of how words are formed. It can be considered part of lexicology.
Morphology provides elements for understanding why certain words carry certain meanings, and why
words never encountered before can still be recognised as having certain meanings.

Lexicology is the study of simple words and complex or compound words as meaningful, self-sufficient
units of the language in terms of:
 Form and structure: the mechanics of the production of these words (morphology)
 Meaning: the information derived from semantics
 Etymology: the origin of words (etymology explains the spelling of words and why
they are written that way, but not why they change)
 With little or no support from lexicography, which has to do with the compilation of dictionaries (these
need to be as clear as possible for users, which is why lexicography is a distinct practice).

Morphology is the study of morphemes, the smallest meaningful units, and their arrangement in forming
words. Smallest means that they cannot be broken down further on the basis of meaning, while meaningful means
that they point to a referent in the non-verbal world. The shortest morpheme is, for example, the plural -s.
For example, childminders is formed by child- [stem] + -mind- [stem, action] + -er [performer] + -s [plural].
Morphemes can be free if they occur alone as individual words (child, mind, sleep) or bound if they only
occur combined with other morphemes to form a word (-er, -s, -less, -ness).
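The free/bound distinction can be sketched as a toy segmenter. This is a minimal illustration under strong assumptions (a tiny hand-made lexicon; the names FREE, BOUND, and split_morphemes are my own, not from the source, and real morphological analysis is far harder):

```python
# Toy morpheme splitter: peel known bound morphemes (suffixes) off the
# end of a word, then split the remaining stem into free morphemes.
FREE = {"child", "mind", "sleep"}    # free morphemes: occur alone as words
BOUND = ["s", "er", "less", "ness"]  # bound morphemes: only occur attached

def split_morphemes(word):
    suffixes = []
    changed = True
    while changed:                   # strip suffixes, longest first
        changed = False
        for suf in sorted(BOUND, key=len, reverse=True):
            if word.endswith(suf) and len(word) > len(suf):
                suffixes.insert(0, "-" + suf)
                word = word[: -len(suf)]
                changed = True
                break
    stems, rest = [], word           # split the stem into free morphemes
    while rest:
        for i in range(len(rest), 0, -1):
            if rest[:i] in FREE:
                stems.append(rest[:i])
                rest = rest[i:]
                break
        else:
            stems.append(rest)       # unknown residue: keep as-is
            rest = ""
    return stems + suffixes

print(split_morphemes("childminders"))  # ['child', 'mind', '-er', '-s']
print(split_morphemes("sleepless"))     # ['sleep', '-less']
```

The output for childminders matches the decomposition given above: two free morphemes (child, mind) plus two bound ones (-er, -s).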

Semantics, instead, is the study of meaning. Meaning is a very complex notion, so much so that even the
definition of semantics needs to be articulated; there are:
 Lexical semantics, which has to do with the meaning of words and word relations
 Sentence semantics, which has to do with the meaning of and between sentences
 Pragmatic semantics, which has to do with the meaning of utterances in context (not with
literal meaning alone, for example).
There are different approaches to meaning, which can be in terms of:
 Intelligibility: the form can easily be processed
 Comprehensibility: the meaning can be processed, due to:
o meaningfulness [literal or surface meaning]
o acceptability [effective meaning, i.e. literal nonsense whose sense is worked out in
context]

There can be utterances that are meaningless but acceptable:
 metaphors (“that work is a cup of tea”), hyperboles, figures of speech, irony, sarcasm (“that woman is
a man”)
 slips of the tongue, typographical errors, mispronunciations.
There are also meaningful but unacceptable utterances (assertions diverging from our
knowledge and/or experience, e.g. crocodiles can fly, the basket ate the vegetables; they may become
acceptable when used to exemplify the inconsistency or contradiction of a logical argument).

Etymology is the study of the origin and history of words; it is a way of finding a logic behind all this
arbitrariness. There is a gap between etymology and meaning, but etymology can be used to find the 'real'
original meaning and to give an explanation, mainly by analogy. Folk etymology applies morphologically
transparent mechanisms to explain obscure forms.
Phonology is the study of the sounds of words. It is relevant for distinguishing pairs like pill vs. bill, sheep vs.
ship, meat vs. meal, and for distinguishing words on the basis of their stress (ex'port [verb] vs. 'export [noun]).

Syntax is the part of lexicology closest to our idea of grammar. In fact, it deals with the mechanisms that
combine words: it is the study of the rules and regularities between classes of words. It is the closest to
grammar because it describes which words we combine in order to produce meaning. Syntax is relevant to
detect whether forms are grammatically well-formed or ill-formed. These concepts have to do with what is
close to or far from the standard, with what you are accustomed to or not: if your brain can process it, it is
well-formed. Syntax is about the relationships existing between words; it is therefore what regulates how
the different parts should be related.

Morphology is the study of words. A word is an uninterruptible unit (affixation is possible only at either end)
that can be formed by one or more morphemes: simple words (one morpheme), complex words (free + bound
morphemes: happily, speaker), and compounds (free morphemes: birthday, coat-hanger).
Words occur in phrases and belong to a specific word class or part of speech:
 lexical words (open class): with independent meaning (nouns, verbs, adjectives, adverbs)
 grammatical words (closed class): with no independent meaning, but modifying meaning relations
(prepositions, articles, conjunctions).
Lexical words are considered an open class because new ones are created every day. On the other hand,
function words (grammatical words) are always the same because everything is already mapped; we could
imagine something new, but it would be unnecessary and linked to a previous element. For example, English
lost the dual number because it was not relevant for communication (this suggests a further link between
language and cognition).

Morphology is generally considered part of grammar, not of lexicology (except in our textbook), namely the
part of grammar that studies inflection and word-formation. Morphology does indeed account for
inflection, which has to do with conjugation and declension, but it also deals with word-formation,
which comprises derivation and compounding.
Derivation is linguistically realised in two ways:
 Concatenation, that is through affixation: the adjunction of affixes, which can lead to prefixation,
infixation (only relevant to oral language or the transcription of oral language) or suffixation
 Non-concatenative derivation, that is the types of derivation produced without adding affixes:
words with a recognisable meaning are produced without affixation to modify the meaning of the
original word. Conversion, truncation, and blending are part of this morphological process.

Scheme:
 Inflection
 Word-formation:
o Derivation:
 Affixation (concatenation)
 Prefixation
 [Infixation]
 Suffixation
 Non-affixation (non-concatenative derivation)
 Conversion
 Truncation
 Blending
o Compounding

Inflection is used to produce new forms of the very same word. The word is always the same, but the
meaning of that form or root is modified in some way. For example, the verb employ can be inflected as
employing, employed or employs. The new forms encode grammatical categories: plural, person, tense,
case ('s genitive).
Inflection, since it only modifies the form of the word, does not change the word class. Inflection is realised
only through suffixation, i.e. elements added at the end of the word. Inflection can also modify derivational
forms (decolonising, underemployed).
Inflection, precisely because it does not produce a new word but modifies an existing one according to fixed
mechanisms, is always transparent and productive, which means that you will always recognise a new form
through the process used.
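Because inflection is transparent and productive, it is easy to model. The sketch below covers regular English verbs only and, by assumption, ignores spelling adjustments (stop -> stopping, try -> tried), so it is an illustration of the regular pattern rather than a full conjugator:

```python
# Inflection produces new forms of the same word by suffixation alone;
# the word class never changes. Regular verb paradigm only.
def inflect(verb):
    """Return the regular inflected forms of a verb stem; each is still a verb."""
    return {
        "3rd person": verb + "s",     # employ -> employs
        "past": verb + "ed",          # employ -> employed
        "progressive": verb + "ing",  # employ -> employing
    }
```

inflect("employ") yields employs, employed, employing: three forms, one word, one word class.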

Derivation is the type of mechanism that produces new words or meanings from a recognisable root
(employ -> employer, employee, employment, unemployed, underemployed, etc.). Derivation usually also
changes the grammatical category of the word: we can have nouns out of verbs, adjectives out of nouns,
etc. Derivation can be realised both by prefixation and by suffixation.
When you apply derivation, there is a risk of coming up with words that are not 100% transparent. For
example, a word can be derived by combining two morphemes whose meanings are clear, while the
meaning of the derived word does not correspond to the sum of its parts, for example realize.
Another possible instance of ambiguity is a derived word with two different meanings depending on which
part of the word is considered dominant; for example, curious can mean that someone is very interested in
something or that that person is interesting.
In terms of productivity, derivation is not always transparent, nor is it always productive: there are
restrictions (semantic, phonological and morphological):
 The adjectival suffix -ive can be added to exhaust (exhaustive) but not to walk
 The adjectival suffix -al can be applied to colony (colonial) but not to computer.

There are some constraints on productivity, especially in derivation:
 Phonological: some sounds would be difficult to pronounce next to others. These constraints are
related to syllable stress (for example, -al can only follow a stressed syllable) and to segmental
restrictions, which concern sounds (-en can be attached to monosyllables ending in an obstruent,
like black -> blacken).
 Morphological restrictions have to do with the morphological make-up of the word (when an
adjective is derived with the suffix -ize and you want to turn it into a noun, you need to add
-ation). In short, some affixes presuppose another specific type of affix to combine with the
one already present.
 Syntactic restrictions: some affixes are usually bound to a specific syntactic category (un- to
adjectives, -able to verbs)
 Semantic restrictions: for example, -ee usually refers to animate or sentient entities, while -ify and
-ize always produce transitive verbs, which means that a direct object is expected.

Non-concatenative processes are the processes that do not use affixation. They are:
 Conversion (also called zero-suffixation or transposition): from verb to noun (walk – have a walk; go –
have a go; bite – have a bite) or from noun to verb (water, motion, chair). These processes are fairly
transparent, but they are not frequently found; when they are used, the author wants something
marked.
 Truncation (deletion, clipping): deleting parts of the word which do not necessarily correspond to a
morpheme (demonstration -> demo; discotheque -> disco; laboratory -> lab). Truncation can be used
for reasons of economy: when you say lab, there are no other words that could lead to
misunderstanding.
 Blending: the amalgamation of parts (not necessarily morphemes) of different words. For example,
smog -> smoke + fog; modem -> modulation + demodulation; velcro -> velour + crochet.
 Acronyms: based on orthography, like NATO, UNESCO, UK, USA, etc.
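The non-affixal processes above can be mimicked with plain string operations. In this sketch the cut points are supplied by hand, because real truncation and blending are conventional rather than mechanically predictable:

```python
# Illustrative sketches of three non-concatenative processes.
def truncate(word, keep):
    """Clipping: keep only the first `keep` letters (laboratory -> lab)."""
    return word[:keep]

def blend(first, second, cut_a, cut_b):
    """Blending: start of one word + end of another (smoke + fog -> smog)."""
    return first[:cut_a] + second[cut_b:]

def acronym(phrase):
    """Acronymy: initial letters of each word, read as one word (NATO)."""
    return "".join(w[0].upper() for w in phrase.split())
```

truncate("laboratory", 3) gives "lab", blend("smoke", "fog", 2, 1) gives "smog", and acronym("North Atlantic Treaty Organization") gives "NATO".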

Morphology relates to the operations that users follow in order to produce new words rather than create
new words from scratch. The difference between production and derivation is that in production you come
up with a new word through processes easily recognisable by your interlocutor, while derivation is not
necessarily transparent.
The most obviously productive affixes have:
 Formal regularity, which means that they are easily attachable (with no transformation), for example
-th, -ness
 Semantic regularity, which means that the modification is done in uniform and consistent ways (-ness
= abstract noun, -ly = in an X fashion)

STUDY COMPOUNDS

The phenomenon of blocking consists in recognising the contexts in which it is better not to form a new
word: existing forms block the production of new ones. Homonymy blocking occurs when there is a
phonologically identical form (with a different meaning) to a derived one (principle of ambiguity avoidance):
liver as a reference to a human being who lives is not used because liver already exists as a word that
sounds exactly the same and refers to an inner organ.
Synonymy blocking (token blocking) occurs when:
 Two possible words would be synonyms: if we already have an existing word with exactly the same
meaning, on the basis of economy we do not create a new one.
 The already existing word is highly frequent and recognisable, so the new word is not created.

Type-blocking is a different type of blocking that applies when:

 Morphological regularity is taken into consideration (gratuity vs *gratuition): if we already have a
noun with a specific meaning, we do not need to come up with another word with a less specific
meaning.
 Special affixes block more general affixes (i.e. redundancy blocks *redundantness). There is one
possible case in which you might want to disregard this principle, usually in a specialised or specific
domain where someone needs to refer to a more general meaning than the one already existing in
the language.

The third phenomenon is compounding. According to the general definition, compounds are formations
that combine more than one word. A compound does not simply combine two words: it combines binary
sequences, like committee member (two words) or award committee member (word + binary
subelement).
The typical feature of compounds is that you can keep adding. This is called recursivity: you can add at the
beginning or at the end without the kinds of constraints you would normally have (except semantic ones).

The first element of a compound is either a bound root (astrophysics, biochemistry — also called
neoclassical compounds because the first part of the word comes from the classical languages and cannot
be found on its own as a word), a word (park commissioner, deep-fry) or a phrase (off-the-rack dress, bed-and-
breakfast men), while the second element is a root or a word.
The technical terms for these two parts are:
 Modifier (left-hand element): everything that comes before the head
 Head (right-hand element): the part that confers the relevant semantic and syntactic information. It
confers its class on the whole compound, which means that its features percolate to the whole
compound (fry is a verb, deep-fry is a verb). To percolate means to be attributed.
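Percolation under this right-hand head rule can be stated in a single line of code. The word-class dictionary below is filled in by hand for the examples in the text:

```python
# The compound inherits ("percolates") the word class of its head,
# i.e. its rightmost element.
WORD_CLASS = {"deep": "adjective", "fry": "verb", "book": "noun", "cover": "noun"}

def compound_class(elements):
    """Word class of a compound = word class of its rightmost element."""
    return WORD_CLASS[elements[-1]]
```

compound_class(["deep", "fry"]) is "verb" (deep-fry is a verb), while compound_class(["book", "cover"]) is "noun" (book cover is a noun).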

We can form compounds by combining different word classes: noun and noun (film society, moonlight),
noun and verb (brainwash), noun and adjective (stone-deaf), verb and noun (pickpocket), verb and verb
(drive-bomb), adjective and noun (greenhouse), adjective and verb (blindfold), and adjective and adjective
(light-green).

There are three types of compounds:


 Endocentric compounds: they are the most common; the grammatical head is also the semantic
head, i.e. the element that carries the meaning of the whole compound. An example is book cover.
 Exocentric compounds: cases in which the semantic head (the semantically most relevant element)
is not lexicalised: the grammatical head of the compound is not its semantic head. For example, a
pickpocket is not a type of pocket but a type of thief. Usually in this type of compound the referent
'possesses' the feature expressed by the compound.
 Copulative compounds: no element is semantically prominent (as head). There are two types:
o Appositional compounds: the referent is an entity characterised by both elements (singer-
songwriter).
o Coordinative compounds: they denote two entities in a particular relationship towards the
head noun (the doctor-patient relationship).

ENGLISH LANGUAGE TEACHING

When we approach something from a speculative point of view, we can produce a fairly coherent design
or outline of the phenomenon. When it comes to learning how to use this tool, things get much more
complicated, because there are plenty of ways to make the tool work: if you want to teach someone how
to use a drill, there are plenty of ways to look at it and aspects to be taken into consideration.
The same goes for the teaching of the English language. The idea of English and of language is
incredibly general; therefore, the concept of teaching is complex as well. In fact, each country has its own
ways of teaching, and the debate on teaching has developed much more in the American context than
in the British one.

There are different types of TESOL (Teaching English to Speakers of Other Languages). General English
(A1-C2) includes Standard (British) English (Standard Southern English, SSE) and General American (GA),
both norm-providing (the language spoken by native speakers who use only that language, and which is
therefore not affected by second or foreign languages). The levels can be summarised as follows:
 A1: you can describe things and people that are present
 A2: you can describe ideas and things that are not necessarily present
 B1: you can describe phenomena or experiences your interlocutor does not see; it is relevant to
description and explanation
 B2: you can substantiate your position, taking alternative points of view into consideration in order
to support your own
 C1 and C2 concern effectiveness in relation to specific content: at C1 you know the right register to
use, while at C2 you are able to switch genre and register in order to convey what you want to say.
The other types are EIL (norm-dependent) and ELF (a contact language).

English can also be taught on the basis of perspective:


 English as a Second Language (ESL): focuses on pragmatics, performance, practical needs
 English as a Foreign Language (EFL): focuses on regularities, form, competence.

English can be taught on the basis of purposes:


 English for Specific Purposes (ESP)
 Business English (BE)

 English for Academic Purposes (EAP)
 English for Science and Technology (EST)
 Etc.

Another possible differentiation is based on orientations:


 General English: the discussion is on the form
 Content-based language teaching (CBLT) or Content and language integrated learning (CLIL) which is
when a certain topic is explained in a foreign language.

CBLT and CLIL both focus on content; however, there are some differences between the two approaches.
CBLT (US) is the teaching of academic subjects through the second language, and it can be done in a
content-driven way (contents are more important than language: by focusing on the contents you learn
something about the language) or in a language-driven way (language is more important than contents).
CBLT came before CLIL and is therefore much stricter, and it is American.
CLIL (Content and Language Integrated Learning) is European: both contents and the specific language
needed to understand or express them are important (including content-obligatory language; little
metalanguage — explanations in the native language are possible but need to be limited). The important
elements are:
 Communication: content transfer and understanding
 Cognition: developing thinking skills, both higher-order thinking skills (why? how?) and lower-order
thinking skills (what? how many?). The former are related to argumentation, the latter to
description.
 Culture: understanding the relation between content and culture; you also understand how a
specific content is considered and what place it takes in that culture.

There are some possible problems with CLIL. It can be done as soft CLIL, in which the teacher can use the
main language to explain what is not clear, or as hard CLIL, a lesson held purely in the target language as if
the teacher did not know the first language. The possible problems are:
 Teachers need to be competent both in content teaching and in language teaching (trained in both
disciplines)
 Alternatively, a content teacher who combines or sequences contents (preparing the ground with
students) must work with a language teacher who transfers or 'translates' them
 The second language can be a barrier to the transmission and learning of the content

Who teaches English?


Are native speakers the best teachers? Native speakerism was the ideology by which native speakers
embody the "ideals both of the English language and English language teaching methodology". It was very
active during the Seventies and Eighties. If you are a native speaker you are competent in the language;
however, this does not mean that you are competent in teaching.
NESTs (Native English-Speaking Teachers) had an appeal of authoritativeness, as they were more fluent and
had greater linguistic confidence. However, they had little or no access to L1 understanding, and a native
speaker's competence in the language is based more on intuition than cognition.
Non-NESTs, on the other hand, possess the metalanguage, in the sense that they approach the foreign
language in metalinguistic terms (third person, indicative, etc.). Moreover, they can use the L1 for
explanation, comparison, etc. Lastly, they went through the very same experience as their L2 students,
which is relevant for understanding their problems.
Hence, the perfect teacher is bilingual, because they know both languages perfectly.

When it comes to getting someone to be able to use the language, there are two different views or
ideologies: language learning and language acquisition. Language learning is mediated: there is someone in
the role of a teacher who uses what you recognise as rules and norms, and you learn through them.
Language acquisition, on the other hand, is sharing experience with people using the language for a variety
of reasons; since there are regularities in the language, you assimilate them. Language learning is what you
get in school, while language acquisition is what happens with your native language when you start to
understand how to use it.

One of the two usually has cognitive predominance over the other, and in some ways of teaching the idea
of language acquisition is much more important than language learning. This difference is based on
Chomsky's idea that there is something in your brain (the language acquisition device) which, on the basis
of given stimuli, knows how to respond linguistically and to combine elements into a structure that makes
sense. Krashen built his hypothesis on this idea: now that we know there is something in our brain that
responds to stimuli, we can build a teaching methodology on it. The methodology is based on the input
approach: language can be acquired through input used in spontaneous conversations. According to
Krashen, noticing has to do with comprehensible input that contains something new (in terms of content or
structure).
Krashen said that the best possible way to make students handle these things is an environment with a low
affective filter, i.e. a relaxing setting (students notice that there is something new and understand or
assimilate the idiosyncrasy in a way as natural as possible, rather than in the artificial way of having the
professor explain it).
Moreover, the text should be as roughly tuned as possible, which means that students should not see that
the text was made for the purpose of making them notice something new. A roughly tuned input goes
beyond the language notions being dealt with.

According to Krashen, this dimension can be complemented by the other dimension, language learning (the
more abstract way of presenting things), which enlarges your view with abstract knowledge of the
language. Abstract knowledge of the language is not bad in itself; however, it can hinder communication,
because it acts as a monitor and controls spontaneous communication (it is not a communicative
facilitator). A finely tuned input is a text that has been cleaned of every new aspect except one, in order to
make students discover only that thing. The problem with this input is that it is rather artificial.

Language learning is based on the idea of explicit knowledge: recognising things because they are pointed
at, explaining to the learner how to use them, and making the learner use them in the way the language is
used. Teaching based on explicit knowledge can be divided into:
 Focus on form: you present texts selected for specific reasons, but you let the students notice one
or more peculiarities of the language, and on that basis you organise a class or a series of classes.
 Focus on forms: you prepare a sequence of classes based on a form or series of forms. Syllabuses
are designed for this. It is a systematic organisation used to highlight peculiarities of the language.
In this case texts may be more finely tuned.

Various approaches or methods – an overview


An approach is the principle at the basis of learning or teaching, while methods are the procedures and
techniques (how to learn or teach).

 Grammar-translation is the oldest approach; it involves rules and translation from one language to
another. It is the approach still used for Latin, because it is a dead language. Knowledge of the rules is
considered sufficient to make you able to use the language (deduction). It is a prescriptive approach.
 The Direct method is based on observation: in class you explain the mechanisms of how a given
language functions. An error is something wrong in competence, while a mistake is something wrong
in the selection of words.
 Audiolingualism (1930s) is based on stimulus-response-reinforcement. Language is considered a habit
(automatization). Linguistic performance was reinforced through drills, memorization, repetition, and
choral responses. It is very mechanical, but not necessarily wrong, because repetition makes certain
things more accessible to the learner.
 The Communicative approach or method (1970s) is based on language as used to transmit meaning
(spoken language is considered more than written language, and production is prioritised over
repetition). It is somehow the opposite of the Grammar-translation approach; the focus is on the
result of communication (role-play).
 The Lexical approach (1990s) considers language mainly as vocabulary, which is therefore more
important than grammar: vocabulary covers the majority of the language, so we should start from it.

There are also communicative approaches:


 Community language learning: getting students to communicate while the teacher/knower provides
translation, records and plays back the students' production
 Suggestopaedia (mid-1970s): relaxed setting, low affective filter (no noises; music; the teacher
should be more of a parent; new names; no traumatic subjects; comprehension of pre-studied
dialogues in such an unintimidating setting)
 Total Physical Response (mid-1970s): giving directions/instructions to be followed (teacher to
students); students give instructions only when they are/feel ready.
 Silent way (1970s): the teacher says as little as possible (using flashcards, charts, etc., and pointing,
directing, nodding, etc.). Learners discover and create language instead of remembering and
repeating what they have learnt.
 Dogme ELT (or teaching 'unplugged'): conversation-driven (no planning), materials-light (responding
to needs/interests). Emergent language (no syllabus; based on students' competence in order to
help them find the best way to say what they want)
 Task-based learning: pre-task stage (introducing topic and task), task-cycle stage (pair activity:
students plan the task [language to carry out the activity] and how to report it to the class
[language to talk about the activity]), and language-focus stage (students discuss the language of
the original texts used for the task).
 Lexical approach: focused on 'prefabricated chunks' (i.e. lexical phrases, collocations, idioms, fixed
or semi-fixed expressions, formulaic expressions such as I'll be in touch to refer to the future).

Procedures
PPP procedure (mid. 1960s):
 Presentation: introducing contents, structures, more or less pre-task in order to show the possible
ways around it
 Practice: repetition of sentences, familiarization with structures to be then used
 Production (more or less task cycle)
ESA procedure (2000s):
 Engage: emotionally and cognitively stimulating students (explaining why this task may come in handy)
 Study: metalanguage, teaching and learning language mechanisms
 Activate: using language to communicate (production)
Post-method: it is based on something that had been overlooked in the past, namely students' needs. The
syllabus is in general a regulating principle, the guiding light when you need to organise a series of
activities. For example, if someone is good at writing but not at speaking, you organise the class according
to their needs.
Students' needs are often taken for granted. The main problem of this approach is that middle-class
students are obliged to go to school, so they may not have needs because they are not interested.
Moreover, they might have curiosities rather than needs, or needs that are too different from one another
to allow a coherent course.
