3.3 Week 3 Lecture Part 1


Lancaster University | Week 3 Lecture: Part 1

This week, we're going to start to focus in on examples, seeing how corpus linguistics is used. And we're going to look at a project which we undertook for the Economic and Social Research Council in the UK, the ESRC, on how British newspapers talk about refugees and asylum seekers.

So that will be the key focus for this week. To some extent, of course, given the nature of this course, the focus will be methodological. But I also want to show, very clearly, the relevance of corpus linguistics to discourse analysis and so-called critical discourse analysis. That's important. And in doing so, I'll give you some examples of techniques and findings.

But first of all, let's recap. Let's remind ourselves about the characteristics of the corpus-based approach-- large amounts of data. The use and manipulation of frequency data. The hunt for co-occurrences and patterns as a way of studying text and exploring meanings. Annotation and grouping-- deciding, for example, that a set of words share a similar meaning.

Quantification-- the accurate identification of how many and where, if you like. And also, where appropriate, measures of statistical significance are employed so that we can test what we're seeing and make more concrete claims as a consequence.
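To make the idea of frequency data concrete, here is a minimal sketch in Python-- hypothetical code, with a toy two-text corpus and a deliberately simple tokeniser, not the tooling used in the project-- showing how word frequencies across a text collection might be counted:

```python
from collections import Counter
import re

def tokenise(text):
    """Lowercase a text and split it into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# A toy "corpus" of two texts; a real study would use millions of words.
texts = [
    "Refugees and asylum seekers were discussed in the newspaper.",
    "The newspaper reported on asylum seekers again today.",
]

# Frequency data: how often each word occurs across the whole collection.
freq = Counter(token for text in texts for token in tokenise(text))
print(freq.most_common(5))
```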

The merits of corpus-based analysis-- well, it helps us recognise the big picture. Rather than looking at one or two texts, we can look at hundreds of millions of words, tens or hundreds of thousands of texts. It can also be very helpful for gisting these large text collections-- finding out the aboutness, finding out what those texts are about.

But of course, corpus linguistics does not eschew-- ignore, repel-- qualitative analysis. Where we find something of interest, we can go in and carry out concordance analysis, for example-- look at the individual examples in context in a routine way, simply by concordancing.
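As an illustration of what a concordancer does, here is a rough KWIC (key word in context) sketch in Python-- again hypothetical code with a made-up sentence, not a real concordancing tool:

```python
import re

def concordance(text, node, window=4):
    """Print each occurrence of `node` with `window` words of context
    either side, in a KWIC-style display."""
    tokens = re.findall(r"\w+", text.lower())
    for i, tok in enumerate(tokens):
        if tok == node:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            print(f"{left:>40} [{node}] {right}")

concordance(
    "The asylum seekers arrived. Newspapers wrote about asylum claims daily.",
    "asylum",
)
```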

And through that, we can examine a large number of instances of specific patterns. And as I've said, concordancing is very useful for this. But also, there can be triangulation here. We can use a range of methods. We can work quantitatively. We can work qualitatively. And we can check our intuitions. We can bring many instruments to bear on the problem that we're interested in, in order to explore it more fully.

This week, we'll be using corpus-based concepts and tools that you've been introduced to already. Let's recap. We'll be looking at key words and key clusters, for example. Key clusters in this sense are simply key words, but where you're looking at two or three words, how frequently they occur, and whether they occur more frequently in one text collection as opposed to another.
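A standard way of testing keyness is the log-likelihood (G2) statistic, which compares a word's frequency in a study corpus against a reference corpus. The sketch below, with invented counts, shows the calculation; it illustrates the general technique rather than the exact procedure used in the project:

```python
import math

def log_likelihood(freq_a, total_a, freq_b, total_b):
    """Log-likelihood (G2) for a word's frequency in corpus A vs corpus B."""
    # Expected frequencies under the null hypothesis of no difference.
    expected_a = total_a * (freq_a + freq_b) / (total_a + total_b)
    expected_b = total_b * (freq_a + freq_b) / (total_a + total_b)
    g2 = 0.0
    if freq_a > 0:
        g2 += freq_a * math.log(freq_a / expected_a)
    if freq_b > 0:
        g2 += freq_b * math.log(freq_b / expected_b)
    return 2 * g2

# Hypothetical counts: "refugees" occurs 500 times in a 1M-word newspaper
# corpus and 100 times in a 1M-word reference corpus.
print(log_likelihood(500, 1_000_000, 100, 1_000_000))  # high G2 => key word
```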

We will also be looking at collocation-- the co-occurrence of two words within some specified span. Again, for both keywords and collocation, we can use statistics in order to help us understand what we're seeing and, in the case of keywords, make claims about the significance of what we're seeing. We can also use collocation as a lens to begin to observe semantic prosody and discourse prosody. So that's what we'll be doing in this series of lectures.
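To show what "co-occurrence within a specified span" means in practice, here is a minimal collocation sketch in Python, using a toy token list and a span of three words either side of the node-- hypothetical code, purely for illustration:

```python
from collections import Counter
import re

def collocates(tokens, node, span=3):
    """Count the words occurring within `span` tokens either side of `node`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            window = tokens[max(0, i - span):i] + tokens[i + 1:i + 1 + span]
            counts.update(window)
    return counts

tokens = re.findall(
    r"\w+", "illegal asylum seekers and genuine asylum seekers".lower()
)
print(collocates(tokens, "asylum").most_common(3))
```

In a real study, raw co-occurrence counts like these would then be fed into an association measure, such as mutual information or log-likelihood, before any claims are made.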
