
The SAGE Handbook of Cultural Analysis

Thinking by Numbers: Cultural Analysis and the Use of Data

Justin Lewis


There will be readers for whom the very title of this chapter will be off-putting: many will regard the prospect of
reading about numbers and methods as, at best, tiresome, and at worst, an irrelevance. They will be tempted to
move on to something more familiar. But my concerns in this chapter are less with the technicalities of
quantification than with the broader questions of the purpose and method of cultural and media studies. In many
areas of cultural enquiry, quantitative data have often been the elephant in the room - superficially ignored, but
undeniably present.

And yet the conceptual tools informing most forms of cultural analysis - such as race, gender, class and popular
culture - derive their meaning and import partly from statistical forms of analysis. One could say that even the
most qualitative form of analysis - a piece of film criticism for example - is never more than a few degrees of
separation from an assumption based on numbers. We can see, across the field, both a creeping acceptance of this and a residual resistance. Either way, both the embrace and the rejection of statistical forms of analysis have
cultural origins and effects. What follows then is not merely an acknowledgement but a consideration of the
elephant in the room.

THE CULTURAL STUDIES CRITIQUE OF QUANTITATIVE RESEARCH


There are, broadly speaking, two different kinds of critique of quantitative research that have emerged from within
and around cultural studies. There are, first of all, political critiques, based on the way statistics can be used to
bolster certain dominant ideas or forms of power. Even an apparently transparent, simple figure, such as an
election result, can be seen to play an ideological role. So while elections are generally presented as having a
‘natural’ meaning, in that they express the will of the people, they might also be seen as rituals that legitimate a
narrow set of political options - especially in countries such as the USA, where the political and cultural economy of
elections means that it is difficult for candidates to win without the support of business interests (Edelman, 1964, 1988; Salmon and Glasser, 1995; Lewis, 2001). In this sense, a simple statistic can conceal the various ways in which elections are structured and constructed to produce certain outcomes that favour dominant groups.

This is very much the gist of Martin Barker and Julian Petley's book, Ill Effects, in which traditional ‘effects’
research into media violence - often associated with the use of statistical or ‘scientific’ methods - is seen as bound
up with reactionary assumptions about morality or censorship (Barker and Petley, 2001). As David Gauntlett puts
it, ‘assumptions within the effects model are characterized by barely concealed conservative ideology’ (Gauntlett,
2001: p. 54). The authors contributing to the collection differ on the degree to which they accept that such
research has other possibilities - Martin Barker, for example, makes it clear that his critique is of specific kinds of
media ‘effects’ research, and uses other forms of quantitative research to make his point (Barker, 2001).

David Gauntlett's more comprehensive attack - one that implicates the methodology itself - goes several steps
further (see also Gauntlett, 1998), and I shall return to it shortly.

The political critique of statistics has been bolstered by the influence of Michel Foucault, whereby the gathering of
statistics can be seen as a form of social surveillance (Foucault, 1977). Statistics thereby become a form of mass
observation, one that allows or leads to power being exerted by the observers over the observed, amounting to
what John Hartley refers to as a ‘technology of control’ (Hartley, 1996: pp. 64–65). Similarly, Ian Hacking's
historical excavation of the notion of probability - an idea that is central to most forms of statistical analysis in the
social sciences - explores some of the ways in which statistical analysis makes possible forms of social regulation
(Hacking, 1990).

Once we wrest the use of statistics from a pure or empiricist scientific realm and thrust it into a cultural domain,
we can see the ways in which it becomes interlaced with forms of power. So, for example, we can observe the
ways in which state or corporate power is extended by market research and other forms of statistical observation,
in ways designed to further the interests of the observers rather than the observed. Thus the purpose of most
market research, while ostensibly measuring people's preferences, is to inform strategies for the profitable
manipulation of those preferences.

But we need to be careful here about where such analysis leads us. If statistics play an important role in stories of
control, dominance or hegemony, that is because they are a tool of storytelling, not because there is anything
intrinsically hegemonic about the use of numbers. So, for example, we can unravel the way in which some of the
conceptual frameworks of classical sociology - notions such as normality, average or deviance - emerge from the
use of statistical calculation (Hacking, 1975). But this does not mean that freedom comes when we dispense with
them. As Bennett et al. argue (1999), statistics are also necessary for more progressive political projects. So, for
example, governments seeking to create effective redistributive mechanisms - such as the allocation of resources
to those with the greatest need - invariably need statistical information to enact them.

There are, secondly, methodological critiques, which emphasize the ways in which statistical or ‘scientific’ models
have failed to grasp the complexity of social life. So, for example, various quantitative approaches in media studies
have been seen as too crude or blunt to tell us about the contextual nature of media content or media influence.
Quantitative researchers, it is argued, will often focus on statistical issues while ignoring more profound
interpretative assumptions. So, for example, Martin Barker takes effects research on media violence to task for
simplistic assumptions about the symbolic nature of media content (Barker, 2001) - a point also made by Stuart
Hall in his famous encoding/decoding essay (Hall, 1980).

This critique can be applied to much of the quantitative academic literature, which could be accused of missing the
big picture in the search for what is deemed to be good statistical practice. Much of the analysis of the media
coverage of polling data, for example, has ignored broader ideological or sociological questions in its concern for
statistical niceties. Thus Reed Welch has criticized the US press for neglecting to give the kind of technical details
that a pollster would regard as standard. ‘The typical article,’ he concludes, ‘does not give information on the
sample size, when the poll was conducted, methods, question wording, and other “minimal essential information”’ -
as defined by the American Association for Public Opinion Research (Welch, 2002: p. 113). In a similar vein,
Stephanie Larson complains that journalists show little understanding of sampling error in reporting polls, and as a result make inaccurate statements about what the data mean (Larson, 2003).
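
Since the point at issue is precisely the kind of arithmetic journalists skip, it may help to make it concrete. What follows is a minimal sketch (in Python, with invented figures) of the standard 95 per cent margin-of-error calculation for a proportion drawn from a simple random sample; real polls complicate this with weighting and design effects.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of 1,000 respondents reporting 52% support:
moe = margin_of_error(0.52, 1000)
print(f"52% +/- {moe:.1%}")  # roughly +/- 3.1 points

# A reported 2-point 'lead' (52% vs 48%) therefore falls inside the
# margin of error -- exactly the 'minimal essential information' that
# Welch and Larson find missing from press accounts.
```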

The problem here is not the desire for standard statistical procedure, but that a focus on issues such as ‘sampling error' misses a broader point about the ideological and discursive constraints within which polls are produced or
mediated (Lewis et al., 2005). Although the constructed, ideological aspects of polling have been much discussed
(Herbst, 1993; Salmon and Glasser, 1995), it is fair to say that political science often reverts to an implicit belief in
the objectivity of polling data. This is not to dismiss polling, but simply to point out that the process of gathering
and representing statistical data is neither innocent nor neutral (although, in an ironic twist, cultural studies has
largely ignored the whole phenomenon of polling, preferring to see it as bad social science rather than as a form of
representation - Lewis, 2001).

All methodologies are bound up in cultural practices, of course, but cultural studies has paid particular attention to
the tendency of quantitative forms of analysis to reduce the complexity of discourse into uniform units. So, for
example, it is argued that the debate about media violence is complicated by the fact that we cannot assume that
violence on the screen is read literally rather than figuratively (Barker, 2001; Buckingham, 2001; Hall, 1980;
Livingstone, 1998). To assume that all representations of violent acts are equivalent is seen as especially reductive
for a discipline that has devoted time and effort to the complex analysis of texts.

Central to these political and methodological critiques is the understanding that statistical or scientific methods are
forms of ideological or discursive production, with their own premises and assumptions. Quantification depends
upon categories - such as class, race, and so forth - and there will always be an arbitrary element both to those
categories and to the ways we constitute them. The stories they allow us to tell are a product of our attempts to
impose sense upon social reality, to structure it into stories and explanations. There is nothing intrinsic, natural or
inevitable about these structures: they are our impositions.

Lurking behind this insight, however, is a set of cultural conditions that bolster the distrust of statistics within
media and cultural studies, leading to the failure to treat numbers with the kind of intellectual verve reserved for
words and images (Kritzer, 1996). There are some interesting exceptions to this, such as Franco Moretti's
flourishing of statistical data - rather than tracing what he calls the ‘tiny dots in the graph’ (Persuasion, Oliver Twist
etc.) - in his analysis of the history of the novel (Moretti, 2003). But what makes Moretti's analysis surprising -
even invigorating - is partly a lack of statistical training or understanding in the humanities (which many will
privately admit to), which in turn fosters a general suspicion of numerical forms of analysis. In a very practical
sense, for many it is much easier to ignore these forms of analysis - especially when the political and
methodological critiques appear to give license to do so. This creates the conditions for slipping away from an
understanding of the cultural nature of statistical analysis towards a rather more opaque indifference and hostility.

Either way, cultural studies has tended to favour qualitative research methods - or indeed, sometimes
methodologically imprecise forms of textual analysis. While there can be much to gain from these approaches, I
would argue that this has limited the scope of cultural studies research and created a bias against forms of analysis
- such as political economy, content analysis or surveys based on representative samples - that often require it. In short, there is nothing intrinsically incompatible between the use of statistical data and a non-empiricist cultural
analysis. So, for example, Pierre Bourdieu's work is highly critical of the way survey data are used in opinion polls,
while nonetheless drawing upon survey data in his analysis of cultural taste (Bourdieu, 1979, 1984).

And although cultural studies research is often qualitative, it nearly always contains quantitative assumptions
(Lewis, 1996). When cultural studies scholars ignore statistical issues, it severely limits the claims they can make,
as well as making them methodologically incoherent. In other words, there is often an assumption within cultural
studies that qualitative research is somehow exempt from methodological constraints. As a result, researchers
have often used qualitative work to argue from the specific to the general, and thereby lapse into untenable claims
and generalizations.

It is, in this sense, useful to consider not only the cultural origins of the use of statistics, but the conditions and
consequences of the rejection of quantification. For just as the use of statistics has a discursive history, so does the
antagonism towards them.


WHEN THINGS DON'T ADD UP


The well-worn cliché about the mendacious use of statistics is appealing to a culture in which numerical forms of
analysis are often poorly understood. For many, there is something almost comforting about the failure of statistics
to paint clear, unambiguous pictures of the world. Witness, for example, the almost ritual moment of confusion
when the British news media report on crime statistics, as in this extract from Sky News:

New crime figures out today suggest that crime could be going up, or down, depending on which set of statistics
you choose to believe. Now one of them is from the British Crime Survey which asks some 10,000 adults about
their experiences of crime whether they reported them or not. Now these are the statistics that the Government
likes to use but then there's the actual number of crimes recorded by the police. (Sky News, 1 pm, July 22, 2004)

Or as BBC News 24 more succinctly put it:

Violent crime increased by 12% last year and offences overall went up by 1%. That's according to new figures
recorded by the police. But the British Crime Survey, which the Home Office regards as a more accurate measure,
suggests that both violent crime and crime in general has fallen, continuing a downward trend that started 9 years
ago. (News 24, 1 pm, July 22, 2004)

The apparent contradiction between two sets of figures - one showing crime going up, one showing crime going
down - becomes a question of, as Sky News put it, ‘which set of statistics you choose to believe’. The news media
are then caught between two competing impulses: to highlight the more dramatic police figures - the route chosen
by Sky News - or to perform a classic journalistic balancing act, offering roughly equal space to two competing
interpretations - the route chosen by BBC News 24 (Lewis, Cushion and Thomas, 2005).

If the BBC's approach is the more laudable, it offers its viewers impartiality without clarity. By giving its audience
both sides, it allows them, in theory, to decide which set of figures they trust. But on what basis can such a
decision be made? Without understanding how such a discrepancy is possible, we have to resort to articles of faith
(do we trust the Home Office's view, for example, or do we regard it as self-serving and therefore unreliable?).
What is rarely attempted by broadcasters is any explanation of why the figures appear contradictory. And yet, as
any criminologist will know, this is easily done.

The police figures are based on recorded crime, and are therefore subject to changes in policing methods or public
willingness to report crimes (so that, for example, a police clampdown on a certain kind of crime should lead to an
increase in reported figures). The British Crime Survey uses a large sample (40,000) to find out about people's
experience of crime, regardless of whether those crimes have been reported. In short, they are measuring two
quite different phenomena. This allows, for example, Tricia Dodd and her colleagues to suggest that:

Police statistics provide a good measure of trends in well-reported crimes, and are an important indicator of police
workload. They can also be used for local crime pattern analysis. For the offences it covers, and the victims within
its scope, the BCS gives a more complete estimate of crime in England and Wales since it covers both unreported
and unrecorded crime and provides more reliable data on trends. (Dodd et al., 2004: p. 12)

Put in this context, we can see that the two sets of figures are not contradictory at all, and that the increase in
reported violence reflects a decreasing tolerance for domestic violence (one of the traditionally most under-
reported types of violent crime) - not least by the police. At a time when BCS figures show violent crime going down, it could be argued that the rise in reported violent crime is less a cause for concern than for celebration, since it suggests that crimes such as domestic violence are being taken more seriously.
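
The arithmetic of the explanation is simple enough to sketch. The following toy simulation (all figures invented, not Home Office data) shows how recorded crime can rise while a victimization survey registers a genuine fall, once the rate of reporting increases.

```python
# Toy illustration (invented figures, not Home Office data): why
# police-recorded crime and a victimisation survey can diverge.
years = [2000, 2001, 2002, 2003, 2004]
true_incidents = [1000, 960, 920, 880, 840]      # actual crime falling
reporting_rate = [0.40, 0.45, 0.50, 0.56, 0.62]  # willingness to report rising

for year, incidents, rate in zip(years, true_incidents, reporting_rate):
    recorded = incidents * rate  # what the police figures capture
    surveyed = incidents         # what a victimisation survey estimates
    print(f"{year}: survey estimate {surveyed:4d}, police recorded {recorded:5.0f}")

# The survey estimate falls steadily while recorded crime climbs from
# 400 to roughly 521: two different measures, no contradiction.
```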

The general absence of this kind of interpretation in media discourse is partly a reflection of journalism's
preference for dramatic or adversarial forms of explanation, and for comment rather than analysis (Lewis, Cushion
and Thomas, 2005). But my reason for using this example is that it also speaks to two broader, countervailing
cultural tendencies -a simultaneous acceptance and dismissal of quantitative data.

On the one hand, it implies a kind of straightforward empiricism, whereby statistical facts are taken at face value,
regardless of how they may be collected. In this instance, journalistic shorthand requires that methodological differences between police statistics and the British Crime Survey are erased - an erasure that renders both sets of
figures broadly equivalent, and hence contradictory.

At the same time, the ‘failures’ generated by this empiricist view - amidst a world of contradictory statistics -
create moments of disillusionment with the whole enterprise. Rather than examining the conditions under which
the statistics have been produced - and thereby accepting the contingent specificity of their meaning - we accuse
the statistics themselves of lying to us (hence there are lies, damn lies…).

Media and cultural studies has, in many instances, taken care to avoid the first of these tendencies - cultural
studies, after all, has such a long-standing critique of empiricism that it is almost taken for granted. But, as we
shall see, it is far from immune to its own kinds of premature acceptance and dismissal. Perhaps the best example
of this is in the way audience research is commonly understood within the cultural studies field.

CONTRADICTORY OR JUST DIFFERENT? THE MEANING OF EVIDENCE ABOUT MEDIA INFLUENCE


You have doubtless read many times - as I have - that in the great debate about the power of media to influence
people, the research data are contradictory. This assumption is then used to support various propositions, one of
which is that the use of statistics to provide evidence - one way or another - is a fruitless enterprise. And yet its
premise is often accepted uncritically. Once we begin to explore the data, it becomes clear that the idea that ‘the
evidence about media influence is contradictory’ is, like the journalistic treatment of crime figures, partly based on
a failure to seriously examine both the evidence and the way it has been produced.

If we take the time to dwell on issues of method and context - as commentaries on this topic rarely do - many of
the contradictions melt away. Part of this involves a rejection of absolutist positions, and accepting that there may
be moments when the media have an influence and moments when they may not. It also involves a
comprehensive understanding that the meaning of data is contingent upon the methodological apparatuses that
produced them.

Nearly 50 years ago, Carl Hovland conducted a review of the research now described as being in the much
maligned media ‘effects’ tradition (Hovland, 1959). He argued that the outcome of the research into media
influence depended partly on the methodologies used. Thus, he argued, experiments - based on classic social
psychological techniques of measured and controlled exposure to media -tended to show a measurable degree of
media influence (as they continue to do - see Livingstone, 1998, pp. 15–16). On the other hand, studies using
what he called ‘sample survey methods’, such as those used in the famous Lazarsfeld, Berelson and Gaudet study
of the media and political attitudes (The People's Choice, 1944), tended to report only ‘minimal effects’.

Hovland accounts for these differences by pointing out that the two methods construct very different samples
under very different conditions: as a consequence the data they produce are not necessarily contradictory at all.
So, for example, a controlled exposure to certain kinds of television programme may well produce measurable
short-term effects. Yet when these programmes are simply part of the media sphere, their longer-term effects
upon audiences in general - who may or may not be paying attention - are likely to be less measurable (and, one
might add, more difficult to reduce to measurable forms).

Hovland himself was steeped in methodological traditions that have subsequently been much criticized. Sonia
Livingstone, for example, summarizes the general problem with experimental methods, the most obvious one
being that people do not consume media in controlled conditions, and that media influence is likely to be gradual
and long-term (Livingstone, 1998: p. 16). Indeed, although researchers such as Hovland were more aware of some
of these issues than some of the potted histories allow, the methodological limits of the ‘effects’ tradition in general
are well inscribed within the history of media and cultural studies (Hall, 1980, 1994; Lewis, 1991; Morley, 1980; or
more recently, Gauntlett, 1998). But Hovland's general point - that different approaches produce different
outcomes - is one that is understood within cultural studies, but often ignored.

Indeed, the issue here is not merely one of method. The ‘contradictions’ in audience research - like the differences
in crime statistics - are often simply a function of different things being measured in different ways. So, for example, a series of interviews might establish how people - a group of ‘fans' perhaps - can use creative resources
to interpret particular television programmes to suit their circumstances (e.g. Hills, 2002; Hodge and Tripp, 1986;
Jenkins, 1992). The same group of people may - when asked about the risk of crime or which issue most concerns
them - unproblematically reproduce dominant media frames (e.g. Gamson, 1989, 1992; Iyengar, 1991; Kitzinger,
2000; McCombs, 1981). The fact that these findings suggest different things about media interpretation and
influence does not make them contradictory. Far from it: they allow us to understand the contextual and specific
nature of the relationship between media and audiences - thus we may record engagement and creativity in one
context and acceptance in another.

In short, the meaning of data is historical and contingent, rather than absolute. This appreciation allows us to
understand data in the context of their production, rather than simply to chalk them up on one side of the argument or
another. This is to challenge a widespread empiricist orthodoxy, and to assert that data are a form of
representation -a discourse about the world rather than a transparent reflection of it - and that the production of
data involves imposing a framework of understanding upon the subject of analysis. These statements are, from a
cultural studies perspective, uncontroversial. And yet there are a number of tendencies within cultural studies that
conspire to blunt this understanding.

EMPIRICISM AND METHOD: UNDERSTANDING QUANTITATIVE FORMS OF DATA PRODUCTION

There is, first of all, a suspicion that statistics or other ‘scientific’ methods - such as the use of experiments - carry
a greater burden of social construction than other forms of data production. We can, of course, debate the degree
to which different research methods construct the subjectivities they report. Watching television with people in
their home, for example, may be seen as less obtrusive than putting people in a laboratory, while focus group
interviews may allow participants to set the agenda in a way that quantitative surveys do not. But all research into
human activity will play a role in shaping either that activity or the way it is reported. All methods are, in this
sense, a form of artifice. In this context, to single out certain methods as being problematic is itself problematic.
This is not an argument for abandoning method, merely for appreciating the nature of its limits.

So, for example, in a broadside against ‘effects’ research, David Gauntlett takes experimental studies on the
influence of television on children to task, on the grounds that ‘participation in an experiment, and even the
appearance of the adults involved in the study, can radically alter children's behaviour’ (Gauntlett, 2001: p. 56).
Quite so: but one could say much the same about some of the methods he proceeds to advocate - such as
encouraging ‘children to make videos themselves as a way of exploring what they got from the mass media’
(Gauntlett, 2001: p. 59). This may be a different kind of intervention, but it is an intervention nonetheless (and,
one might argue, one that is no less intrusive or instrumental, in its way, than conducting an experiment).

There is also an irony here, since the assumption lying behind this criticism is that ‘effects research’ tends to show
that people are ‘negatively affected by the media’ (Gauntlett, 2001: p. 57). While this may be true of some
studies, the classic era of effects research (apart from the experimental studies Hovland refers to) argued against
rather than for the idea of media influence. A re-reading of classic ‘effects’ studies such as The People's Choice
makes the oft-repeated criticism that effects researchers treat audiences as passive cultural dupes seem decidedly
misplaced. On the contrary, their work appeared to reveal the limits of media power and the strength of existing
social networks. Indeed, from a cultural studies perspective, the problem with ‘effects’ is partly its failure to
investigate the ideological influence of the media.

So while most cultural studies researchers are lined up squarely against empiricism, the critique of statistical or
experimental methods often carries the subtle implication that there are certain privileged methods that allow us to
observe the empirical world in its natural state. This is, in practice, to reject one form of empiricism only to reinstate it in another.

What needs stating emphatically is that empiricism is not a method, nor a consequence of methodological
decisions. We cannot say, for example, that quantitative surveys are empiricist while textual analysis or
ethnography is not. Empiricism is a way of understanding data - one that assumes the data have a transparency in

http://www.sage-ereference.com/view/hdbk_culturanalysis/n31.xml Page 6 of 20
Thinking by Numbers: Cultural Analysis and the Use of Data : The SAGE Handbook of Cultural Analysis 15/10/11 15:48

which the real world is revealed. This should be antithetical to a cultural studies approach, which sees data as a
signification rather than a reflection of the world. Too often, however, this understanding is applied rigorously in
some instances and forgotten in others. There is no reason why we cannot appreciate surveys, statistics and
experiments in ways that understand the limits of how that research was produced, just as we could, if we so
wished, apply empiricist assumptions to ethnographic or any other kind of qualitative methods.

There is, in other words, a danger of imagining there is some kind of authentic realm of human experience that
certain more naturalistic methods allow us to observe - of demarcating between methods that take us ‘closer to
reality’ and those that do not.

We may conduct interviews with groups of people who are familiar with one another in their own homes, in order
to encourage people to talk as they would normally do with their family or friends (e.g. Gillespie, 1995; Jhally and
Lewis, 1992; Morley, 1986). Since many people spend a great deal of time in these spaces, it is reasonable enough
to attempt to explore their discursive dimensions. But this does not mean that this is the special place where people's
real opinions and understandings of the world reside, or, because it is more commonplace, that this place is
necessarily the most important site of public interaction.

A response to an opinion survey may be a less typical manifestation of public opinion than a day-to-day
conversation. It is, after all, a highly scripted encounter that offers little space for freedom of expression (Lewis,
2001). As Pierre Bourdieu suggests, ‘the opinion poll would be closer to reality if it totally violated the rules of
objectivity and gave people the means to situate themselves as they do in real practice’ (Bourdieu, 1979, pp. 127–
128). But the manner in which people respond to opinion questionnaires may have identifiable political
consequence in ways that what we say to our friends and families does not (and vice versa). The conventional
response from cultural studies - which is inclined to dismiss polls as telling us little about what Bourdieu calls ‘real
practice’ -misses this point.

So, for example, it may be that there are instances when news has little impact on our day-to-day conversations,
and only becomes relevant on those rare occasions we are asked to respond to a questionnaire. But the potential
importance of those poll responses in public discourse means that this is a form of media influence worth taking
seriously. So, for example, while polls suggested that the British people were, overall, at best divided and often
opposed to the 2003 war with Iraq in the months before and after, the surge in support recorded during the initial
phase of the war itself helped to legitimate it at a crucial time. The fact that this surge was not necessarily a
collective change of mind, and that it concealed a number of ambiguities - as well as reflecting a change in media
coverage (Lewis, 2004) - is important to understand. But while we may challenge a simplistic reading of these
data, we must also acknowledge that this representation of public opinion is a matter of consequence.

More generally, many would argue that the lack of clarity in the way cultural studies deals with data suggests a
wider problem with cultural studies, which is that it is often methodologically weak. So, for example, Edward
Herman attacks the ‘active audience’ approach for giving ‘great weight to what a tiny sample of viewers say about
their perspectives on TV… fitted into a framework that takes the existing programming offerings as a given’
(Herman, 1999: p. 278). While I would defend some qualitative audience studies for their methodological probity,
one does not have to look too far to find generalizations being made on the basis of small, select samples without
proper consideration being given to the role of methods in producing certain outcomes.

There is a large volume of quantitative data that is often ignored by accounts of the field from within cultural
studies - presumably on the grounds that it is tainted by flawed theoretical assumptions - while smaller studies
with glaring methodological limits are often seen as revealing clear and identifiable truths. But if we accept that all
inquiries have their limits - regardless of whether those conducting them acknowledge them - we can proceed to
interpret the data accordingly.

CLASS, UNITY AND CULTURAL CONSUMPTION


Part of the problem here, I would suggest, is the way the presentation of statistical forms of analysis inclines
towards an emphasis on unity rather than fragmentation. Statistics are, first of all, often seen as the product of -
and are articulated through - structural unities: such as income, social class, public opinion or other forms of classification. However much one may appreciate the complexity of social life, there is inevitably a tendency, when
summarizing data, to use descriptions that dwell more on what is common or typical, and to express this in ways
that could be seen to amplify a tendency into a rule. So, for example, there is often a slippage in discussions of
public opinion towards the use of the singular, so that a majority response is referred to as a manifestation of the
public mind, rather than as the articulation of a number of publics. These publics, in different contexts and for
different reasons, may momentarily converge upon a particular response, but this convergence is constructed by
the poll itself, and does not mean a unity of origin or purpose.

Bennett et al. make a similar point in their analysis of cultural consumption in Australia:

In the ‘narrative’ modes of presentation that we, like anyone else who attempts to describe and interpret social
processes, have necessarily used, we have tended to shift imperceptibly from talking of probabilities to talking of
the characteristics of a population: from ‘there is about a 10 per cent greater likelihood of men (defined by gender
alone without reference to other dimensions of social being) reading newspapers daily than of women (similarly
defined) doing so’, to ‘men read newspapers more frequently than women’. We substantialise a set of statistical
variations, thereby producing typifications. (Bennett et al., 1999: pp. 257–258)

As they acknowledge, this tendency to typify - ‘to typify is to reduce complexity' (Bennett et al., 1999: pp. 257–258) - not only ignores those - sometimes majorities - who don't express statistical trends, but neglects the other social forces at play that do not work to support the relationships suggested by the statistical trend. So, to use their
example, gender differences in newspaper readership operate within and outside a whole variety of social
indicators. At the same time, the percentage difference between men and women in newspaper reading may well be statistically significant - 10 per cent, for example - while obscuring the fact that most men and women conform to similar patterns of newspaper readership.
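
The statistical point can be made concrete with a small sketch (invented counts, not Bennett et al.'s data): a 10-point gender gap is comfortably ‘significant' at conventional thresholds, yet most respondents of both genders still behave alike.

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented counts: 55% of 2,000 men and 45% of 2,000 women
# report reading a newspaper daily.
daily_readers = [1100, 900]
sample_sizes = [2000, 2000]

z, p_value = proportions_ztest(daily_readers, sample_sizes)
print(f"z = {z:.1f}, p = {p_value:.2g}")  # z ~ 6.3: a highly significant gap

# Yet the typification 'men read newspapers more than women' passes over
# the 900 women who do read daily and the 900 men who do not: the
# statistically significant difference coexists with broad similarity.
```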

The trick here is to consider the significance of the 10 per cent difference in the context of ‘multi-dimensional
matrices’ against the backdrop of the complexity of social life. This, indeed, is very much the thrust of their
analysis, which seeks to develop Bourdieu's analysis of cultural practices and preferences in a way that
acknowledges not only the importance of popular culture but the complexity of class formations, as well as the
other forms of difference such as gender, race and ethnicity.

Bourdieu used surveys to demonstrate ways in which the distribution of cultural preferences was linked to social
class. In so doing, he was not attempting to construct a series of statistically significant causal links to find the
ultimate causal variable. Much like Stuart Hall's move from the notion of determination to articulation (1996), he used data to establish clusters and associations in ways of thinking or behaving (the ‘habitus'). He was thereby
able to theorize a notion of ‘cultural capital’ -a non-economic form of distinction that nevertheless exists alongside
economic and social class relations in a relationship of mutual reinforcement. His analysis showed how arbitrary
cultural distinctions - what is good, what is bad - became embedded in class hierarchies, articulating and
reproducing class distinctions (Bourdieu, 1984).
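
Bourdieu's statistical instrument in Distinction was correspondence analysis, which maps the rows and columns of a contingency table into a shared space of associations. The following is a minimal sketch of that computation on an invented class-by-taste table; the categories and figures are illustrative, not Bourdieu's.

```python
import numpy as np

# Invented contingency table: rows are class fractions, columns are tastes.
rows = ["professionals", "employers", "manual workers"]
N = np.array([[60, 30, 10],    # opera, jazz, stock-car racing
              [30, 40, 30],
              [10, 20, 70]], dtype=float)

P = N / N.sum()                      # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)  # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardised residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

row_coords = (U * sv) / np.sqrt(r)[:, None]  # row principal coordinates
for name, (x, y) in zip(rows, row_coords[:, :2]):
    print(f"{name:>15}: axis 1 = {x:+.2f}, axis 2 = {y:+.2f}")
# Class fractions that share tastes land close together on the map:
# clusters and associations, not a single causal variable.
```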

While Bourdieu's sociological analysis was situated outside cultural studies, his analysis struck a powerful chord
within it. Indeed, Bourdieu's use of data made tangible one of the key strands in the formation of British cultural
studies, which, through the work of Raymond Williams and Richard Hoggart, displaced the essentialism of a
Leavisite view of cultural value (Leavis, 1930). Bourdieu's work does more than counter the Leavisite tradition (in
which a privileged minority are exalted as the arbiters of cultural value), it explains it, showing how the Leavisite
view of culture worked to sustain and legitimate class distinctions.

Other surveys in Britain, two decades later, produced similar findings, suggesting that cultural capital was, in its
way, every bit as robust as economic capital (Lewis et al., 1986). They suggested that, despite egalitarian aims,
even progressive local authorities - such as the Greater London Council in the 1980s - while removing economic
barriers to participation, had often failed to dislodge cultural practices that favoured those with certain forms of
cultural capital (Lewis, 1990). The point of such surveys was both critical and administrative, designed to
support a progressive cultural politics that acknowledged, challenged and even dismantled the distinctions
described by Bourdieu.


While Bourdieu's work in Distinction (1984) clearly informs cultural studies, its lack of engagement with popular
and mass cultural forms and its emphasis on a single axis of distinction (class) are clearly out of kilter with a post-
structuralist cultural studies concerned both with multiple sources of power and differentiation (of which class was
but one) and with the detailed analysis of popular and mass cultural forms. Bennett et al., inspired by a critical
engagement with Bourdieu, also use surveys in their mapping of cultural preferences and tastes in 1990s Australia,
but they do so in a post-structuralist context, one that also registers the shift from 1960s France to 1990s Australia.
Despite what they call a ‘weakening of taxonomic boundaries’ (Bennett et al., 1999: p. 13), they remain committed
to the use of surveys to excavate patterns and distributions of cultural consumption.

What their data analysis indicates, indeed, is a partial displacement of social unities in the analysis of cultural taste.
So, for example, their surveys suggest important distinctions between groups often clustered together in class
groupings, notably between employers and managers on the one hand and professionals on the other. While
professionals possess cultural capital in abundance, this does not typify a more general class position. Their data
suggest that employers and managers -arguably more powerful class groupings, in economic terms - appear less
attached to the use of culture as a form of distinction. Employers, in particular, are less interested in the cultural
capital derived from an emphasis on ‘art and musical training’ in school or from ‘high-cultural activity which [is] not
expensive or socially prestigious’ (Bennett et al., 1999: p. 261). They do not share, in this sense, the professionals'
interest in cultural capital in its own right. For them, cultural capital is important only when linked to ‘social capital’,
derived from such things as a private-school education or a ticket to the theatre or the opera. Thus, while
acknowledging the privileges that enhance the acquisition of cultural capital by professionals in an information
society, Bennett et al. argue that ‘it is economic and social capital that play the major role in the generation and
reproduction of class inequality in Australia’ (Bennett et al., 1999: p. 268).

What their survey shows, in other words, are points at which the cultural, social and economic capitals overlap and
points where they diverge. In so doing, their use of surveys, far from creating structural unities, allows a more
situated and nuanced understanding of the relationship between power, class and cultural taste.

MEDIA POWER REVISITED: PROOF, PLAUSIBILITY AND THE MEDIA SPHERE


Part of the problem with the use of statistical data in cultural analysis is that regardless of the methods used, there
is a tendency to adopt a principle of certainty common in mathematics and the natural sciences. In these domains,
it is possible to control conditions and isolate variables in ways that make the notion of ‘proof’ appropriate as a
measure of success. In the social sciences - especially when we are dealing with something as elusive as human
consciousness - it may be possible to isolate certain variables, but any attempt to control conditions (as for
example, in an experiment) is itself problematic. And yet there is still a tendency to think in binaries of
‘proven’ or ‘not proven’, when we might be better off identifying how best to explain the patterns we find. So, for
example, it is a point often made that a correlation - no matter how well established - does not prove causality. A
correlation between choosing to view violent programmes and enacting violent behaviour does not prove that the
first causes the second - or vice versa. What we should ask, in the context of all that we know about culture and
society, is what are the most plausible explanations for this relationship?

So, for example, a study by Stephen Kull and colleagues at the Program on International Policy Attitudes (Kull,
2003; Kull et al., 2003) found that there was a correlation between watching Fox News and belief in certain myths
about Iraq (assuming, for example, a direct connection between Iraq and the September 11th terrorist attacks).
After conducting a regression analysis, the authors found that:

Fox is the most consistently significant predictor of misperceptions. Those who primarily watched Fox were 2.0
times more likely to believe that close links to al Qaeda have been found, 1.6 times more likely to believe that
WMD had been found, 1.7 times more likely to believe that world public opinion was favourable to the war, and 2.1
times more likely to have at least one misperception. (Kull et al., 2003: p. 589)
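
Likelihood ratios of this kind typically come out of a logistic regression. As a hedged illustration of the underlying technique (synthetic data and invented variable names, not PIPA's), the sketch below shows how such a model can test whether a news-source effect survives a control such as support for the president.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Synthetic respondents: news source and political support are correlated,
# which is exactly why the regression control is needed.
bush_supporter = rng.binomial(1, 0.5, n)
watches_fox = rng.binomial(1, 0.3 + 0.3 * bush_supporter)

# Build in a genuine news-source effect alongside a support effect.
logit = -1.0 + 0.7 * watches_fox + 0.8 * bush_supporter
misperception = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([watches_fox, bush_supporter]))
result = sm.Logit(misperception, X).fit(disp=0)
print(np.exp(result.params))  # odds ratios: [const, fox, bush]
# The fox odds ratio stays near exp(0.7) ~ 2.0 even with support for
# Bush held constant -- the shape of the finding quoted above.
```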

These links held up after a number of other plausible explanations had been explored. So, for instance, the correlation could not be explained by the fact that Fox viewers were more likely to support President Bush:

For example, 78% of Bush supporters who watch Fox News thought the US has found evidence of a direct link to al-Qaeda, but only 50% of Bush supporters in the PBS and NPR audience thought this. On the other side, 48% of
Democrat supporters who watch Fox News thought the US has found evidence of a direct link to al-Qaeda, but not
one single respondent who is a Democrat supporter and relies on PBS and NPR for network news thought the US
had found such evidence. (Kull et al., 2003: p. 583)

What was particularly striking, in this context, was that in ‘the case of those who primarily watched Fox, greater
attention to news modestly increased the likelihood of misperceptions’ (Kull et al., 2003: p. 586). Moreover, they
found that people who held these misconceptions were, as we might expect, more likely to support the 2003 war in
Iraq (similar kinds of findings were reported by Morgan, Lewis and Jhally, 1992, in their analysis of support for the
1991 war with Iraq).

These data, in conjunction with studies of content - which show that Fox paid less attention to criticisms of pro-war
arguments than other networks (Rendell and Broughel, 2003) - suggest that one of the most plausible conclusions
we can draw is that Fox News played a role in misinforming the public about Iraq, and that these misconceptions
are likely to have made it easier for people to support the case for war. And yet to make such an assertion seems
alarmingly bold in a culture where the benefit of the doubt falls against such an assertion of clear media influence.
Since these correlations do not prove causality, the most plausible explanation for these findings - one that
strongly suggests a very specific form of media influence - can be simply dismissed as ‘case not proven’. At the
same time, other explanations for the correlation are not subject to the same degree of proof or plausibility. In
short, inquiries into media influence tend to begin with the prevailing judicial assumption that influence must be
proved, while non-influence must not.

It was for this reason that, after their report caused a stir in the US media, Stephen Kull felt obliged to muddy the
waters, and offer other kinds of explanation for the data. As he put it in an interview with the press, their data did
not necessarily mean that Fox's coverage was creating misconceptions: ‘It can be that they (Fox) attract a kind of
viewership that doesn't like to pay attention to disconfirming evidence. Their audience is more prone to self-
delusion. That's a possibility’ (The San Diego Union-Tribune, October 14, 2003). A possibility indeed - but is this
explanation more or less plausible? And surely, even if we accept other possibilities, to suggest that Fox News
played no part in fostering misconceptions about Iraq is, in the context of these data, highly implausible.

Part of the problem here is that the correlations found in surveys are nearly always tendencies rather than
certainties. So, for example, in explaining why 60 per cent of heavy television viewers think X, how do we account
for the 40 per cent who do not? As Bennett et al. point out: ‘much of what is most interesting in our data … happens non-typically: adolescents who read extensively or listen to classical music; manual workers who watch stock-car racing and play the violin' (Bennett et al., 1999: p. 258). In short, while the typicalities suggest patterns
and relationships, the non-typical reveals their fragility. Quantitative social science likes to uncover rules and
relationships that allow it to predict outcomes, but these predictions will usually be about what is likely rather than
what is inevitable.

The notion of plausibility is, of course, neither abstract nor vague. My use of it here is in contrast to ideas such as
‘common sense’, or ‘conventional wisdom’, which can be self-justifying and tautological, and which are often
offered instead of a theoretically informed body of evidence (Parry, 1992). There are no absolutes here: plausibility
simply implies an attention to the weight of evidence and a knowledge of appropriate theoretical models, forms of
enquiry and analysis.

In sum, this is an area where scientific forms of ‘proof’ are likely to be elusive, and where there will nearly always
be patterns and relationships that do not fit the explanations we uncover. So, for example, since we are never
going to be able to watch media influence happening before our very eyes, and since the ubiquity of media makes
the use of control groups difficult, a plausible explanation is about as close as we can usually get. Because it is so
difficult to control conditions and isolate media impact in ways that show causality, we have, for too long in my
view, either erred on the side of caution or assumed that what is difficult to measure is probably not there anyway.

On a broader level, one of the reasons why so many different assertions are made about media influence is that we
lack the kind of systematic meta-analysis - one sympathetic to a cultural studies approach to the contextual and historical meaning of data - that Carl Hovland attempted in the 1950s. While I would not want to pretend that my
own reading of the field is comprehensive, I would argue that we now have enough information to appreciate many
of the complexities of the relationship between media and audiences.

This means abandoning the binaries of the debate about media and culture, whether it takes the forms of ‘active
audiences’ versus media effects, or cultural imperialism versus localized meaning-making. In other words, it is
reductive - and implausible - to simply see evidence in terms of whether it adds weight to the case for or against a
relationship. In the case of media influence, for example, we need to abandon, once and for all, the question of
whether the media are influential and use our interpretative skill to examine how they are influential. Thus we need
to move beyond the idea that the evidence is contradictory, and begin to assert the various contexts in which
media influence is likely - or unlikely - to manifest itself.

There is a body of work that already does this: indeed, this is very much the spirit of Stuart Hall's
encoding/decoding model (Hall, 1980) and David Morley's well-known articulation of it (Morley, 1980) over a
quarter of a century ago. Indeed, it has been argued that the emphasis on active audiences in cultural studies has,
regardless of its merits, dragged the debate backwards towards old binaries. Part of the problem, perhaps, is the
focus within cultural studies on the semiotic relationship between texts and audiences (which is the concern of both
the encoding/decoding model and active audience theory). As important and intriguing as this relationship is, it
deals with only one moment of media influence, and thereby pushes an empirical focus onto individual texts. In so
doing, it shares some of the limits of the psychological experiment.

In an increasingly media-saturated world, this emphasis is harder to sustain. While it is hardly new to say that media influence is likely to be long-term and diffuse (Dyer, 1982), it is also likely that media are influential in their
collectivity rather than in isolation. So, for example, the ability of an individual media text to be influential partly
depends upon its position in the whole media sphere. This was one of the advances made by cultivation analysis,
which sought to move away from a text-based model of effects and create a statistical, sample survey model that
might overcome some of the flaws of the traditional ‘minimal effects' studies.

What mattered, according to cultivation theory, was not individual television programmes, but those stories
repeated across television as a whole (Signorielli and Morgan, 1990). However, despite television's continuing
importance, even this idea relies on a distinction between television (and hence ‘light’ and ‘heavy’ TV viewers) and
other media - a distinction that is increasingly difficult to sustain. If, for example, cultivation analysis finds little
difference between ‘light’ and ‘heavy’ TV viewers, this is put down in the column against media influence. And yet
the ‘light’ TV viewer may be getting much the same messages from other media (new or old) as the heavy viewer
gets from television. This will lessen the differences between the two groups, while allowing for the possibility of a
more comprehensive form of media influence (i.e. one that is not limited to television), that acknowledges, as John
Corner puts it, ‘media's consequentiality within society’ rather than ‘media's consequentiality upon society’ (Corner,
2000: p. 394). Indeed, as research into the political economy of the media suggests, the proliferation of media
outlets has coincided with a concentration of ownership, so that light television watchers may well encounter much
the same content elsewhere (McChesney, 2000). This, in turn, diminishes our ability to isolate television in
explorations of media influence.
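
The attenuation argument can be put numerically. A toy sketch (all figures invented): if other media carry the same recurrent message, the measurable gap between ‘light' and ‘heavy' television viewers shrinks even though overall media influence remains large.

```python
# Toy illustration (invented figures) of the attenuation argument.
# Suppose exposure to a recurrent media message lifts the probability
# of holding a given belief from a baseline of 0.2 towards 0.6.
def belief_rate(tv_exposure: float, other_media_exposure: float) -> float:
    exposure = max(tv_exposure, other_media_exposure)  # same message either way
    return 0.2 + 0.4 * exposure

# World A: television is the only carrier of the message.
gap_a = belief_rate(0.9, 0.0) - belief_rate(0.1, 0.0)
# World B: 'light' viewers meet the same message via other media.
gap_b = belief_rate(0.9, 0.8) - belief_rate(0.1, 0.8)

print(f"TV-only world:      light/heavy gap = {gap_a:.2f}")  # 0.32, cultivation visible
print(f"Media-sphere world: light/heavy gap = {gap_b:.2f}")  # 0.04, influence concealed
```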

The problem with cultivation analysis, in this sense, is that it is only equipped to measure television's influence in
those cases where the stories television tells are consistently different from those stories told by other media.
Unfortunately, life gets much more difficult for the cultural analyst if we move away from media texts to an
examination of the whole media sphere - or alternatively, to tracking which parts of the media sphere tend to be
used by particular social groups (as the Kull et al. (2003) study attempted to do). Not only does it increase the size
of the object of analysis to an almost unmanageable degree, it makes it even more difficult to isolate variables
across the media sphere.

This stacks the deck against attempts to reveal media influence in action: there will always be something outside
the remit of a research study that might provide another explanation of the findings. So, for example, it is possible
that cultivation analysis, by focusing on those aspects of media content that distinguish television from other
media, tends to downplay its influence as part of the broader media sphere. And yet, to date, criticisms of cultivation tend to want to limit the claims cultivation can make, rather than argue that their methods may actually
understate media influence (e.g. Hirsh, 1980). Once again, our watchword should be plausibility rather than proof.
But it is also here that numbers and statistics become not only useful but necessary. In short, if we are to grasp
the relationship between large audiences and the media sphere, we must begin to look for numerical patterns.

The same might be said for cultural analysis in general. While there may be much to observe at the level of the
individual instance, if we are interested in culture we must also be interested in what these instances do - or do not
- amount to. This is not an argument for abandoning more qualitative forms of analysis, but for complementing
them with quantitative forms of investigation. In the final section, I revisit two forms of quantitative analysis that
might be relevant to such an exercise (both forms that cultural studies has tended to ignore): content analysis and
survey questionnaires.

THINKING BY NUMBERS 1: CONTENT ANALYSIS


Content analysis has often been regarded as too crude a device to tell us much about the nature of media texts,
whose complexity is seen as irreducible to a series of simple descriptive categories. So, for example, translating a
textual entity into statistical patterns, as content analysis tends to do, does not tell us how the various textual
elements form a narrative, or how they combine to create contexts or impressions (what, for example, Teun van
Dijk refers to as the ‘local coherence' and ‘global coherence' of a text - van Dijk, 1991). And yet the limits of
content analysis tend to be self-imposed, partly because content and textual analysis play by a very different set of
rules.

Because, as John Hartley puts it, textual analysis is ‘not a scientific method … with testable observations and
generalizable, predictive results’, but rather an approach that allows us to investigate ‘questions of power,
subjectivity, identity and conflict’ (Hartley, 2002: p. 31), it allows the analyst a degree of freedom. This is not
merely poetic licence: our understanding of narrative structure, visual imagery, and the full complexity of elements
that create moods and impressions - from editing techniques to the use of music -gives us access to forms of
understanding and appreciation that are difficult to reproduce in simple formulae. So, for example, it takes a high
degree of cultural competence to understand a statement as ironic (irony often being a subversion of appearance).
For this reason, the absence or presence of irony in a scene may be a consequence of nuances that are too
contextual or incidental to find their way onto a coding sheet. Textual analysis, being less obviously concerned with
scientific procedure, allows us to use our cultural skill to immediately see the difference.

In this sense, the ‘subjectivity’ of textual analysis can be an advantage, since subjectivity here is not necessarily a
question of doing as we please, but of making full use of our skill as cultural subjects. Textual analysis may use
structures or methods - such as discourse analysis or semiotics - but their purpose is to deepen rather than confine
exploration. Content analysis, on the other hand, is burdened by notions of objectivity, and thus tends to avoid
categories that involve a high degree of evaluative judgement or that might involve ambiguities. The notion of
‘inter-coder reliability’ means that the more open, ambiguous or complex aspects of a text - where the rules of
interpretation are more sophisticated - tend to be ignored. As a consequence, the more difficult aspects of media
texts are generally left to textual analysis.
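
Inter-coder reliability is, in itself, a simple piece of arithmetic, and making it concrete demystifies it. Below is a minimal sketch in Python - entirely illustrative, with invented codes - of Cohen's kappa, one standard measure of agreement between two coders that corrects for chance.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' judgements on the same items.

    Kappa discounts the raw agreement rate by the agreement expected
    by chance, given each coder's marginal distribution of codes.
    """
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented judgements: did each of ten items treat its subject ironically?
coder_1 = ["ironic", "straight", "straight", "ironic", "straight",
           "straight", "straight", "ironic", "straight", "straight"]
coder_2 = ["ironic", "straight", "ironic", "ironic", "straight",
           "straight", "straight", "straight", "straight", "straight"]

print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # kappa = 0.52
```

Raw agreement here is 80 per cent; kappa, at around 0.52, is less flattering, because two coders who both tick ‘straight’ most of the time will often agree by chance alone.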

But if we free ourselves from some of the rigidities of procedure, the difference between content and textual
analysis is not a question of subjectivity and objectivity: both forms are interpretative, and both make truth claims.
Both approaches, in some ways, attempt to speak about the significance of a text in the culture, and both,
therefore, succeed or fail by the same standard. The difference between them is essentially one of language: to
use a metaphor from new media, we might see textual analysis as an analogue form and content analysis as a
digital form. The textual analyst is dealing with the raw text itself, with chunks of faithful transcription, while
content analysis is trying to transform the text into a set of binaries. Both are trying to represent an original
object, but they use very different languages to do so.

The point of this metaphor is that a binary system - one that can turn a symphony into a computer language and
back into a symphony again - is not necessarily reductive, and is capable of great complexity. There is, in other
words, nothing about content analysis as a technology that prevents it from embracing the complexities explored
by textual analysis. It is difficult, certainly, just as it is difficult to design a computer programme to beat a highly
skilled chess-player (whose expertise allows them to take all sorts of shortcuts to see a move, while the machine
must run all the computations).

It is, in theory, possible to design a content analysis with enough subtlety to spot something as culturally specific
as a moment of irony. Indeed, content analysis is only a partially digital form, since it is performed by people
equipped with the cultural resources to tick the irony box. What content analysis does require is the anticipation of
significant content and a common set of understandings about its interpretation. At its best, content analysis is
simply a textual analysis that can be reproduced across a number of texts.
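
To make that claim concrete: a coding frame is, in effect, a reusable data structure. The sketch below is hypothetical - the categories, permitted codes and items are all invented - but it shows how even an ‘irony’ box can sit alongside more conventional categories, provided the coders share rules for ticking it.

```python
# A hypothetical coding frame: each category lists its permitted codes.
# The interpretive rules behind each code (what counts as irony, say)
# live in the codebook the coders agree on, not in the software.
CODING_FRAME = {
    "source_type": {"official", "expert", "citizen", "journalist"},
    "frame": {"conflict", "policy", "human_interest"},
    "irony": {"present", "absent"},
}

def code_item(item_id, **codes):
    """Validate one coded item against the frame and return the record."""
    for category, value in codes.items():
        if category not in CODING_FRAME:
            raise ValueError(f"unknown category: {category}")
        if value not in CODING_FRAME[category]:
            raise ValueError(f"{value!r} is not a permitted code for {category}")
    return {"item": item_id, **codes}

# Two invented items, coded by hand against the same frame - the point
# being that the frame, unlike an ad hoc reading, travels across texts.
records = [
    code_item("item-01", source_type="official", frame="policy", irony="absent"),
    code_item("item-02", source_type="journalist", frame="conflict", irony="present"),
]
print(records)
```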

Indeed, it could be argued that the whole point of textual analysis in cultural analysis (as opposed to, say, literary
criticism) is precisely its reproducibility - its ability to tell us not just about a single text but about textual moments
that are spread more widely across the culture. So, for example, van Dijk's analysis of news discourse attempts to
shed light on the way news in general works ideologically (e.g. van Dijk, 1988), while Jonathan Gray's analysis of
The Simpsons is intended as an exploration of the political and cultural significance of parody (Gray, 2005). What
content analysis allows us to do is to explore the degree to which this is so, to begin to map the media sphere to
see which ideas and discourses are repeated, which are presented as normative or natural and which are
marginalized or excluded. Without this kind of quantification, we have no way of establishing the typicality,
prominence or purchase of ideas or discourses.
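
What such mapping might look like in computational terms is straightforward. The sketch below - with invented records and labels - simply counts how often each coded discourse appears and across how many outlets, a first approximation of typicality and prominence.

```python
from collections import Counter

# Invented coded records: (outlet, discourse identified in the item).
coded_items = [
    ("Outlet A", "market as natural"),
    ("Outlet A", "market as natural"),
    ("Outlet B", "market as natural"),
    ("Outlet B", "market as contested"),
    ("Outlet C", "market as natural"),
]

# How often does each discourse appear, and in how many outlets?
frequency = Counter(discourse for _, discourse in coded_items)
spread = {d: len({outlet for outlet, disc in coded_items if disc == d})
          for d in frequency}

for discourse, count in frequency.most_common():
    print(f"{discourse}: {count} items across {spread[discourse]} outlet(s)")
```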

Although a traditional content analysis is often about body counts - who is being represented and how - it is
equally capable of being discursive. When colleagues and I conducted a content analysis of news coverage of the war in Iraq, for
example, we were concerned not only with the conventional categories for coding news (such as the use of news
sources) but with the way in which the coverage encouraged or discouraged certain assumptions. In particular, we
wanted to see whether two key aspects of the pro-war discourse - the assumptions that Iraq possessed ‘weapons
of mass destruction’, or that the Iraqis themselves were happy to see the US-led forces overthrowing the regime -
were reinforced or questioned by the coverage. In so doing, we were obliged to consider the different discursive
forms these assumptions might take, and to turn this rhetorical analysis into quantifiable data. Thus our analysis
enabled us to establish the extent to which the coverage was more likely to reinforce than undermine pro-war
assumptions (Lewis and Brookes, 2004).

The Glasgow Media Group's approach to content analysis has, for some time, attempted to use an understanding of
ideological context and textual structures to capture certain kinds of discursive elements in news broadcasts (e.g.
Glasgow Media Group, 1976). While they are rarely credited for doing so, their approach moved studies of media
‘bias’ far away from the simplicity of stopwatches (to measure, for example, whether both sides have equal say) to
a more sophisticated analysis of the way in which ideology is inscribed within texts. Their analysis of the coverage
of the Israeli-Palestinian conflict, for example, established that Israeli attacks on the Palestinians were often
referred to in news reports as ‘in response to’ or ‘in retaliation for’ earlier Palestinian attacks, while Palestinian
attacks were rarely described in this way (Philo and Berry, 2004). This is the kind of inflection that we might find in
a discourse or rhetorical analysis, since it shows how the use of language in specific socio-political contexts has
ideological consequences (thus, without showing any overt bias, journalists may be conveying the impression that
one side is more rational and more justified than the other). But their ability to quantify this inflection makes their data relevant not just to an individual text but to the media sphere as a whole (and hence makes claims about the information available to citizens far less speculative).
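
As a schematic illustration (not the Glasgow Media Group's actual instrument, and with invented data): once coders record, for each reported attack, who is attacking and whether the report frames the attack as retaliation, the asymmetry becomes a simple tally.

```python
from collections import defaultdict

# Invented coded reports: (attacking side, framed as retaliation?).
coded_reports = [
    ("side X", True), ("side X", True), ("side X", True),
    ("side X", False), ("side Y", True), ("side Y", False),
    ("side Y", False), ("side Y", False),
]

tallies = defaultdict(lambda: {"retaliation": 0, "total": 0})
for side, framed_as_retaliation in coded_reports:
    tallies[side]["total"] += 1
    tallies[side]["retaliation"] += framed_as_retaliation  # bool counts as 0/1

for side, t in sorted(tallies.items()):
    share = t["retaliation"] / t["total"]
    print(f"{side}: {t['retaliation']}/{t['total']} attacks "
          f"framed as retaliation ({share:.0%})")
```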

This more discursive form of content analysis makes the labour of coding less routine and more interpretive -
something that various institutional traditions tend to work against. So, for example, textual analysis is generally
regarded as a product - and measure - of academic expertise. The interpretive work of coding for a content
analysis, on the other hand, is seen as more mundane and routine - skilled labour certainly, but lower down the
pecking order, and not the province of experts or accomplished scholars. There are practical reasons for this - not
least, because the repetitions that allow content analysis to produce statistical data can become routine. The notion
of inter-coder reliability then acts as an incentive to limit the amount of interpretive skill required for the work - a
lowest common denominator designed not to exploit human ingenuity but to limit human error.

One way to avoid this tendency - and allow greater scope for content analysis - is to make the process of coding a
more collective, reflexive practice. This involves making inter-coder reliability part of the process, rather than
simply a separate form of scientific control. The coding team - including those who designed the coding frame - can
thus work collectively to establish procedures for interpreting textual features during the process itself, allowing
them to interpret ambiguous or complex moments consistently (see, for example, Lewis et al., 2005). This is a
departure from the standard scientific practice, with its emphasis on isolating (rather than including) control
mechanisms, but it also allows us to understand the terms by which reliability is defined.

Part of the issue here, of course, is that all these procedures are, in the end, readings of a text, and therefore
subject to the multi-dimensionality of audience interpretation. Neither form of analysis is any guarantee of the way
in which these texts are received, understood or used, which is why an understanding of media reception will
enrich a textual or content analysis.

THINKING BY NUMBERS 2: SURVEYS


The relationship of surveys to more qualitative approaches to research (such as ethnographic studies or focus
group interviews) is much like the relationship between content analysis and textual analysis. Cultural studies has
tended to prefer the more qualitative methods while seeing the more quantitative approaches as reductive. And
there is no doubt that a pre-coded questionnaire creates a highly contrived, tightly scripted conversation,
dominated by the concerns of the researcher rather than the respondent (Lewis, 2001). It is also clear that
qualitative methods are likely to reveal far more than survey questionnaires about the way culture works or is used
and understood.

What survey questionnaires allow is a way of understanding the extent and ideological specificities of cultural
practices and understandings - something that is generally beyond the reach of qualitative methods. This includes
some of what Robert McChesney calls the ‘big issues’ in media and cultural studies, such as an investigation of
which ideas and assumptions media foster or encourage and which they do not (McChesney, 1996).

So, for example, once we accept - on the basis of the available evidence - that media influence is neither
monolithic nor marginal, the question becomes one concerning the role of media in the nitty-gritty politics of
everyday history. In short, what role do the media play - right here, right now - in shaping the way we understand
the world?

As cultural studies, cultivation analysis and agenda-setting research all insist, the question of media influence
works in conjunction with an understanding of media content (Gerbner, Gross, Morgan and Signorielli, 1980, 1986;
Iyengar and Kinder, 1987). This relationship is symbiotic rather than linear, since our understanding of the
significance of media content - and thus what to look for - comes from an appreciation of how that content is
understood.

Indeed, part of the problem with survey-based audience research in the past has been its lack of engagement with
media textuality and content. Thus, in traditional effects research, the emphasis has often been on the way
outcomes - opinions or behaviours - may or may not be correlated with media consumption, without exploring the
nature of media content or how that content might be used (Lewis, 2001). There is, I suggested earlier, nothing
intrinsically wrong with this approach, but it does mean that attempts to fill in the gap between consumption and
outcomes are far more speculative than they might be.

So, for example, research on news suggests that, in the absence of specific knowledge about an issue, we are
informed less by specific programmes or narratives and more by oft-repeated templates, frameworks, associations
or juxtapositions (Iyengar, 1991; Kitzinger, 2000; Lewis, 2001). Once we have absorbed these frameworks and
associations, with only a limited stock of information available, we use them to understand the world in ways that
may go beyond their use in media texts. When we are asked to think about social causality (or, as Shanto Iyengar
puts it, who is responsible), we can sometimes reproduce a thematic link as a causal link.

In other words, the news media do not have to assert a causal relationship for one to be understood. So, for
instance, research suggests that the depletion of the ozone layer and climate change are often juxtaposed in media
discussions of the environment - as two man-made and hazardous forms of environmental change. When asked
about the primary cause of climate change, most people in Britain do not have specific knowledge of the mechanics
of the greenhouse effect, and therefore tend to assume that one is the cause of the other - that global warming is
a consequence of heat getting through the hole in the ozone layer (Hargreaves et al., 2003). We thereby use oft-
repeated associations as building blocks to make knowledge claims which may exceed anything found in media
content.

Similarly, the fact that many people in the USA assumed that Saddam Hussein was responsible for the attacks on
September 11th, 2001 (Kull, 2003) was not necessarily because they were told this was the case (although some
members of the US administration undoubtedly did propose such a link at various times), but may have been
because Iraq and the September 11th attacks were often simply juxtaposed in speeches and discussions about
terrorism. Audience activity is often assumed to be empowering, but in this instance it is simultaneously productive and reductive. It can also be predictable, rather than idiosyncratic and imaginative. Once we understand
how this works, we can use content analysis and surveys to trace the relationship between media content and
audience understanding.
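
A minimal sketch of what this tracing might involve, with invented figures: pair the frequency of a juxtaposition in a coded sample of coverage with the proportion of respondents making the corresponding knowledge claim, and check whether the two move together.

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Invented paired observations for five media-fostered claims:
# how often the relevant juxtaposition appeared per 100 coded items,
# and the share of survey respondents endorsing the claim.
juxtapositions_per_100_items = [2, 9, 15, 23, 31]
share_endorsing_claim = [0.08, 0.15, 0.24, 0.33, 0.41]

r = correlation(juxtapositions_per_100_items, share_endorsing_claim)
print(f"Pearson r = {r:.2f}")  # an association, not proof of influence
```

Even a strong correlation here would remain, in the terms used above, a matter of plausibility rather than proof.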

This is quite different from a simple transmission model of media influence, which assumes no audience activity
beyond the ability to reproduce the messages they receive. What questionnaires allow us to do is to explore, across
social groups, the informational context in which attitudes and opinions are formed (Lewis, 2001). Indeed, it is
when opinion pollsters stray into the realm of people's knowledge claims - rather than just their opinions - that
they are most likely to provide an insight into the relationship between media consumption and opinion. So, for
example, it seems plausible to link the negative campaign against asylum seekers in sections of the popular British
press (Speers, 2001) with increases in public concern about asylum seekers. But it was when Mori asked people
what they knew about asylum seekers that this relationship became more tangible. Mori found that most people
tended to significantly overstate the proportion of asylum seekers in Britain, as well as the amount they receive in
benefit (Mori, 2000).
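
The arithmetic behind such knowledge questions is simple enough to sketch. The figures below are invented placeholders rather than Mori's findings; the point is only the comparison between respondents' estimates and an official figure.

```python
from statistics import median

OFFICIAL_FIGURE = 0.3  # hypothetical 'true' percentage

# Invented respondent estimates of the same percentage.
estimates = [1, 2, 5, 10, 10, 15, 20, 25, 25, 30]

overstating = sum(e > OFFICIAL_FIGURE for e in estimates)
print(f"median estimate: {median(estimates)} (official figure: {OFFICIAL_FIGURE})")
print(f"{overstating}/{len(estimates)} respondents overstate the figure")
```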

We therefore have an informational context that favours the development of negative attitudes towards what is
perceived as liberal policy on this issue. It is not so much that sections of the press have foisted their opinions on
people, but that they have informed a climate in which certain opinions become more plausible. Thus we can see
that press coverage has tended to focus on the idea that both the number of asylum seekers and refugees and the state support given to them are excessive, and it is this information that appears to have informed public perceptions. This,
in turn, gives us a persuasive explanation of the relationship between media coverage and public opinion.

It is in this context that one of the most interesting data sets produced in recent years has come from the Program
on International Policy Attitudes at the University of Maryland. The focus of many of their studies has been to try to sketch out the knowledge claims and assumptions that inform public opinion in the USA: to look, for example,
at the different ways in which those who voted for Bush and those who voted for Kerry in the 2004 election
understood the world. This gives us a much closer glimpse of the way in which opinion formation works - and thus,
potentially, the precise nature of media influence - than the more opaque data sets produced by opinion polls.

GO FORTH AND QUANTIFY


In an episode of the BBC's satirical sitcom The Thick of It, a Minister and his political advisers decide to ‘focus
group’ (for this is a world where the noun has become a verb) a rather hastily developed policy proposal. The focus
group has a star performer - one who has appeared in many previous focus groups - a woman who is identified as
being the typical voice of ‘middle England’. Indeed, her ‘voice’ is seen as such an authentic expression of a key
demographic that the political advisers decide to dispense with the other members of the focus group entirely, and
interview the woman on her own. When she offers enthusiastic support for their proposal, they decide to proceed,
confident that it will have public support. Later it becomes apparent that she is, in fact, an actress and not a ‘real
person’ at all, leaving the policy process in disarray.

The episode is intended as a commentary on modern political life, but it also parodies a certain form of analysis, in
which the quantitative tools of social science are replaced by other, less methodical forms of expertise. This is
taken to apparently absurd limits when one woman is seen to represent an entire social group (‘middle England’), a
group that, because of its perceived political importance, is regarded as emblematic of the nation. In short, policy
is based on research with a sample of one.

It would be wrong, however, to see this as simply an argument for the use of representative samples. On the
contrary, part of the problem here is the way in which quantitative assumptions are used. The idea that there is a
key group of voters inhabiting - either literally or figuratively - a place called ‘middle England’ is based more on a
set of hunches than any sustained analysis. The forms of analysis best equipped to puncture the myth of ‘middle
England’ (or, for that matter, ‘middle America’) are, like the concept itself, quantitative. An analysis of opinion poll
data, for example, suggests that the notion of ‘middle England’ tends to highlight only those moments when majority opinion is moderate or conservative, while ignoring a number of instances where majorities favour more progressive political positions (Lewis et al., 2005).
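
The same kind of analysis can be sketched in a few lines: classify the majority position on each of a set of poll questions and count which way the majorities fall. The items and classifications below are invented, and the classification itself is, of course, an interpretive act.

```python
from collections import Counter

# Invented poll items: (question id, leaning of the majority position).
poll_majorities = [
    ("q01", "conservative"), ("q02", "progressive"),
    ("q03", "progressive"), ("q04", "conservative"),
    ("q05", "progressive"), ("q06", "progressive"),
]

tally = Counter(leaning for _, leaning in poll_majorities)
for leaning, count in tally.most_common():
    print(f"majorities {leaning}: {count}/{len(poll_majorities)}")
```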

The notion of ‘middle England’ has partly been constructed by those newspapers - such as the Daily Mail - which
claim to speak on its behalf. But this notion can only be contested by interrogating the statistical claims being
made, and recasting the meaning of collective opinions. Thus we might compare various collectivities - such as the
opinions of Daily Mail readers and the opinions consistently expressed by the newspaper.

In this way, quantitative data sets become part of a cultural analysis that tries to understand, at the level of
society and the broader media sphere, the ‘questions of power, subjectivity, identity and conflict’ that John Hartley
refers to (Hartley, 2002: p. 31). This means retaining and using, rather than abandoning, the insight provided by
qualitative forms of analysis, and refusing the essentialism that fixes theory to method.

As Bennett et al. suggest, ‘the problem for all social analysis is to find a way between the overwhelming, disparate,
chaotic … mass of particulars and the narrative of forces, causes and directions which constitutes the logic of social
explanation’ (Bennett et al., 1999: p. 258). Quantification is one important way to begin this process. For without
quantification there are no comparisons, no typicalities, no patterns, no probabilities. While we can observe social
forces without these things, it is hard to assess their significance. But, as with all forms of analysis, we must not be
seduced by the neatness of its lines or its structural solidity. The explanations it allows us are interpretative and
without guarantees.

NOTES
1 Although, of course, we need data - about campaign funding and so forth - to make this case - a point I shall return to.

2 Perhaps appropriately, this figure is incorrect - the BCS interviews 40,000 people.

REFERENCES
Barker, M. (2001) ‘The Newson Report: a case study in “common sense”’, in M. Barker and J. Petley (eds), Ill Effects. London: Routledge.

Barker, M. and Petley, J. (eds) (2001) Ill Effects. London: Routledge.

Bennett, T., Emmison, M. and Frow, J. (1999) Accounting for Tastes: Australian Everyday Cultures. Cambridge: Cambridge University Press.

Bourdieu, P. (1979) ‘Public opinion does not exist’, in A. Mattelart and S. Siegelaub (eds), Communication and Class Struggle. New York: International General.

Bourdieu, P. (1984) Distinction: A Social Critique of the Judgement of Taste. Cambridge, MA: Harvard University Press.

Buckingham, D. (2001) ‘Electronic child abuse? Rethinking the media's effect on children’, in M. Barker and J. Petley (eds), Ill Effects. London: Routledge.

Corner, J. (2000) ‘Influence: the contested core of media research’, in J. Curran and M. Gurevitch (eds), Mass Media and Society, 3rd edn. London: Edward Arnold, pp. 376–397.

Dodd, T., Nicholas, S., Povey, D. and Walker, A. (2004) Crime in England and Wales, 2003/2004. Home Office Statistical Bulletin 10/04. London: Home Office.

Dyer, G. (1982) Advertising as Communication. London: Methuen.

Edelman, M. (1964) The Symbolic Uses of Politics. Urbana: University of Illinois Press.

Edelman, M. (1988) Constructing the Political Spectacle. Chicago: University of Chicago Press.

Foucault, M. (1977) Discipline and Punish: The Birth of the Prison. New York: Pantheon.

Gamson, W. (1989) ‘News as framing’, American Behavioral Scientist, vol. 33, pp. 157–161.

Gamson, W. (1992) Talking Politics. Cambridge: Cambridge University Press.

Gauntlett, D. (1998) ‘Ten things wrong with the “effects” model’, in R. Dickinson, R. Harindranath and O. Linne (eds), Approaches to Audiences. London: Arnold.

Gauntlett, D. (2001) ‘The worrying influence of “media effects” studies’, in M. Barker and J. Petley (eds), Ill Effects. London: Routledge.

Gerbner, G., Gross, L., Morgan, M. and Signorielli, N. (1980) ‘The mainstreaming of America’, Journal of Communication, vol. 30.

Gerbner, G., Gross, L., Morgan, M. and Signorielli, N. (1986) ‘The dynamics of the cultivation process’, in J. Bryant and D. Zillmann (eds), Perspectives on Media Effects. Hillsdale, NJ: Erlbaum.

Gillespie, M. (1995) Television, Ethnicity and Cultural Change. London: Routledge.

Glasgow Media Group (1976) Bad News. London: Routledge & Kegan Paul.

Gray, J. (2005) Watching with The Simpsons: Television, Parody, and Intertextuality. London: Routledge.

Hacking, I. (1975) The Emergence of Probability. Cambridge: Cambridge University Press.

Hacking, I. (1990) The Taming of Chance. Cambridge: Cambridge University Press.

Hall, S. (1980) ‘Encoding/decoding’, in S. Hall et al. (eds), Culture, Media, Language. London: Hutchinson.

Hall, S. (1994) ‘Reflections upon the encoding/decoding model: an interview with Stuart Hall’, in J. Cruz and J. Lewis (eds), Reading, Viewing, Listening. Boulder: Westview, pp. 253–274.

Hall, S. (1996) ‘The problem of ideology: Marxism without guarantees’, in D. Morley and K.-H. Chen (eds), Stuart Hall: Critical Dialogues in Cultural Studies. London: Routledge.

Hargreaves, I., Lewis, J. and Speers, T. (2003) Towards a Better Map: Science, the Public and the Media. Economic and Social Research Council.

Hartley, J. (1996) Popular Reality: Journalism, Modernity, Popular Culture. London: Edward Arnold.

Hartley, J. (2002) ‘Textual analysis’, in T. Miller (ed.), Television Studies. London: BFI, p. 31.

Herbst, S. (1993) Numbered Voices. Chicago: University of Chicago Press.

Herman, E. (1999) The Myth of the Liberal Media. New York: Peter Lang.

Hills, M. (2002) Fan Cultures. London: Routledge.

Hirsch, P. (1980) ‘The “scary world” of the non-viewer and other anomalies - a re-analysis of Gerbner et al.'s findings on cultivation analysis, Part 1’, Communication Research, vol. 7, no. 4.

Hodge, B. and Tripp, D. (1986) Children and Television. Cambridge: Polity Press.

Hovland, C. (1959) ‘Reconciling conflicting results derived from experimental and survey studies of attitude change’, The American Psychologist, vol. 14, pp. 8–17.

Iyengar, S. (1991) Is Anyone Responsible? Chicago: University of Chicago Press.

Iyengar, S. and Kinder, D. (1987) News that Matters. Chicago: University of Chicago Press.

Jenkins, H. (1992) Textual Poachers: Television Fans and Participatory Culture. New York: Routledge.

Jhally, S. and Lewis, J. (1992) Enlightened Racism: The Cosby Show, Audiences, and the Myth of the American Dream. Boulder: Westview.

Kitzinger, J. (2000) ‘Media templates: patterns of association and the (re)construction of meaning over time’, Media, Culture and Society, vol. 22, no. 1, pp. 61–84.

Kritzer, H. (1996) ‘The data puzzle: the nature of interpretation in quantitative research’, American Journal of Political Science, vol. 40, no. 1, pp. 1–32.

Kull, S. (2003) Misperceptions, the Media and the Iraq War. Program on International Policy Attitudes, University of Maryland.

Kull, S. (2004) The Separate Realities of Bush and Kerry Supporters. Program on International Policy Attitudes, University of Maryland.

Kull, S., Ramsay, C. and Lewis, E. (2003) ‘Misperceptions, the media, and the Iraq War’, Political Science Quarterly, vol. 118, no. 4, pp. 570–598.

Larson, S. (2003) ‘Misunderstanding margin of error’, The Harvard Journal of Press/Politics, vol. 8, no. 1, pp. 66–80.

Lazarsfeld, P. and Katz, E. (1955) Personal Influence. Glencoe, IL: Free Press.

Lazarsfeld, P., Berelson, B. and Gaudet, H. (1944) The People's Choice. New York: Columbia University Press.

Leavis, F. R. (1930) Mass Civilization and Minority Culture. Cambridge: Cambridge University Press.

Lewis, J. (1990) Art, Culture and Enterprise: The Politics of Art and the Cultural Industries. London: Routledge.

Lewis, J. (1991) The Ideological Octopus: Explorations into the Television Audience. New York: Routledge.

Lewis, J. (1996) ‘What counts in cultural studies’, Media, Culture and Society, Winter, pp. 83–98.

Lewis, J. (2001) Constructing Public Opinion. New York: Columbia University Press.

Lewis, J. (2004) ‘Television, public opinion and the war in Iraq: the case of Britain’, International Journal of Public Opinion Research, vol. 16, no. 3, pp. 295–310.

Lewis, J. and Brookes, R. (2004) ‘How British television news represented the case for the war in Iraq’, in S. Allan and B. Zelizer (eds), Reporting War: Journalism in Wartime. London and New York: Routledge.

Lewis, J., Cushion, S. and Thomas, J. (2005) ‘Immediacy, convenience or engagement? An analysis of 24-hour news channels in the UK’, Journalism Studies, vol. 6, pp. 461–477.

Lewis, J., Inthorn, S. and Wahl-Jorgensen, K. (2005) Citizens or Consumers: The Media and the Decline in Civic Participation. McGraw-Hill.

Lewis, J., Morley, D. and Southwood, R. (1986) Art - Who Needs It? The Audience for Community Arts. London: Comedia.

Livingstone, S. (1998) Making Sense of Television. London: Routledge.

McChesney, R. (1996) ‘Is there any hope for cultural studies?’, Monthly Review, vol. 47, no. 10, pp. 1–18.

McChesney, R. (2000) Rich Media, Poor Democracy. Urbana: University of Illinois Press.

McCombs, M. (1981) ‘The agenda-setting approach’, in D. Nimmo and K. Saunders (eds), Handbook of Political Communication. California: Sage.

Moretti, F. (2003) ‘Graphs, maps, trees - abstract models for literary history’, New Left Review, no. 24.

Morgan, M., Lewis, J. and Jhally, S. (1992) ‘More viewing, less knowledge’, in H. Mowlana, G. Gerbner and H. Schiller (eds), Triumph of the Image: The Media's War in the Persian Gulf - A Global Perspective. Boulder: Westview Press.

Mori (2000) Are We an Intolerant Nation? http://www.mori.com/polls/2000/rd-july.shtml.

Morley, D. (1980) The Nationwide Audience. London: British Film Institute.

Morley, D. (1986) Family Television. London: Comedia.

Parry, R. (1992) Fooling America. New York: Morrow.

Philo, G. and Berry, M. (2004) Bad News from Israel. London: Pluto Press.

Rendell, S. and Broughel, T. (2003) ‘Amplifying officials, squelching dissent’, Extra!, May/June. http://www.fair.org/extra/0305/warstudy.html.

Salmon, C. and Glasser, T. (1995) ‘The politics of polling and the limits of consent’, in C. Salmon and T. Glasser (eds), Public Opinion and the Communication of Consent. New York: Guilford Press.

Signorielli, N. and Morgan, M. (1990) Cultivation Analysis. California: Sage.

Speers, T. (2001) Welcome or Over-reaction? Refugees and Asylum Seekers in the Welsh Media. Wales Media Forum.

van Dijk, T. (1988) News as Discourse. Hillsdale, NJ: Erlbaum.

van Dijk, T. (1991) ‘The interdisciplinary study of news as discourse’, in K. B. Jensen and N. Jankowski (eds), Qualitative Methodologies for Mass Communication Research. London: Routledge.

Welch, R. (2002) ‘Polls, polls, and more polls: an evaluation of how public opinion polls are reported in newspapers’, Press/Politics, vol. 7, no. 1, pp. 102–114.

Entry Citation:
Lewis, Justin. "Thinking by Numbers: Cultural Analysis and the Use of Data." The SAGE Handbook of Cultural Analysis. 2008. SAGE
Publications. 15 Oct. 2011. <http://www.sage-ereference.com/view/hdbk_culturanalysis/n31.xml>.
Chapter DOI: 10.4135/978-1-8486-0844-3.n31

© SAGE Publications, Inc.
