The Uses and Misuses of Polls
http://dx.doi.org/10.4135/9781848607910.n22
No matter what the issue is or the method for collecting attitudes about it, the mass
media have a critical role to play in conveying information about the nature of public
opinion to the majority of the population. This has historically been the case because
of the media's central location in the process of exchanging and communicating social
and political values as well as information. But the influence of news organizations
grew and became more important as they became producers as well as disseminators
of public opinion data. On the one hand, public opinion data are a means by which
the mass media can establish and maintain contact with their audience members, by
providing a conduit for the exchange of different perspectives and points of view as
well as an indication of how prevalent they are in society. News organizations have
always provided these kinds of links through standard reportorial techniques such as
the use of quotations from sources or ‘man in the street’ interviews, as well as through letters to
the editor. But the advent of public polling organizations and the dissemination of their
findings through the media—and eventually the establishment of independent media
polling operations—provided a more scientific and systematic way to collect and present
such information (Herbst, 1993; → The News Media's Use of Opinion Polls).
The range of inadequate data collection techniques is quite wide (→ Sampling). Most
commonly, it includes both data collected from biased or unrepresentative samples
as well as deliberate attempts to sway opinions through the use of biased question
wordings or orders. As examples of the former category, there are various forms of
biased, non-probability samples known as SLOP, or ‘self-selected listener opinion
polls,’ and CRAP, ‘computerized-response audience polls.’ In a probability sample,
every element in the population has a known, nonzero chance of being selected. Good
practice excludes the use of questionnaires inserted in newspapers or made available
on Web sites, for example, that permit anyone to answer (often as many times as they
like). Another bad practice is the representation of the views of participants in small
focus groups as the attitudes of the general public.
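The statistical consequence of self-selection can be illustrated with a simple simulation. The Python sketch below uses purely hypothetical numbers: it assumes that people who hold an intense minority opinion are four times as likely to respond to an open call-in or Web poll as everyone else. Under that assumption, the self-selected ‘sample’ badly overstates the prevalence of the opinion, while a probability sample of the same population does not.

```python
import random

random.seed(42)

# Hypothetical population: 30% hold opinion X, 70% do not.
population = [1] * 30_000 + [0] * 70_000

# Probability sample: every member has an equal, known chance of selection.
prob_sample = random.sample(population, 1_000)

# Self-selected "poll": assume opinion-X holders are four times as likely
# to bother responding (intensity drives participation, not representativeness).
self_selected = [p for p in population
                 if random.random() < (0.04 if p == 1 else 0.01)]

print("True population rate:      30.0%")
print(f"Probability sample:        {100 * sum(prob_sample) / len(prob_sample):.1f}%")
print(f"Self-selected respondents: {100 * sum(self_selected) / len(self_selected):.1f}%")
```

With these invented participation rates, the self-selected poll reports the minority view as being held by roughly six in ten respondents, double its true prevalence.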
Under some circumstances there are also structural problems with the collection
of opinions, deriving from other time-saving necessities and conventions of public
pollsters. Most public polling data are collected through the use of forced-choice (closed-ended)
questions in which the respondent is offered only two alternatives, not counting
a possible option to express ‘no opinion.’ Using closed-ended questions is generally
preferable to using open-ended questions, where the responses have to be coded for
analysis. But this dual closed-ended response option also conforms to the media's tendency
to report news in a dichotomous fashion (on the one hand, on the other hand) that often
oversimplifies the world. While adequate pretesting of questionnaires (including open-ended
questions) goes a long way toward ensuring that such questions are balanced and
cast in terms of the views that most respondents hold, the dual closed-ended response
option nevertheless constrains the ways that some respondents can offer their opinions.
One difficulty with this technique is that the respondent must answer in the frame
that the polling organization offers. These forced choices may reflect the division of
preferences among elites, for example, in terms of their assessments of feasible policy
alternatives, but they also may constrain the public's ability to express their true range of
preferences.
This raises additional questions about the impact of the reporting of public opinion
on the audience. The dissemination of public opinion data clearly has an impact
on subsequent opinion and behavior. While many public pollsters are reluctant to
acknowledge this fact, there is growing evidence from a range of academic studies that
knowledge of what others think or believe—or how those opinions are changing—has
an effect on an individual's opinions and behavior (→ The Effects of Published Polls
on Citizens). These impacts are not necessarily negative, as there is usually a positive
version of each of the phenomena that many critics see as negative; but the effects
are there nevertheless.
Advocacy groups also use poll results to publicize their issues and get them on the policy agenda for elites. This is especially problematical when
a press release or research report describes an issue in conceptual terms, but the
underlying or supporting data present problems of operationalization that come from
biased question wording or question order effects. If journalists cannot distinguish
between the conceptual and operational issues, they may inadvertently present poll
data as facts when their provenance is questionable.
The organization Mothers Against Drunk Driving (MADD) has advocated laws to reduce
traffic accidents related to alcohol consumption in the United States. One of their
major thrusts has been to have federal and state laws passed to reduce the amount of
blood alcohol concentration (BAC) that is permissible when driving a car. In 1997, they
reported results from a survey that indicated that seven in ten Americans supported a
reduction in the allowable BAC from 0.10% to 0.08%, including this claim in testimony
before Congress. That result was based upon a question that was clearly leading: its
reference to ‘scientific studies’ gives the impression that 0.08 would be
a safer alternative. Using this result as part of an effort to adopt the lower limit, MADD was
successful in their support of a federal law passed by Congress in 2000 that required
states to adopt the lower limit by 2004 in order to continue to receive federal highway
funds.
After the law took effect, a second survey they commissioned trumpeted the fact that
88% of licensed drivers said they supported the new federal law. However, the question
asked in this survey offered only these response options: strongly
support the law, support the law, oppose the law, or strongly oppose the
law.
The question wording does not refer to a federal law or its requirements. From the
response categories, it is unclear whether the respondents support the federal law or
the law in their state. The previous question in the survey asked the respondents if they
knew what the allowable legal BAC was in their state, but it did not tell them what it
was. In order for journalists to be able to decode these claims against the underlying
data, they have to know something about surveys and questionnaire design and how to
distinguish between the concept of ‘support for lowering the limit’ and the specific ways
that the concept was operationalized in a survey question.
Distinguishing a Difference
Almost all media reporting of poll data is based upon the results from a single cross-
sectional survey using one question at a time, often called ‘reporting the marginals.’ For
example, when a survey measures whether or not Americans approve or disapprove of
the job George W. Bush is doing as president, the resulting press release or news story
begins with an indication that 37% of those surveyed approve of the way he is handling
his job. A single proportion or percentage like that has a margin of error associated with
it due to sampling alone that is based upon the sample size. In a typical media poll with
a sample of 1,000 respondents, the statement is made that the ‘margin of error due
to sampling is plus or minus three percentage points.’ This means that we
would expect that 95 times out of 100, the population value for presidential approval lies
between 34% and 40% (37% ± 3%).
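For readers who want the arithmetic, the reported figure follows from the standard formula for the sampling error of a proportion. This minimal Python sketch reproduces the numbers in the example above; the ±3-point figure pollsters usually quote is the conservative value computed at p = 0.5.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error due to sampling for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.37, 1000  # 37% approval, typical media poll of 1,000 respondents
moe = margin_of_error(p, n)
print(f"MOE: +/- {100 * moe:.1f} points")                        # about +/- 3.0
print(f"95% CI: {100 * (p - moe):.0f}% to {100 * (p + moe):.0f}%")  # 34% to 40%
```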
This has two important analytical consequences for a data analyst or journalist. The first
is the ability to state with confidence that more people disapprove of Bush's handling
of his job than approve, i.e. that 37% is statistically significantly different from 63%. A
second issue that preoccupies many analysts is the ability to make a statement that
a ‘majority of Americans’ disapprove of the president's handling of his job, in effect
a statement that 37% is statistically significantly different from 50%. This concern
about majorities comes from living in a democratic society where a commonly shared
assumption is that ‘the majority rules.’
While these issues seem quite straightforward, they may get complicated when either
the size of a sample or the differences in the proportions gets relatively small. When the
public is evenly divided on an issue, it will take quite a large sample to distinguish with
confidence a difference between 53% and 47%. And as the sample size decreases, the
margin of error increases; so relatively small differences that are statistically significant
in a sample of size 1,000 will not be in a sample of size 500. While journalists may
not frequently encounter sample sizes as small as 500, the size of subsamples
for men and women in a national sample of size 1,000 would normally approximate
that. So the confidence intervals around estimates in subsamples are larger than the
confidence intervals around estimates from the full sample. In general, it is a good idea
for journalists not to report on differences that are less than 5 percentage points when
sample sizes are in the range of 750 to 1,000 respondents; and the differences must be
greater for smaller subsample sizes.
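A short calculation makes the point concrete. The Python sketch below applies the same margin-of-error formula to show how sampling error grows as samples shrink, and how large a sample is needed before a 53%-47% division can be distinguished from an even split; the figures are illustrative, not a substitute for a proper significance test.

```python
import math

def moe(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Conservative 95% margin of error (computed at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 750, 500):
    print(f"n = {n:4d}: MOE = +/- {100 * moe(n):.1f} points")

# For complementary shares (e.g. 53% vs. 47%), the 53% estimate must differ
# from 50% by more than the MOE; solve z * sqrt(0.25 / n) < 0.03 for n.
n_needed = math.ceil((1.96 * 0.5 / 0.03) ** 2)
print(f"Sample needed to distinguish 53% from 47%: about {n_needed}")
```

The margin of error is roughly ±3.1 points at n = 1,000 but ±4.4 points at n = 500, and distinguishing 53% from 47% with confidence requires well over 1,000 respondents.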
A common source of error in describing opinion change comes from the ‘cross sectional
fallacy,’ in which comparisons are made in the distribution of opinions derived from
asking the same question of independent samples drawn from the same population
at two different points in time. The aggregate differences are assumed to represent
all of the change that has taken place among the individuals in the population being
sampled. If 40% of the sample supported Policy A in the first survey and 60% supported
the policy in the second survey, journalists often describe a 20-percentage point shift as
having taken place. But this almost certainly underestimates the total amount of shifting
opinion. Some people who supported the policy in the first survey may subsequently
have opposed it or become undecided; and there may have been offsetting shifts
among those who initially opposed [p. 236 ↓ ] the policy. The true assessment of
changes in opinion can only be made through the use of panel designs, in which the
same respondents are asked the same questions at two or more points in time (→
Panel Surveys).
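A hypothetical panel makes the distinction between net and gross change explicit. In the sketch below, invented transition counts produce exactly the 40%-to-60% aggregate shift described above, yet twice that share of respondents actually changed positions.

```python
# Hypothetical panel of 1,000 respondents asked the same question twice.
# Wave 1: 40% support; Wave 2: 60% support -- a 20-point net shift.
# A panel design reveals the gross (individual-level) change underneath.

# Keys: (wave-1 position, wave-2 position); values: respondent counts.
transitions = {
    ("support", "support"): 300,   # stayed supportive
    ("support", "oppose"):  100,   # defected from support
    ("oppose",  "support"): 300,   # converted to support
    ("oppose",  "oppose"):  300,   # stayed opposed
}

total = sum(transitions.values())
wave1_support = sum(v for (w1, _), v in transitions.items() if w1 == "support")
wave2_support = sum(v for (_, w2), v in transitions.items() if w2 == "support")
changers = sum(v for (w1, w2), v in transitions.items() if w1 != w2)

print(f"Wave 1 support: {100 * wave1_support / total:.0f}%")    # 40%
print(f"Wave 2 support: {100 * wave2_support / total:.0f}%")    # 60%
print(f"Net shift:      {100 * (wave2_support - wave1_support) / total:.0f} points")
print(f"Gross change:   {100 * changers / total:.0f}% of respondents moved")
```

Here the net shift is 20 points, but 40% of the panel changed sides; a cross-sectional comparison would report only the former.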
It is also important that comparisons are made between the same operational definition
of a concept at multiple points in time. Sometimes analysts write in the language of
concepts, ignoring the fact that different operational definitions were used at different
points in time. It is no wonder, then, that journalists confuse methodological artifact
for change. For example, Smith (1987) reviewed the analysis of social and behavioral
science research reported by the Attorney General's Commission on Pornography
(1986) and found it seriously lacking. Public attitudes were a relevant and important
topic for consideration by the Commission because recent court cases suggested
the application of ‘contemporary community standards’ as an appropriate criterion for
assessing whether something is pornographic or not. And survey research seems the
obvious scientific method for measuring contemporary community standards in a valid
manner.
Among the many examples he cites from the report, Smith discusses a conclusion that
exposure to printed pornographic material had increased between 1970 and 1985. This
was based upon analysis of the responses to two differently worded questions. The
1970 question read: ‘During the past year, have you seen or read a magazine which
you regarded as pornographic?’
One of the most frequent issues in reporting change has to do with tracking polls, those
daily measures of candidate standing that are taken near the end of a campaign. In
the last days of a campaign, reporters are often looking for any signs of tightening in a
contest, so the reporting usually focuses on change. However, there are methodological
issues associated with tracking polls that make reporting on change problematical. In
the first place, most tracking polls involve rolling cross-sections in which a small number
of interviews, typically ranging from 100 to 150, are taken every day; and the results are
combined across three days. As a consequence, the reported results are based upon
small samples ranging from 300 to 450 interviews.
A second conceptual issue is that the change from one ‘day's’ results to the next
involves counting some of the same respondents twice. That is, the data reported
on Monday will usually include combined responses from people interviewed on
Friday, Saturday, and Sunday. The results reported on Tuesday will combine responses
from people interviewed on Saturday, Sunday, and Monday, so two of the three days of
interviews are shared between the two reports.
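The overlap is easy to see in a sketch. Assuming hypothetical daily sample sizes in the typical 100-150 range, the following Python fragment shows that roughly two-thirds of the interviews behind Tuesday's reported result were already counted in Monday's.

```python
# Sketch of a rolling cross-section: ~100-150 interviews per day, reported
# as three-day moving windows, so consecutive reports share two days of data.

daily_interviews = {"Fri": 150, "Sat": 140, "Sun": 145, "Mon": 155}
days = list(daily_interviews)

def window(end_index: int) -> list[str]:
    """The three interview days pooled for the report ending on this day."""
    return days[end_index - 2 : end_index + 1]

monday_report = window(2)    # Fri + Sat + Sun
tuesday_report = window(3)   # Sat + Sun + Mon

shared = set(monday_report) & set(tuesday_report)
shared_n = sum(daily_interviews[d] for d in shared)
tuesday_n = sum(daily_interviews[d] for d in tuesday_report)

print(f"Monday's report pools:  {monday_report}")
print(f"Tuesday's report pools: {tuesday_report}")
print(f"Shared interviews: {shared_n} of {tuesday_n} "
      f"({100 * shared_n / tuesday_n:.0f}%), so day-to-day 'change' is muted")
```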
Different news organizations can also reach different conclusions about the same
events when their polls ask different questions. For example, CBS News and the New
York Times, based on their own poll, reported on public evaluations of the impact of
the Enron scandal quite differently than CNN did using a specially commissioned Gallup
poll. CBS produced a story on January 27, 2002 under the headline ‘Poll finds Enron's
taint clings more to G.O.P. than Democrats.’ CNN/USA Today reported on January 27,
2002 the results from a Gallup Poll they commissioned under the headline ‘Bush gets
benefit of doubt from public in latest poll.’ So was Enron hurting the Democrats or the
Republicans more? The answer lies in a comparison of the specific questions asked.
The CBS News/New York Times poll asked:
From what you know so far, do you think executives of the Enron
Corporation had closer ties to the Republican Party or closer ties to the
Democratic Party?
When CNN/USA Today released their results, they reported, based on three questions
in their poll, that Americans felt the Enron Corporation was involved with both the
Republicans and the Democrats in Congress, and that the public did not link Enron to
members of the Bush administration any more than to the Democrats in Congress.
Americans generally do not pay a great deal of attention to corporate scandals; only
23% in the Gallup sample were following the story ‘very closely’ for example. So
this would be an instance of an issue rising rapidly to public visibility when citizens
are unlikely to have strongly held prior views. But what were readers and viewers of
these poll results to believe—did Americans hold the Republicans more liable than the
Democrats?
During the ‘Reagan revolution,’ political scientists and historians, as well as political
activists and strategists, seriously debated whether the United States was undergoing
another partisan realignment, that is, a fundamental and durable shift in the relative
distribution of people who identify themselves as Democrats and Republicans. This is a
biennial debate that occurs most frequently around national elections, and it has political
consequences for how the policy initiatives of the president and Congress are framed in
terms of their public support.
The basic data source that fuels the controversy is a time series of survey-based
estimates of party identification. In the United States, differences in partisanship have
long been observed between samples of successively more restricted populations
of adults: those who are registered to vote, those who indicate they are likely to vote
in a given election, and those who actually go to the polls become increasingly more
Republican. Borrelli, Lockerbie, and Niemi (1987) analyzed 51 poll results between
1980 and 1984 in evaluating three possible causes of these variations: (1) the relative
effects of the timing of the polls in relation to Election Day, (2) differences in the
population sampled, and (3) the wording of the partisanship question. Interviewing
close to Election Day was strongly related to the size of the relationship between voting
behavior and identification, either because respondents bring their identifications into
consonance with their expected votes or because they misinterpret the question as
inquiring about their intended votes. The main question wording difference involved
references to partisanship ‘as of today,’ ‘generally speaking’ or ‘usually,’ or ‘regardless
of how you may vote.’
The results showed that all of these predictors were statistically significant and operated
in the expected direction; and they explained a large proportion of the variance (78%)
in the difference in the proportion of identifiers with the two major parties. The purpose
of the analysis was not to corroborate whether or not a realignment was taking place
in this period, as measured by changes in basic attitudes toward the political parties.
Rather it was to alert analysts interested in this question (including journalists) to the
potential effect of methodological issues on substantive interpretations. The distribution
of partisanship is a function of who gets interviewed and when, and what questions
they are asked. It turns out that debates about realignment that take place around
election results reflect a peculiarity of the measurement of the very phenomenon itself in
conjunction with impending voting behavior. And post-election surveys showed that no
durable realignment occurred.
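The form of such an analysis can be sketched in code. The Python example below fits an ordinary least squares regression to synthetic poll data; the variable names, effect sizes, and noise level are invented for illustration and are not Borrelli, Lockerbie, and Niemi's data, but the setup mirrors their three predictors and yields a share of explained variance in the neighborhood of their reported 78%.

```python
import numpy as np

rng = np.random.default_rng(0)
n_polls = 51  # Borrelli, Lockerbie, and Niemi analyzed 51 published polls

# Synthetic predictors (invented for illustration):
days_to_election = rng.uniform(0, 300, n_polls)   # timing of the poll
likely_voters = rng.integers(0, 2, n_polls)       # 1 = restricted (likely-voter) sample
as_of_today = rng.integers(0, 2, n_polls)         # 1 = "as of today" question wording

# Simulate the Democratic-minus-Republican identification gap, with effects
# in the directions the published analysis reports, plus noise.
gap = (4.0
       + 0.01 * days_to_election   # farther from Election Day: larger Dem edge
       - 3.0 * likely_voters       # more restricted populations: more Republican
       - 2.0 * as_of_today         # "as of today" wording tracks intended vote
       + rng.normal(0, 1.0, n_polls))

# Ordinary least squares fit of the gap on the three methodological predictors.
X = np.column_stack([np.ones(n_polls), days_to_election, likely_voters, as_of_today])
beta, residuals, *_ = np.linalg.lstsq(X, gap, rcond=None)
r_squared = 1 - residuals[0] / np.sum((gap - gap.mean()) ** 2)

print("coefficients (intercept, timing, population, wording):", beta.round(2))
print(f"share of variance explained: {r_squared:.2f}")
```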
For news organizations, the availability of polling data related to the content of a story
increases the likelihood that journalists will see the information as newsworthy and run the story.
Although many journalists may be familiar with sources of information about polls,
almost none are familiar with sources of other information they can use to corroborate
data from a poll they are evaluating. This information is available from data archives
that contain substantial holdings of public opinion data in various forms, as well as
some holdings on public opinion in many other countries (→ Archiving Poll Data). They
provide critical information that permits journalists to make independent assessments
of the reliability and validity of public opinion data collected or reported by others. Some
of these archives specialize in polls conducted by and for media organizations, while
others contain long-term trend data collections from academic survey organizations.
This information can come in any one of three different formats: data summaries in the
form of tables and charts from individual or similar polls or surveys; question databases
that provide aggregate results (frequency distributions) from topical questions used by
a variety of polling sources; and the actual computerized data files that allow users to
perform their own statistical manipulations. Access to most of these sources is available
online, and through subscriptions to services like Lexis-Nexis to which virtually every
political reporter has access.
Conclusions
Consumers of poll results rely upon journalists to select and report poll results and
public opinion data in a way that provides accurate information and a context for
interpretation. Accurate information comes from good data that are appropriately
analyzed and interpreted. Context provides a basis for understanding that often
involves a comparison with other questions recently asked on the same topic, previous
administrations of the same question, or analysis of relevant subgroups in the sample.
Beyond the preparation of stories based upon the poll results, packages of stories can
be produced that relate the poll findings to interviews with or stories about ‘real people’
who hold the same views or behave in the same way.
References
Attorney General's Commission on Pornography (1986). Final report. Washington,
DC: Government Printing Office.
Borrelli, S., Lockerbie, B., & Niemi, R. G. (1987). Why the Democrat-Republican
partisanship gap varies from poll to poll. Public Opinion Quarterly, 51, 115–119.
Campbell, A., Converse, P. E., Miller, W. E., & Stokes, D. E. (1960). The American
voter. New York: John Wiley & Sons.
Herbst, S. (1993). Numbered voices: How opinion polling has shaped American
politics. Chicago: University of Chicago Press.
Smith, T. W. (1987). The use of public opinion data by the Attorney General's
Commission on Pornography. Public Opinion Quarterly, 51, 249–267.
Traugott, M. W., & Lavrakas, P. J. (2004). The voter's guide to election polls (3rd ed.).
Lanham, MD: Rowman & Littlefield.
Traugott, M. W., & Powers, E. (2000). Did public opinion support the Contract with
America? In P. J. Lavrakas & M. W. Traugott (Eds.), Election polls, the news media,
and democracy (pp. 93–110). New York: Seven Bridges Press.