

The SAGE Handbook of Public Opinion Research


21 The Uses and Misuses of Polls

Contributors: Michael W. Traugott


Editors: Wolfgang Donsbach & Michael W. Traugott
Book Title: The SAGE Handbook of Public Opinion Research
Chapter Title: "21 The Uses and Misuses of Polls"
Pub. Date: 2008
Access Date: November 23, 2015
Publishing Company: SAGE Publications Ltd
City: London
Print ISBN: 9781412911771
Online ISBN: 9781848607910
DOI: http://dx.doi.org/10.4135/9781848607910.n22
Print pages: 232-240
©2008 SAGE Publications Ltd. All Rights Reserved.
This PDF has been generated from SAGE Research Methods. Please note that the
pagination of the online version will vary from the pagination of the print book.

21 The Uses and Misuses of Polls


Michael W. Traugott

No matter what the issue is or the method for collecting attitudes about it, the mass
media have a critical role to play in conveying information about the nature of public
opinion to the majority of the population. This has historically been the case because
of the media's central location in the process of exchanging and communicating social
and political values as well as information. But the influence of news organizations
grew and became more important as they became producers as well as disseminators
of public opinion data. On the one hand, public opinion data are a means by which
the mass media can establish and maintain contact with their audience members, by
providing a conduit for the exchange of different perspectives and points of view as
well as an indication of how prevalent they are in society. News organizations always
provided these kinds of links through standard reportorial techniques such as the use
of quotations from sources or ‘man in the street’ interviews, as well as through letters to
the editor. But the advent of public polling organizations and the dissemination of their
findings through the media—and eventually the establishment of independent media
polling operations—provided a more scientific and systematic way to collect and present
such information (Herbst, 1993; → The News Media's Use of Opinion Polls).

The role of journalists as intermediaries in transmitting public opinion information to a
mass audience is critical, because the general public operates essentially on faith that
the information that they read or view or hear is accurate and reliable. At the same time
that people have a strong interest in what their fellow citizens think about important
issues of the day—or even about minor elements of current events—they are by and
large completely ignorant of the details of polling methodology. When they are told
what ‘the public thinks’ about a certain issue, they generally accept this statement
as fact. The vast majority of citizens do not have the skills to dissect and evaluate
such information and no way to form an independent judgment about its reliability and
validity, as there are few places to turn for such guidance (Traugott & Lavrakas, 2004).



The Impact of Methodology


The foundation of good reporting on public opinion is good data. Accurate poll data
rest upon: (1) probability samples that permit inferences back to the underlying
population; (2) well-written questionnaires that produce unbiased measures of attitudes
and behavior; (3) appropriate analysis; and (4) interpretations that do not exceed the
limitations of all of the foregoing elements.

The range of inadequate data collection techniques is quite wide (→ Sampling). Most
commonly, it includes both data collected from biased or unrepresentative samples
as well as deliberate attempts to sway opinions through the use of biased question
wordings or orders. As examples of the former category, there are various forms of
biased, non-probability samples known as SLOP, or ‘self-selected listener opinion
polls,’ and CRAP, ‘computerized-response audience polls.’ In a probability sample,
every element in the population has a known, nonzero chance of being selected. Good
practice excludes the use of questionnaires inserted in newspapers or made available
on Web sites, for example, that permit anyone to answer (often as many times as they
like). Another bad practice is the representation of the views of participants in small
focus groups as the attitudes of the general public.
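The defining property of a probability sample can be made concrete in a few lines of code. The sketch below is our own illustration (the function and frame names are hypothetical, not from this chapter): in a simple random sample, every member of the frame has the same known, nonzero inclusion probability n/N, which is exactly what SLOP and CRAP "polls" lack.

```python
import random

def simple_random_sample(frame, n, seed=0):
    """Draw a simple random sample: each frame member has inclusion probability n/N."""
    rng = random.Random(seed)  # fixed seed only so the illustration is reproducible
    return rng.sample(frame, n)

frame = list(range(10_000))                    # a hypothetical population frame
sample = simple_random_sample(frame, 1_000)
inclusion_prob = len(sample) / len(frame)      # known in advance for every member
print(inclusion_prob)                          # 0.1
```

By contrast, no inclusion probability can be computed for a self-selected call-in or Web poll, so no inference back to the population is possible.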

Under some circumstances there are also structural problems with the collection
of opinions, deriving from other time-saving necessities and conventions of public
pollsters. Most public polling data are collected through the use of forced-choice
(closed-ended) questions in which the respondent is offered only two alternatives, not
counting a possible option to express 'no opinion.' Using closed-ended questions is
generally preferable to using open-ended questions, where the responses have to be
coded for analysis. But this dual closed-ended response option also conforms to the
media's tendency to report news in a dichotomous fashion (on the one hand, on the
other hand) that often oversimplifies the world. While adequate pretesting of
questionnaires (including open-ended questions) goes a long way toward ensuring that
such questions are balanced and cast in terms of the views that most respondents
hold, the dual closed-ended response option nevertheless constrains the ways that
some respondents can offer their opinions.


One difficulty with this technique is that the respondent must answer in the frame
that the polling organization offers. These forced choices may reflect the division of
preferences among elites, for example, in terms of their assessments of feasible policy
alternatives, but they also may constrain the public's ability to express their true range of
preferences.

This raises additional questions about the impact of the reporting of public opinion
on the audience. The dissemination of public opinion data clearly has an impact
on subsequent opinion and behavior. While many public pollsters are reluctant to
acknowledge this fact, there is growing evidence from a range of academic studies that
knowledge of what others think or believe—or how those opinions are changing—has
an effect on an individual's opinions and behavior (→ The Effects of Published Polls
on Citizens). These impacts are not necessarily negative, as there is usually a positive
version of each of the phenomena that many critics see as negative; but the effects
are there nevertheless.

Problems of Reporting on Public Opinion


Journalists face a number of problems when reporting on public opinion. Some come
from difficulties they have with statistical concepts; others come from a lack of training
in survey methods. An understanding of both issues is critical for deciding how to write
about anyone's data, even if they were collected by the news organization where the
journalist works. Many reporters are offered survey data by individuals or organizations
who believe that current measures of public opinion will increase the likelihood that
their point of view becomes newsworthy. In this case, reporters may face a special
problem: that of trying to validate the information in the same way that they
would check their facts in a more traditional story. In each instance, there are ways that
journalists can be encouraged to ‘get it right.’

Interest Groups that Want to Press a Point


Many interest groups believe that poll data that support their position will increase the
likelihood that news organizations will produce stories that cover their interests, thus


getting them on the policy agenda for elites. This is especially problematical when
a press release or research report describes an issue in conceptual terms, but the
underlying or supporting data present problems of operationalization that come from
biased question wording or question order effects. If journalists cannot distinguish
between the conceptual and operational issues, they may inadvertently present poll
data as facts when their provenance is questionable.

The organization Mothers Against Drunk Driving (MADD) has advocated laws to reduce
traffic accidents related to alcohol consumption in the United States. One of their
major thrusts has been to have federal and state laws passed to reduce the amount of
blood alcohol concentration (BAC) that is permissible when driving a car. In 1997, they
reported results from a survey that indicated that seven in ten Americans supported a
reduction in the allowable BAC from 0.10% to 0.08%, including this claim in testimony
before Congress. That result was based upon the following question, which is clearly
leading:

Today, most states define intoxicated driving at 0.10% blood alcohol
content, yet scientific studies show that virtually all safe driving skills
are impaired at 0.08. Would you be in favor of lowering the legal blood
alcohol limit for drivers to 0.08?

The reference to ‘scientific studies’ clearly gives the impression that 0.08 would be
a safer alternative. Using this as part of an effort to adopt the lower limit, MADD was
successful in their support of a federal law passed by Congress in 2000 that required
states to adopt the lower limit by 2004 in order to continue to receive federal highway
funds.

After the law took effect, a second survey they commissioned trumpeted the fact that
88% of licensed drivers said they supported the new federal law. However, the question
that was asked in this survey was:

According to the National Highway Traffic Safety Administration, 0.08%
blood alcohol concentration is the illegal drunk driving limit in all 50
states and the District of Columbia. Please tell me if you strongly

support the law, support the law, oppose the law, or strongly oppose the
law.

The question wording does not refer to a federal law or its requirements. From the
response categories, it is unclear whether the respondents support the federal law or
the law in their state. The previous question in the survey asked the respondents if they
knew what the allowable legal BAC was in their state, but it did not tell them what it
was. In order for journalists to be able to decode these claims against the underlying
data, they have to know something about surveys and questionnaire design and how to
distinguish between the concept of ‘support for lowering the limit’ and the specific ways
that the concept was operationalized in a survey question.

Distinguishing a Difference
Almost all media reporting of poll data is based upon the results from a single cross-
sectional survey using one question at a time, often called ‘reporting the marginals.’ For
example, when a survey measures whether or not Americans approve or disapprove of
the job George W. Bush is doing as president, the resulting press release or news story
begins with an indication that 37% of those surveyed approve of the way he is handling
his job. A single proportion or percentage like that has a margin of error associated with
it due to sampling alone that is based upon the sample size. In a typical media poll with
a sample of 1,000 respondents, the statement is made that the ‘margin of error due
to sampling is plus or minus three percentage points.' This means that we
would expect that 95 times out of 100, the population value for presidential approval lies
between 34% and 40% (37% ± 3%).
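The ±3-point figure can be reproduced from the normal-approximation formula for a sampled proportion. The sketch below assumes a simple random sample; the function name is our own, not a standard library call.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error due to sampling for proportion p, sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.37, 1000                    # 37% approval from a 1,000-person sample
moe = margin_of_error(p, n)
print(round(moe * 100, 1))           # ~3.0 percentage points
print(round((p - moe) * 100), round((p + moe) * 100))   # interval of 34 to 40
```

Note that the margin is largest when p is near 0.5, which is why "plus or minus three points" is the conservative blanket figure usually quoted for n = 1,000.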

This has two important analytical consequences for a data analyst or journalist. The first
is the ability to state with confidence that more people disapprove of Bush's handling
of his job than approve, i.e. that 37% is statistically significantly different from 63%. A
second issue that preoccupies many analysts is the ability to make a statement that
a ‘majority of Americans’ disapprove of the president's handling of his job, in effect
a statement that 37% is statistically significantly different from 50%. This concern
about majorities comes from living in a democratic society where a commonly shared
assumption is that ‘the majority rules.’


While these issues seem quite straightforward, they may get complicated when either
the size of a sample or the differences in the proportions gets relatively small. When the
public is evenly divided on an issue, it will take quite a large sample to distinguish with
confidence a difference between 53% and 47%. And as the sample size decreases, the
margin of error increases; so relatively small differences that are statistically significant
in a sample of size 1,000 will not be in a sample of size 500. While journalists may
not frequently encounter sample sizes as small as 500, the size of subsamples
for men and women in a national sample of size 1,000 would normally approximate
that. So the confidence intervals around estimates in subsamples are larger than the
confidence intervals around estimates from the full sample. In general, it is a good idea
for journalists not to report on differences that are less than 5 percentage points when
sample sizes are in the range of 750 to 1,000 respondents; and the differences must be
greater for smaller subsample sizes.
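The interplay of sample size and detectable differences can be illustrated with a simple z-test of an observed split against an even 50/50 division. This test is our own construction using the normal approximation, not a procedure prescribed in the chapter; it shows why a 53/47 split cannot be called a majority with a modest sample.

```python
import math

def differs_from_half(p, n, z_crit=1.96):
    """Can an observed proportion p from n respondents be distinguished from 0.5?"""
    se = math.sqrt(0.5 * 0.5 / n)     # standard error under the 50/50 null
    return abs(p - 0.5) / se > z_crit

print(differs_from_half(0.53, 500))    # False: subsample too small to call it
print(differs_from_half(0.53, 1100))   # True: a larger sample resolves the split
```

At n = 500 (roughly the size of a men-only or women-only subsample in a national poll of 1,000), a 53/47 division is statistically indistinguishable from an even split.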

The Problem of Reporting on Change


The extension of distinguishing a simple difference in a single survey is trying to
distinguish whether a proportion or percentage is different when the same question is
asked in two different surveys. Journalists frequently have a problem interpreting survey
results when they involve such a description of change. One reason is they cannot
distinguish between different types and causes of ‘change.’ Some are methodological
artifacts of measurement or differences in the conceptualization of change. Others
come from aggregation effects that make opinions appear to be more stable than they
really are, because of counterbalancing trends. Some reports suggest that everyone
has changed a little, when in fact the change has been localized in specific subgroups in
the population. And sometimes the apparent lack of change in the entire sample masks
significant but compensating changes in subgroups. For example, the distribution of
party identification in the United States has remained virtually unchanged since 1952.
But this apparent stability was the result of a growing Democratic allegiance among
Blacks, offset by a movement toward the Republican Party among southern whites,
as well as a movement of men toward the Republican Party and women toward the
Democrats.


A common source of error in describing opinion change comes from the ‘cross sectional
fallacy,’ in which comparisons are made in the distribution of opinions derived from
asking the same question of independent samples drawn from the same population
at two different points in time. The aggregate differences are assumed to represent
all of the change that has taken place among the individuals in the population being
sampled. If 40% of the sample supported Policy A in the first survey and 60% supported
the policy in the second survey, journalists often describe a 20-percentage point shift as
having taken place. But this almost certainly underestimates the total amount of shifting
opinion. Some people who supported the policy in the first survey may subsequently
have opposed it or become undecided; and there may have been offsetting shifts
among those who initially opposed the policy. The true assessment of
changes in opinion can only be made through the use of panel designs, in which the
same respondents are asked the same questions at two or more points in time (→
Panel Surveys).
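A small simulation makes the cross-sectional fallacy concrete. All of the numbers below are invented for illustration: a net 20-point shift between two waves coexists with twice as much change at the individual level, which only a panel design would reveal.

```python
# A hypothetical panel of 100 respondents asked the same question twice.
wave1 = ['support'] * 40 + ['oppose'] * 60            # 40% support at time 1

# Suppose 10 of the original supporters defect while 30 of the original
# opponents convert -- the aggregate moves to 60% support at time 2.
wave2 = (['support'] * 30 + ['oppose'] * 10           # the 40 original supporters
         + ['support'] * 30 + ['oppose'] * 30)        # the 60 original opponents

net_change = wave2.count('support') - wave1.count('support')
gross_change = sum(a != b for a, b in zip(wave1, wave2))
print(net_change)     # 20: the shift a cross-sectional comparison would report
print(gross_change)   # 40: the individual-level change only a panel detects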

It is also important that comparisons are made between the same operational definition
of a concept at multiple points in time. Sometimes analysts write in the language of
concepts, ignoring the fact that different operational definitions were used at different
points in time. It is no wonder, then, that journalists confuse methodological artifact
for change. For example, Smith (1987) reviewed the analysis of social and behavioral
science research reported by the Attorney General's Commission on Pornography
(1986) and found it seriously lacking. Public attitudes were a relevant and important
topic for consideration by the Commission because recent court cases suggested
the application of ‘contemporary community standards’ as an appropriate criterion for
assessing whether something is pornographic or not. And survey research seems the
obvious scientific method for measuring contemporary community standards in a valid
manner.

Among the many examples he cites from the report, Smith discusses a conclusion that
exposure to printed pornographic material had increased between 1970 and 1985. This
was based upon analysis of the responses to these two questions:

1970: ‘During the past year, have you seen or read a magazine which
you regarded as pornographic?’


1985: ‘Have you ever read Playboy magazine or Penthouse magazine?’

The Commission concluded that exposure to sexually explicit magazines increased
substantially across this fifteen-year period because 20% reported seeing or reading a
pornographic magazine in 1970 while ‘(i)n contrast, two-thirds of the 1985 respondents
had read Playboy or Penthouse at some time’ (p. 913). Even though the report hedges
on the interpretation of this difference, the point is that this interpretation should never
have been made at all.

A common problem in designing questions is that of setting the time frame as a
reference for a respondent, and differences in phraseology often lead to different
levels of reported behavior. The earlier question asked for a report of a behavior that
might have occurred ‘during the past year,’ while the time reference for the 1985
question was ‘ever.’ More importantly, the first question left the interpretation of what
constitutes ‘pornographic’ up to the respondent; and there certainly would be a wide
variation in associated meanings, including whether the respondents had either Playboy
or Penthouse in mind when they answered. In the second question, the concept of
‘pornography’ was operationalized by referring to two specific magazines. As a result
of these problems, it is impossible to know if there is any reasonable interpretation of
observed differences over time at all.

One of the most frequent issues in reporting change has to do with tracking polls, those
daily measures of candidate standing that are taken near the end of a campaign. In
the last days of a campaign, reporters are often looking for any signs of tightening in a
contest, so the reporting usually focuses on change. However, there are methodological
issues associated with tracking polls that make reporting on change problematical. In
the first place, most tracking polls involve rolling cross-sections in which a small number
of interviews, typically ranging from 100 to 150, are taken every day; and the results are
combined across three days. As a consequence, the reported results are based upon
small samples ranging from 300 to 450 interviews.

A second conceptual issue is that the change from one ‘day's’ results to the next
involves counting some of the same respondents twice. That is, the data reported
on Monday will usually include combined responses from people interviewed on
Friday, Saturday, and Sunday. The results reported on Tuesday will include combined


responses from people interviewed on Saturday, Sunday, and Monday. So two-thirds
of the data are the same, while all of the difference is essentially due to
the exclusion of the Friday respondents and the addition of the Monday respondents.
The one-day sample sizes are even smaller than the combined total, with even larger
margins of error to account for. Almost any difference that appears will be statistically
insignificant.
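The overlap in a rolling cross-section is easy to see with illustrative daily interview counts (the figures below are hypothetical, chosen within the chapter's 100-to-150-per-day range):

```python
# Hypothetical daily interview counts for a three-day rolling tracking poll.
daily = {'Fri': 140, 'Sat': 130, 'Sun': 150, 'Mon': 135}

monday_release = ['Fri', 'Sat', 'Sun']     # sample behind Monday's report
tuesday_release = ['Sat', 'Sun', 'Mon']    # sample behind Tuesday's report

shared = set(monday_release) & set(tuesday_release)
overlap_n = sum(daily[d] for d in shared)           # interviews counted twice
total_n = sum(daily[d] for d in tuesday_release)
print(sorted(shared))                  # ['Sat', 'Sun']: two days reused
print(round(overlap_n / total_n, 2))   # ~0.67 of Tuesday's sample is recycled
```

Any day-to-day "movement" therefore rests entirely on swapping one small daily slice for another, which is why it is rarely statistically meaningful.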

Conflicting Reports of Public Opinion


On many issues, the public begins with no strongly held view or preconceived notion of
what appropriate public policy might be. While forming their own views, many citizens
scan the opinion horizon to see what others think as a reference. This is one of the
important ways that dissemination of poll results educates the public, and they in
turn rely upon accurate reporting. But evaluating the quality of data obtained in polls
and surveys presents a special problem for journalists, since their formal training in
interpreting poll results is generally weak and inadequate. There are even inadvertent
instances where media polls have reported diametrically opposed ‘findings’—sometimes
even from separate polls released at about the same time. Most commonly this occurs
as an artifact of question wording. It can create complicated interpretive problems
for readers and viewers who might want to assess their fellow citizens' opinions by
comparing poll results on the same topic.

For example, CBS News and the New York Times, based on their own poll, reported
on public evaluations of the impact of the Enron scandal quite differently than CNN
did using a specially commissioned Gallup poll. CBS produced a story on January 27,
2002 under the headline ‘Poll finds Enron's taint clings more to G.O.P. than Democrats.’
CNN/USA Today reported on January 27, 2002 the results from a Gallup Poll they
commissioned under the headline ‘Bush gets benefit of doubt from public in latest
poll.’ So was Enron hurting the Democrats or Republicans more? The answer lies in a
comparison of the specific questions asked.

The CBS News/New York Times question and marginals were:


From what you know so far, do you think executives of the Enron
Corporation had closer ties to the Republican Party or closer ties to the
Democratic Party?

Republican Party 45%
Democratic Party 10%
Both equally (volunteered) 10%
Don't know 34%

From this single question, the suggestion is that Americans thought the Enron
executives were closer to the Republicans than the Democrats.

When CNN/USA Today released their results, they reported that Americans felt that
the Enron Corporation was involved with both the Republicans and Democrats in
Congress, according to the following three questions, but did not link Enron to members
of the Bush administration any more than to the Democrats in Congress:

Which of the following statements best describes your view of the
Republicans' in Congress/Democrats' in Congress/members of the
Bush administration's involvement with the Enron corporation?

                                     Republicans     Democrats       Members of Bush
                                     in Congress     in Congress     Administration
Did something illegal                33%             16%             15%
Did something unethical
  but nothing illegal                41%             35%             32%
Did not do anything
  seriously wrong                    30%             18%             28%
Don't know/No opinion                16%             31%             25%


Americans generally do not pay a great deal of attention to corporate scandals; only
23% in the Gallup sample were following the story 'very closely,' for example. So
this would be an instance of an issue rising rapidly to public visibility when citizens
are unlikely to have strongly held prior views. But what were readers and viewers of
these poll results to believe—did Americans hold the Republicans more liable than the
Democrats?


Partisanship and the Republican Revolution


Party identification is one of the most fundamental concepts underlying the analysis
of political phenomena in the United States and elsewhere. Developed originally by
political psychologists as a measure of the voter's ‘affective orientation to an important
group-object in his environment’ (Campbell, Converse, Miller, & Stokes, 1960, p. 121),
from the early 1950s it has been viewed as the most important predictor of candidate
preference and voter choice among individual voters. At the same time, distinctions
are made between this attitudinal predisposition to identify with a party, which can
only be measured through personal interviews, and various behavioral measures of
partisanship such as voting, which can be measured with election returns as well as
surveys. The distributions of partisans and party-based voting behavior in any political
system are two of its most important properties.

During the ‘Reagan revolution,’ political scientists and historians, as well as political
activists and strategists, seriously debated whether the United States was undergoing
another partisan realignment, that is a fundamental and durable shift in the relative
distribution of people who identify themselves as Democrats and Republicans. This is a
biennial debate that occurs most frequently around national elections, and it has political
consequences for how the policy initiatives of the president and Congress are framed in
terms of their public support.

The basic data source that fuels the controversy is a time series of survey-based
estimates of party identification. In the United States, differences in partisanship have
long been observed between samples of successively more restricted populations


of adults: those who are registered to vote, those who indicate they are likely to vote
in a given election, and those who actually go to the polls become increasingly more
Republican. Borrelli, Lockerbie, and Niemi (1987) analyzed 51 poll results between
1980 and 1984 in evaluating three possible causes of these variations: (1) the relative
effects of the timing of the polls in relation to Election Day, (2) differences in the
population sampled, and (3) the wording of the partisanship question. Interviewing
close to Election Day was strongly related to the size of the relationship between voting
behavior and identification, either because respondents bring their identifications into
consonance with their expected votes or because they misinterpret the question as
inquiring about their intended votes. The main question wording difference involved
references to partisanship ‘as of today,’ ‘generally speaking’ or ‘usually,’ or ‘regardless
of how you may vote.’

The results showed that all of these predictors were statistically significant and operated
in the expected direction; and they explained a large proportion of the variance (78%)
in the difference in the proportion of identifiers with the two major parties. The purpose
of the analysis was not to corroborate whether or not a realignment was taking place
in this period, as measured by changes in basic attitudes toward the political parties.
Rather it was to alert analysts interested in this question (including journalists) to the
potential effect of methodological issues on substantive interpretations. The distribution
of partisanship is a function of who gets interviewed and when, and what questions
they are asked. It turns out that debates about realignment that take place around
election results reflect a peculiarity of the measurement of the very phenomenon itself in
conjunction with impending voting behavior. And post-election surveys showed that no
durable realignment occurred.

Evaluating Data Quality


Most of the poll-based reporting about politics, in the United States and elsewhere,
consists of stories organized around surveys that news organizations commission or
conduct. However, some stories are offered to news organizations by interest groups
or individuals because they believe that public consumption of the information they
want disseminated will be enhanced by the credibility of a source like a newspaper
or network evening news show. When such stories are ‘shopped around’ to news


organizations, the availability of polling data related to the content increases
the likelihood that journalists will see the information as newsworthy and run the story.

An interesting example of such a strategy occurred with the Republican Party's
development of the 'Contract with America,' an organizing device for their 1994
congressional campaign (Traugott & Powers, 2000). Republican officials and strategists
designed the Contract as a unifying theme for the fall campaign in an attempt to
nationalize their effort to gain control of the US House of Representatives. At the roll
out of the Contract, they promoted it to journalists with the claim that each of its 10
‘reforms’ was supported by at least 60% of the American public. Although this claim
was widely reported in the media across the entire campaign period, it was not true.
This episode provides an interesting case study of how political strategists can take
advantage of unwary and untrained journalists in order to frame a campaign by invoking
public support for their agenda through (alleged or implied) polling data (→ The Use of
Surveys by Governments and Politicians; → The Use of Voter Research in Campaigns).

Although many journalists may be familiar with sources of information about polls,
almost none are familiar with sources of other information they can use to corroborate
data from a poll they are evaluating. This information is available from data archives
that contain substantial holdings of public opinion data in various forms, as well as
some holdings on public opinion in many other countries (→ Archiving Poll Data). They
provide critical information that permits journalists to make independent assessments
of the reliability and validity of public opinion data collected or reported by others. Some
of these archives specialize in polls conducted by and for media organizations, while
others contain long-term trend data collections from academic survey organizations.
This information can come in any one of three different formats: data summaries in the
form of tables and charts from individual or similar polls or surveys; question databases
that provide aggregate results (frequency distributions) from topical questions used by
a variety of polling sources; and the actual computerized data files that allow users to
perform their own statistical manipulations. Access to most of these sources is available
online, or through subscriptions to services like Lexis-Nexis to which virtually every
political reporter has access.


Conclusions

Consumers of poll results rely upon journalists to select and report poll results and
public opinion data in a way that provides accurate information and a context for
interpretation. Accurate information comes from good data that are appropriately
analyzed and interpreted. Context provides a basis for understanding that often
involves a comparison with other questions recently asked on the same topic, previous
administrations of the same question, or analysis of relevant subgroups in the sample.
Beyond the preparation of stories based upon the poll results, packages of stories can
be produced that relate the poll findings to interviews with or stories about ‘real people’
who hold the same views or behave in the same way.

References
Attorney General's Commission on Pornography (1986). Final report. Washington,
DC: Government Printing Office.

Borrelli, S., Lockerbie, B., & Niemi, R. G. (1987). Why the Democrat-Republican
partisanship gap varies from poll to poll. Public Opinion Quarterly, 51, 115–119.

Campbell, A., Converse, P. E., Miller, W. E., & Stokes, D. E. (1960). The American
voter. New York: John Wiley & Sons.

Herbst, S. (1993). Numbered voices: How opinion polling has shaped American
politics. Chicago: University of Chicago Press.

Smith, T. W. (1987). The use of public opinion data by the Attorney General's
Commission on Pornography. Public Opinion Quarterly, 51, 249–267.

Traugott, M. W., & Lavrakas, P. J. (2004). The voter's guide to election polls (3rd ed.).
Lanham, MD: Rowman & Littlefield.


Traugott, M. W., & Powers, E. (2000). Did public opinion support the Contract with
America? In P. J. Lavrakas & M. W. Traugott (Eds.), Election polls, the news media,
and democracy (pp. 93–110). New York: Seven Bridges Press.

