Polls and Public Opinion

Put simply, public opinion is what the public thinks, and a poll is a means for learning those views. But this definition masks substantial complexity. There is much debate about what the public is, what might constitute its thoughts, what polls actually measure, and what the import of expressed opinions may be.

What Is the Public?


Consider some images: a crowd at a baseball game, protesters marching with signs and banners, authors of letters to a newspaper, strikers on a picket line, parents at a school board meeting, supporters at an election rally, citizens of a country, members of a special-interest organization (like the ACLU), commentators on a blog, a thousand adults interviewed for a Gallup Poll. In political and social psychologist Floyd Allport's (inelegant) language, these are "multi-individual situations" in which people may express themselves. But they differ in significant ways. The public is displayed variously as a disconnected assemblage (a mass) or as a unified group with a common purpose (e.g., a crowd or spectators vs. campaign supporters or protesters). It may be a group whose existence depends on a particular event (e.g., spectators, meeting attendees, strikers), or individuals whose connection spans space and time, such as members of a special-interest organization. Some publics are formed by participants' deliberate action (e.g., attending a meeting), and others depend upon external agency (a polling firm gathering interviews). In short, the public is portrayed variously as more or less allied, more or less inclusive, more or less transitory, more or less volitional. The modes and quality of communication also vary widely. People express themselves in cacophonous or melodic tones, in concert with others or alone, spontaneously or prompted, openly or anonymously, with or without much knowledge and consideration.

These varied images underlie conflicting views about polls and public opinion. Those who advocate the use of polls envision an inclusive process that goes beyond specific situations or involved publics. The barriers to membership are few; one need not demonstrate any particular knowledge or interest, nor act in support of beliefs, nor join with others to affect policy. To be a member of the public, one need only agree to express confidential opinions on questions put by the polling organization. By contrast, those who are opposed to polling have a more exclusive, issue-specific view of the public. Membership costs are higher: belonging depends on the degree of one's engagement with a particular policy question, one's association with others whose level of engagement is also high, and one's willingness to be identified as taking a side and acting on belief. According to this view, polls produce top-down, manufactured publics, not genuine manifestations of public opinion. But for George Gallup and other polling pioneers, polls actually empower individual citizens who otherwise would find it difficult to be heard. Because pollsters seek a cross-section of society to interview, polls overcome the advantages of class, education, or special interest that ordinarily facilitate political participation.

What, then, is the real, genuine public? An often-cited exchange at the American Sociological Association meetings in the late 1940s summarizes the argument. Two eminent social scientists on this panel, Herbert Blumer and Theodore Newcomb, represented the two schools of thought just described. Blumer first attacked the idea of polling because, he argued, public opinion is rooted in interest groups that are not captured in cross-section samples. For Blumer, and many theorists who continue to cite his argument, polls construct an artificial public by aggregating the views of randomly sampled individuals. Thus, Blumer argued, polls are blind to the empirical reality of the organization of the public in social ties among members.

Newcomb's reply is less often cited. His principal point was that polling is not incompatible with studying public opinion that is based in social organizations. Cross-section surveys can recruit respondents who are group members and examine their views in comparison to others. Polls can inquire into the social sources of individual opinions. Polls can examine how different publics (knowledgeable, ignorant, involved, disengaged, socially connected, isolated) differ in their opinion expression. Blumer was mistaken in conflating the way polls had been employed by some practitioners with the ways in which they could be utilized. In sum, Newcomb argued that Blumer's "artificial public" analysis was a straw man. Polls can, at once, give a picture of what a cross-section of society thinks about an issue and also examine how group affiliations shape opinion.

It is difficult to determine which man won the debate, however. The years since their exchange have seen both frequent return to Blumer in scholarly critiques of the polling enterprise and marked growth in the number and types of polls. The public constructed through polls has become a mainstay of modern discourse and a continuing source of academic criticism. But we do not have to choose between Blumer's and Gallup's visions of the public. Rather, we need to recognize that there are multiple publics: the egalitarian public constructed by polls; the hierarchical, organic, interest-group public; and others. Harwood Childs noted that public opinion "is not the name of a something, but a classification of a number of somethings." Debates over the true public are rooted not in empirical criteria but in normative views that give more or less weight to ideals of egalitarianism, inclusiveness, degree of group association, engagement, behavioral manifestation, and so forth.

What Is the Public Thinking?


The opinion part of public opinion involves both descriptive and normative components as well. It refers to perceptions and beliefs about what is happening in the world (the "pictures in our heads" described by American journalist Walter Lippmann) and to judgments related to those cognitions. Opinions frequently are verbal expressions. In addition to responses to poll questions, opinions can be conveyed in many ways, including contacts with elected representatives or customer service agents and letters and comments to media outlets. The Internet has opened up many methods of opinion expression, including self-publication on websites and blogs and opportunities to review and comment on everything from commercial transactions to Sunday sermons. Of course, elections are a form of opinion expression. Opinions might even be inferred, with less precision, from the collective action of crowds.

In the polling world, opinions were originally thought to be verbal expressions of firm, underlying attitudes and values. More recently, they have been theorized as spontaneously constructed responses to opinion questions, sampled from top-of-mind cognitions and feelings. This divergence follows decades of research on the knowledge basis for public opinion, the extent to which opinions reflect political ideology, and the connection between opinion and behavior.

Philip Converse's research has shaped discourse on what the public is thinking for the past 50 years. In The American Voter and seminal papers on public knowledge, Converse noted that the average level of American citizens' political knowledge is low and the variance is high. A few members of the public possess a great deal of information, while the great majority have very little. The amount of knowledge a person possesses has implications for how opinions relate to one another and how they change. Converse found that knowledgeable people are more likely to hold opinions that are organized in an ideological framework. Most citizens do not look at political issues through an ideological lens. Individual opinions also do not change in ways that one would expect. Looking at panel data from three waves of the National Election Study (the basis for The American Voter), Converse demonstrated that stasis or slow change in aggregate opinion about public policy from 1956 to 1960 masked remarkable individual-level shifts. Individual shifts virtually cancelled one another, leading to the impression that there was not much change on the whole.

Converse's findings implied that it was wrong-headed to look for sophisticated understanding in most individuals' opinions. This view is supported by methodological research showing that measured opinion is susceptible to minor changes in question wording and structure, as well as research finding that a notable number of poll respondents will give opinions about imaginary or virtually unknown policies and proposals. Scholars who differ with Converse question the criteria he used to judge respondent rationality, or they focus not on characteristics of individual opinion but on how opinion looks in the aggregate. In the first category, some have argued that people do not need a store of political knowledge or an ideological stance to participate effectively in politics. If they follow heuristics provided by organizations (political parties and interest groups), they can form ideas about which policies to support or oppose. In the second group, scholars have found meaningful patterns of opinion change and effects on policy when examining aggregate opinion over time. Such findings are discussed in more detail below.

There has also been an effort to enhance the quality of opinion in the mass public. Led by nongovernmental organizations such as Public Agenda and by news organizations, groups of citizens have been recruited to participate in extensive briefings and discussion on important issues so that they have the information to express knowledgeable views. National and local issue conventions have been held in both the United States and Great Britain. In the most elaborate form of these gatherings, put forward as alternatives to public opinion polls, organizers bring together in one location a large group of citizens for lectures on public issues and deliberation about them. Participants' opinions are measured prior to and after the convention, with the intent of showing how informed opinion differs from the usual findings produced by polls. Factual information and deliberation are meant to convert raw, malleable opinions into more sound judgments. The participants are selected through probability telephone sampling (thus seeking the ability to generalize the convention results to the larger population) and provided funds to attend the convention.

News organizations have sponsored issue conventions as a way to enhance coverage and to involve audiences. For a time in the 1990s, such events were often the centerpiece of civic or public journalism programs adopted by a number of news organizations around the United States. (Civic journalism programs were efforts by news organizations to identify issues of importance to readers, to report extensively on those matters, and to feed the information back to the audience.) But these efforts have not supplanted public opinion polls. They did not achieve consistently noteworthy change in the quality of opinions expressed by participants. The cost of issue conventions, their limited issue scope, and the lack of compelling evidence for positive effects have combined to reduce their prevalence.
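Converse's observation that flat aggregate trends can mask substantial individual churn, discussed above, is easy to illustrate with a small simulation. The sketch below is purely hypothetical; the panel size and the switching probability are invented for illustration and are not Converse's NES figures.

```python
import random

random.seed(1)

N = 1500          # hypothetical panel size (not the NES sample size)
FLIP_PROB = 0.25  # assumed chance a respondent switches sides between waves

# Wave 1: a roughly even split on a yes/no policy question
wave1 = [random.random() < 0.5 for _ in range(N)]

# Wave 2: each respondent independently switches with probability FLIP_PROB
wave2 = [(not ans) if random.random() < FLIP_PROB else ans for ans in wave1]

pct1 = 100 * sum(wave1) / N
pct2 = 100 * sum(wave2) / N
switchers = sum(a != b for a, b in zip(wave1, wave2))

print(f"Wave 1 support: {pct1:.1f}%")
print(f"Wave 2 support: {pct2:.1f}%")
print(f"Switched answers: {switchers} of {N} respondents")
```

Because switches in one direction are offset by switches in the other, the two marginals stay close even though roughly a quarter of the panel changed its answer; this is the pattern Converse read as evidence against firm, well-anchored individual attitudes.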

What Do Polls Measure?


Polls measure opinions concerning matters that poll sponsors judge to be interesting and important. They do not measure opinions on other issues that may be significant but do not capture the attention of sponsors. Since many polls are funded by news organizations, the topics covered in questionnaires frequently deal with issues that are currently in the news. Polling organizations often follow these topics over time to track the rise and fall of attention to them.

There has been a symbiosis of polls and news organizations since before the advent of scientific polling. Newspapers conducted straw polls of readers prior to elections in the nineteenth century. The Literary Digest sent pre-election questionnaires to subscribers and nonsubscribers early in the twentieth century. George Gallup and other early pollsters syndicated their information to newspaper clients beginning in the 1930s. A few major news organizations founded in-house polling units later in the century, while others contracted with polling firms for exclusive access to data.

Several factors appear to underlie the easy match between public opinion research and news organizations. Obviously, public opinion is a form of news. Reporting on public views of current events is one of the functions of news media in a democracy. News executives, in addition, have long recognized the appeal of opinion news to consumers. Finally, the act of polling itself is seen as a way to engage readers or viewers in the content offered by the news organization. One can see polls of the day or even polls on the subject of particular television programs. One of the most popular American entertainment television programs (American Idol) involves audience voting. The advent of communication media such as email and text messaging permits synchronous polling of the audience.

In the category of opinion news, pre-election polls are one of the most prominent and controversial examples. George Gallup made his reputation and created the current genre of pre-election polls with his work in the 1936 election. Betting his newspaper clients that his results would more closely match the election outcome than those of the Literary Digest, whose poll predictions had been the previous gold standard, Gallup predicted the landslide Roosevelt victory while the Digest picked the loser, Alf Landon. Gallup's win led to the adoption of scientific sampling methods for subsequent elections. The science involved was probability (or near-probability) sampling. The Digest had failed because it did not have a method of sampling prospective voters that allocated to each a chance of selection. Instead, it relied on attracting huge numbers of responses to its mail poll. Gallup and other pollsters (Archibald Crossley, Elmo Roper) demonstrated that a much smaller sample, appropriately selected, would represent the population of prospective voters better.
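The Digest-versus-Gallup contrast can be sketched in a short simulation. The figures below are invented for illustration, not historical estimates: suppose 55 percent of the electorate favors candidate A, but supporters of candidate B are more likely to return the straw poll's mailing. A modest equal-probability sample lands near the true figure, while the far larger but self-selected mail poll points to the wrong winner.

```python
import random

random.seed(7)

TRUE_SUPPORT = 0.55  # assumed share of voters favoring candidate A (illustrative)

def probability_sample(n):
    """Every prospective voter has an equal chance of selection."""
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(n))
    return hits / n

def straw_poll(mailings, response_a=0.05, response_b=0.15):
    """Huge mail-out in which candidate B's supporters respond more often."""
    for_a = returned = 0
    for _ in range(mailings):
        supports_a = random.random() < TRUE_SUPPORT
        responds = random.random() < (response_a if supports_a else response_b)
        if responds:
            returned += 1
            for_a += supports_a
    return for_a / returned

print(f"True support for A:            {TRUE_SUPPORT:.1%}")
print(f"Probability sample of 1,500:   {probability_sample(1_500):.1%}")
print(f"Straw poll, 500,000 mailings:  {straw_poll(500_000):.1%}")
```

The point is the one Gallup, Crossley, and Roper pressed: sample size cannot compensate for a selection mechanism that reaches one side of the electorate more readily than the other.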
After this early success, it has become common for new pollsters to prove their mettle in pre-election polls. A track record of calling elections correctly has established the bona fides of polling firms, which, in turn, has led to further political and commercial business. And subsequent pre-election polling failures (e.g., in 1948 and 1980) have led to doubts about the entire survey enterprise. The importance of the high-profile pre-election poll to the field justifies a closer look at its workings.

Several major issues confront those who would carry out a pre-election poll. Begin with the selection of the sample: pollsters want to find out the preferences of people who are going to vote in the upcoming election. This means reaching beyond the usual eligibility criterion for surveys (adulthood) to engage people whose political history and election interest suggest that they are likely to vote. Estimating voting likelihood is fraught with difficulty, particularly in elections that may attract a new, young cohort of voters. In the 2008 New Hampshire Democratic presidential primary, the Gallup pre-election poll, like others, miscalled the race in favor of Barack Obama (Hillary Clinton actually won). In post-election analysis, Gallup reported that its likely voter model was a prime reason for the miscue: the raw vote intention data Gallup collected were closer to the election result than were the data weighted by likelihood of turnout.

In addition to likely voter estimation, the pre-election poll must deal with the possibility of shifting preferences as the campaign leading up to the election plays out. Pre-primary polls face a riskier environment because, with all candidates seeking to represent the same political party, the effect of the prospective voter's party allegiance on preferences is nullified; the pollster cannot use party identification to predict vote choice. In all campaigns there is the possibility of a shift in voter preferences just as the campaign comes to an end. The timing of the final poll before the election can, therefore, make a big difference in how well the poll estimates the outcome. Pollsters commonly do not predict election outcomes until they have done the last poll of the contest, and, even then, not if the race is tight.

The pressure on the final poll results is exacerbated by the speed with which they must be assembled. While other sorts of surveys have the luxury of call-backs to prospective respondents who are not contacted on the first or second try, final pre-election polls, seeking the very latest breakdown of vote intentions, must forgo repeated attempts to reach difficult-to-reach respondents. This means that the final vote intention estimates may be biased if the hard-to-contact prospective voters are numerous and if they differ in their preferences from those who could be recruited for interviews. The threat of nonresponse bias, an increasingly important concern for all surveys, is that much greater in an environment in which attempts to reach nonrespondents are severely limited. The pollster relies upon post-survey adjustments (weights) to correct for demographic imbalances in the achieved sample. This solution may or may not suffice to compensate for the missing respondents.
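As a rough illustration of the post-survey adjustments mentioned above, the sketch below weights a hypothetical achieved sample to known population shares on a single variable (age group). All numbers are invented, and real pre-election weighting schemes, which typically rake across several demographics and fold in a likely-voter model, are considerably more elaborate.

```python
# Hypothetical achieved sample in which younger respondents are underrepresented
# relative to assumed population benchmarks (all figures invented for illustration).
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}  # assumed benchmarks
sample_counts    = {"18-29": 100,  "30-49": 350,  "50+": 550}   # achieved interviews
support_for_a    = {"18-29": 0.65, "30-49": 0.50, "50+": 0.42}  # candidate A share by group

n = sum(sample_counts.values())

# Unweighted estimate: every interview counts equally.
unweighted = sum(sample_counts[g] * support_for_a[g] for g in sample_counts) / n

# Cell weight = population share / sample share, so underrepresented cells count for more.
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}

weighted_total = sum(sample_counts[g] * weights[g] for g in sample_counts)
weighted = sum(sample_counts[g] * weights[g] * support_for_a[g]
               for g in sample_counts) / weighted_total

print(f"Unweighted support for A: {unweighted:.1%}")
print(f"Weighted support for A:   {weighted:.1%}")
print("Cell weights:", {g: round(w, 2) for g, w in weights.items()})
```

As the text notes, an adjustment of this kind helps only to the extent that the respondents who are weighted up actually resemble the hard-to-reach voters they stand in for.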
Pollsters working on pre-election surveys also face the common problem of measuring respondent preferences. But much more is riding on the accuracy of preference measurement in the pre-election poll than in the commercial survey whose results never become public. It is necessary to try to nail down prospective voters' preferences, even when those inclinations may be malleable. Pollsters need to establish rules for dealing with prospective voters who lean toward a particular candidate but express no firmer preference. They must also deal with those who report that they are undecided about how they will vote. Some pollsters favor dropping these respondents from the analysis, while others insist that undecideds in races involving an incumbent candidate should be allocated to the challenger (on the view that undecided voters in those races are holdouts against the incumbent, about whom they must already know a good deal).

Considering the significant problems confronting pre-election pollsters, their track record is quite good. Periodic reviews of poll performance suggest that there are far more close fits between poll estimates and vote outcomes than spectacular failures. (The failures, such as the Chicago Daily Tribune's 1948 headline "Dewey Defeats Truman," live longer in memory.) Because of problems plaguing the entire survey industry, most notably growing nonresponse, it is not clear whether the established track record will be maintained in elections to come.

Whether or not it is, pre-election polls have established another sort of prominence in journalistic treatments of elections and in commentary on contemporary politics. In 2004, in line with increasingly bitter partisanship in American politics, media polls came under attack from both the Left and the Right for pre-election estimates unfavorable to the preferred candidate. The complaint that news coverage of elections is dominated by the horse race has been with us for several decades. In the late 1970s and 1980s this complaint appeared to resonate with editors at elite news organizations, and poll coverage highlighted issues rather than candidate standings. By 2008, however, there had been a marked proliferation of polling organizations offering pre-election poll numbers and of news aggregator websites, such as Pollster.com and FiveThirtyEight.com, which collected, synthesized, and modeled poll findings to track the horse race with more putative accuracy than ever before.

Apart from pre-election polls, news organizations sponsor or conduct numerous polls that serve as the focus of coverage or accompany broader stories. These polls are often pegged to milestones (e.g., anniversaries of events such as the beginning of the Iraq war, or the State of the Union address). The polls provide data that not only add systematic evidence on the state of American society but also, archived at venues such as the Roper Center for Public Opinion Research, give scholars the opportunity to track the public's viewpoints over time. Such polls serve as a more authoritative counterweight to the many man-on-the-street interviews that populate much journalistic coverage of current events.

This is not to say that polls measure all significant trends in public views. As noted earlier, they address issues that sponsors feel are worth examining. Other forms of public opinion expression are useful for seeing what polls are not measuring. Taeku Lee has noted that letters addressed to elected officials may contain evidence of public concern about issues that are not judged important by poll patrons. He argues for the analysis of constituency mail to understand the opinions of an active public during the Civil Rights era. He notes that the reliance of public opinion scholars on survey evidence may lead to erroneous conclusions about the roles of political elites and non-elites in public policy formation. This is so because routine survey practice may not capture the views of interested citizens whose attempts to influence government action in a particular sphere do not coincide with survey efforts to measure public opinion in that topic area. During the Civil Rights Movement, in particular, survey measurement of opinion lagged behind the efforts of letter-writing African Americans to influence government policy. As a consequence of this disjuncture in timing, Lee argues, analyses of the Civil Rights Movement that rely on survey data overemphasize the role of elites such as Martin Luther King Jr. and neglect the importance of hundreds of letters to the White House from ordinary citizens. More investigations of what the polls do not capture would be a valuable addition to research on public opinion.

What Is the Impact of Public Opinion?


It is fairly easy to make the case that public opinion measured in polls has little, some, or a great deal of influence on public policy. The case for minimal impact rests on the fact that public opinion measured in polls appears uninformed and contradictory and that politicians routinely deny that they pay attention to polls. In addition, the routine success of some organized interests in policymaking over the expressed views of the public in polls (viz., gun control in the United States) casts doubt on the effect of polls on policy.

A picture of more potent influence can be drawn if we consider cases where poll data appeared to stand in the way of elite action. Bill Clinton's unwavering popularity in the polls almost certainly played a role during the Senate trial after his impeachment. The strongest case for the impact of polls is to be found in studies of White House polling operations and in long-term, aggregate measurements of public mood and policymaking. The Nixon, Carter, Reagan, and Clinton White Houses had elaborate polling operations (Nixon, who actually wrote poll questions, kept his multiple polling operations secret, even from each other). The two Bush administrations also made use of polls, but endeavored to appear immune to public opinion influence. Political scientists who take the long view and examine aggregates of polls and policy actions have noted that the broad contours of public opinion do appear to lead general trends in policy.

What Is the Future of Polls?


Marked changes in lifestyles, cultural norms, and technology are having a major impact on polling. The heyday of polling was probably the 1970s through the 1990s, when the telephone made it possible to do quick and relatively cheap polls. Compared to earlier times, when time-consuming face-to-face data collection was required (because probability sampling of telephone numbers had not yet been developed), the telephone era featured far more polls, and more news organizations were able to sponsor them.

The telephone era may now be coming to an end, owing to the decreasing willingness of people to respond to poll invitations, to increased use of call-blocking technology, and to the rapid growth of cell phone usage, which is supplanting landline connections in increasing numbers of households. Pollsters are scrambling to deal with these developments. Meanwhile, the Internet has opened a new possibility for contacting poll respondents, but major obstacles stand in the way of its becoming the next dominant technology. There is no sampling frame of e-mail addresses that corresponds to the universal list of telephone numbers that can be sampled through random digit dialing; therefore, there is no way to construct a probability sample of Internet users. Further, while Internet penetration is now quite high, it still falls short of telephone penetration, excluding many people from possible participation in web polls.

Volunteer web panels are now a big business. They obviously include only those people who want to participate in surveys (in exchange for money or other gratuities). Thus, the panels are not representative of the larger population, and the weighting schemes that seek to bring volunteer-based results in line with the general population lack validation. These developments may lead pollsters back to archaic methods such as the face-to-face interview or the mail questionnaire. Whatever the outcome, the future of polling rests on the ingenuity of methodologists and the financial investment of news organizations and other sponsors.

Peter V. Miller

Entry Citation:
Miller, Peter V. "Polls and Public Opinion." Encyclopedia of Journalism. 2009. SAGE Publications. 18 Apr. 2010. <http://www.sageereference.com/journalism/Article_n300.html>.
