Types of Nonprobability Samples
Comparing Probability and Nonprobability Samples
Guidelines for Good Sampling
How Good Must the Sample Be?
General Advice
Supplemental Reading
6. Sampling II: Population Definition and Sampling Frames
Defining the Survey Population
Framing the Population
For More In-Depth Reading
7. Sampling III: Sample Size and Sample Design
Sampling Error Illustrated
Confidence Interval Approach to Sample Size
Power Analysis Approach to Sample Size
Nonstatistical Approaches to Sample Size
Stratified Sample Designs
Cluster Sampling to Improve Cost-Effectiveness
Computing Sampling Errors
Additional Resources
Methodology Appendix 1: Using Models in Sampling
8. Questionnaire Development I: Measurement Error and Question Writing
Measurement Error and Response Error
A Model of the Response Process
Response Task Problems
Questionnaire Design as Process
Factors in Questionnaire Development
Writing Questions
The Structure of Survey Questions
The Use of Qualifiers in Survey Questions
Response Categories
Rating Scales
Avoiding or Identifying Common Weaknesses in Survey Questions
Supplemental Readings
9. Questionnaire Development II: Questionnaire Structure
Introducing the Survey
What Questions Should the Questionnaire Begin With?
Grouping Questions Into Sections
Questionnaire Length and Respondent Burden
Formatting Instruments for Multimode Data Collection
Supplemental Reading
Methodology Appendix 2: Questionnaire Evaluation Workshop
10. Questionnaire Development III: Pretesting
Objectives of Pretesting
Types of Response Problems
Samples for Pretests
Pretesting Procedures
Conventional Pretests and Interviewer Debriefings
Postinterview Interviews
Behavior Coding
Cognitive Interviews
Respondent Debriefing
Expert Panel
Assessing Interviewer Tasks
Experimental Design for Question Evaluation
Revising and Retesting
Pilot Tests
Some Last Advice
For More In-Depth Reading
Methodology Appendix 3: Cognitive Interviewing Workshop
11. Data Collection II: Controlling Error in Data Collection
Measures of Survey Quality
Unit Nonresponse
Recent Increases in Nonresponse
Item Nonresponse
Balancing Survey Goals and Ethical Obligations to Participants
Controlling Error in Data Collection
Interviewer-Administered Surveys
Computer-Assisted Data Collection
Minimizing Item Nonresponse
Interviewer Effects
Self-Administered Surveys: Mail and Internet
Data Collection Costs and Contingencies: Planning for the Unexpected
For More In-Depth Reading
Methodology Appendix 4: An Overview of Organization Surveys
12. Postsurvey Statistical Adjustments and the Methodology Report
Nonresponse Bias
Data Adjustments: Weighting and Imputation
The Methodology Report
Surveys in Other National and Cultural Contexts
Supplemental Reading
Appendix A: University of Maryland Undergraduate Student Survey
Appendix B: Maryland Crime Survey
Appendix C: AAPOR Code of Professional Ethics
Appendix D: Internet Resources
References
Index
PREFACE
There were 9 years between the first (1996) and second (2005) editions of this book. Although fewer years have
elapsed since that last edition than between the first two, the revisions are substantially larger, reflecting the
evolution of a field driven by rapid technological and societal change, but also by the effect of advances in
survey methodology. In addition to the necessary updates, we have expanded the technical range of the book.
First, the chapters on sampling (Chapters 5–7) have been completely restructured and expanded. This revision
is consistent with the increased focus on survey error (Chapter 2) as a framework for thinking about each stage
of survey design and implementation. Second, we have added four Methodology Appendixes to address some
topics of special interest (Using Models in Sampling and An Overview of Organization Surveys) and to
provide in-depth workshop treatments on topics that are crucial to containing measurement error, often the
major threat to survey quality (Questionnaire Evaluation Workshop and Cognitive Interviewing Workshop).
In this edition, we have also taken a broader view of the needs of diverse researchers who may employ the
survey method. We use examples drawn from a wide range of social research—including sociological, political,
educational, public health, marketing, and business applications—and we consider issues faced by researchers
in varying contexts, including academic versus nonacademic research, surveys of organizations versus
households or individuals, and cross-cultural surveys.
The treatments of data collection (Chapters 4 and 11) and questionnaire development (Chapters 8–10)
reflect a wider range of methodologies (including online, cell phone, intercept, and multimode surveys), as
well as contributions to survey practice from recent survey methods research (e.g., the use of cognitive models
for understanding the response process and effective design of response categories and rating scales). Finally,
from Chapter 1 (Survey Practice) through Chapter 12 (Postsurvey Statistical Adjustments and the
Methodology Report), we have integrated the ethical treatment of research subjects into survey design,
questionnaire construction, and data collection.
The guidance of our editor, Vicki Knight, has been essential in this effort. The technical assistance
provided by Nadra Garas was critical, and the efficient support of Kalie Koscielak and Lyndsi Stephens of
SAGE Publications contributed greatly to the timely editing and processing of the manuscript. We appreciate
the extensive, helpful comments on a draft of the book provided by reviewers engaged by SAGE Publications:
Jennifer Reid Keene, University of Nevada, Las Vegas; Lisa House, University of Florida; Julio Borquez,
University of Michigan, Dearborn; Yasar Yesilcay, University of Florida; Chadwick L. Menning, Ball State
University; Brenda D. Phillips, Oklahoma State University; Bob Jeffery, Sheffield Hallam University; Michael
F. Cassidy, Marymount University; Young Ik Cho, University of Wisconsin, Milwaukee; Richard J. Harris,
University of Texas at San Antonio; Valentin Ekiaka Nzai, Texas A&M University, Kingsville; and Mary-
Kate Lizotte, Birmingham-Southern College.
In particular, we want to recognize our intellectual debt to the late Seymour Sudman, a seminal figure in
modern survey research, who was our teacher and mentor for many years, beginning when we were all, in
various capacities, at the University of Illinois Survey Research Laboratory. This book would not have been
possible without the guidance of his scholarship and his example as a researcher.
ABOUT THE AUTHORS
Johnny Blair is an independent consultant in survey methodology. Previously, he was a Principal Scientist at
Abt Associates and a manager of survey operations at the University of Maryland Survey Research Center and
the University of Illinois (Urbana-Champaign) Survey Research Laboratory. Over a 40-year career in survey
research, he has designed and/or implemented surveys for health (including HIV high-risk populations),
education (including large-scale student assessments), environment (including contingent valuation), and
criminal victimization (including proxy reporting) surveys, among other areas. He has conducted
methodological research on sampling rare populations, measurement error in proxy reporting, cognitive and
usability testing of computer-based student writing assessments, and data quality in converted refusal
interviews. He has been involved in a decade-long program of research on cognitive interview pretesting, most
recently on the theory of pretest sample size and the validation of pretest problem identification. He has been
a member of the editorial board of Public Opinion Quarterly, has served on several National Research Council
Panels, and has been a consultant to many federal agencies, academic organizations, law firms, and other
companies. Since 1996, he has served on the Design and Analysis Committee (DAC) for the National
Assessment of Educational Progress, NAEP, the Nation’s Report Card.
Ronald F. Czaja is Associate Professor Emeritus of Sociology and Anthropology at North Carolina State
University. He taught courses in both undergraduate and graduate research methodology and medical
sociology. His methodological research focused on sampling rare populations, response effects in surveys, and
the cognitive aspects of questionnaire design. From 1969 to 1990, he worked at the Survey Research
Laboratory, University of Illinois at Chicago, as project coordinator, co-head of sampling, assistant director,
and principal investigator.
Edward A. Blair is the Michael J. Cemo Professor of Marketing & Entrepreneurship and Chair of the
Department of Marketing & Entrepreneurship in the C. T. Bauer College of Business, University of
Houston. He has served as chair of the American Statistical Association Committee on Energy Statistics,
which advises the U.S. Energy Information Administration on the gathering, analysis, and dissemination of
information. He also has served on the U.S. Census Bureau Census Advisory Committee of Professional
Associations and as a National Science Foundation panelist in Innovation and Organizational Change. His
research has been published in journals such as the Journal of Marketing, the Journal of Marketing Research, the
Journal of Consumer Research, Public Opinion Quarterly, Sociological Methods and Research, the Journal of
Advertising Research, the Journal of the Academy of Marketing Science, the Journal of Retailing, and elsewhere. He
has served on the editorial boards of the Journal of Marketing Research, the Journal of the Academy of Marketing
Science, and the Journal of Business Research and as national conference chair for the American Marketing
Association (AMA). In addition to teaching university classes, Dr. Blair has taught in a variety of professional
programs, including the AMA’s School of Marketing Research (for research professionals) and Advanced
Research Techniques Forum.
ONE
SURVEY PRACTICE
It would be difficult to name another social science method that has so quickly and pervasively penetrated our
society as the sample survey. In fewer than two generations, the notion of relying on relatively small samples
to measure attitudes and behaviors has grown from a little-noted curiosity to the dominant data collection
practice. Surveys are used by academic researchers, governments, businesses, political parties, media, and
anyone who wants insights into what people are thinking and doing. Survey data underlie much of our knowledge regarding public attitudes, behaviors, and social conditions.
The requirements of these diverse survey applications have naturally spawned a wide range of performance
practices. How a survey is designed and implemented for a federal agency is vastly different from one for a newspaper poll.
A large and rapidly expanding survey literature reflects both the numerous applications of survey data
collection and the inherently multidisciplinary nature of survey research. To successfully design and
implement a survey, we need to understand the basics of a few disciplines and techniques. At some points, we
can rely on scientific understanding and training; at others, we need a knowledge of accepted practices and,
throughout the process, a healthy dose of common sense.
WHAT IS A SURVEY?
Surveys collect information by interviewing a sample of respondents from a well-defined population. The
survey population may comprise individuals, households, organizations, or any element of interest. The
boundaries of the population may be defined by demographic characteristics (e.g., persons 18 years of age or
older), geographic boundaries (residing in Maryland), behaviors (who voted in the last election), intentions
(and intend to vote in the next election), or other characteristics. The population should be defined so that its
members can be unequivocally identified. In addition, we must be convinced that the majority of respondents
will know the information we ask them to provide. It makes little sense to ask people questions, such as the net worth of their family, that many in the targeted population, maybe most, will not be able to answer.
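As a rough illustration, a population definition of this kind can be expressed as an explicit eligibility rule applied to a sampling frame. The frame, field names, and values below are entirely hypothetical, not drawn from any actual survey:

```python
import random

# Hypothetical sampling frame: each record is one person on the frame.
frame = [
    {"name": "A", "age": 34, "state": "MD", "voted_last_election": True},
    {"name": "B", "age": 17, "state": "MD", "voted_last_election": False},
    {"name": "C", "age": 52, "state": "VA", "voted_last_election": True},
    {"name": "D", "age": 41, "state": "MD", "voted_last_election": True},
]

def eligible(person):
    """Unequivocal population definition: Maryland residents,
    18 years of age or older, who voted in the last election."""
    return (person["age"] >= 18
            and person["state"] == "MD"
            and person["voted_last_election"])

# Restrict the frame to the survey population, then draw a
# simple random sample of its members.
population = [p for p in frame if eligible(p)]
random.seed(1)
sample = random.sample(population, k=2)
print([p["name"] for p in sample])
```

Because the rule is written down explicitly, any frame member can be classified as in or out of the population without ambiguity.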
Surveys can be conducted in person, by phone, by mail, or over the Internet, among other methods. The
identifying characteristic of a survey interview is the use of a fixed questionnaire with prespecified questions.
The questions are most often, but not always, in a closed format in which a set of response alternatives is
specified. Using a fixed questionnaire allows a researcher to control the interview without being present,
which allows the interview to be conducted at relatively low cost, either through self-administration by
respondents (as in Internet or mail surveys) or through administration by interviewers who, although trained,
are typically paid at a modest rate. The resulting data are then entered into a data file for statistical analysis.
Surveys are, of course, not the only method used by social researchers to gather data. Alternatives include
observation, depth interviews, focus groups, panels, and experiments. Key points of comparison between
surveys and other methods, as well as examples of how other methods are sometimes used to support survey
design, are as follows:
• As the term suggests, we gather observational data by observing events rather than by asking questions.
Observation can capture information about inanimate phenomena that can’t be questioned directly, and
observation doesn’t suffer from respondents misunderstanding the question, forgetting what happened, or
distorting their answers to make a good impression. However, for observational data to be feasible, the
phenomenon of interest must be observable; mental states such as attitudes or intentions are out. After an
election, we can observe how precincts voted, but before the election, we cannot observe how they intend to
vote. We can observe how precincts voted, but not why they voted that way. It also may not be cost-effective
to gather observational data. We can observe how someone uses public transportation by following that person
for a month, but it is much less costly to ask him or her about last month’s behavior.
In some situations, observation may be used to learn more about a particular population while developing
plans for a survey. For example, in preparing for a survey of parents about their children’s dietary habits, we
observed kids at a few school lunches. Behaviors such as discarding, sharing, or exchanging foods between
children led to some improvements in the parent survey. More important, it showed that some of the
children’s eating could not be fully reported by the parents.
• Depth interviews, like surveys, gather data through questioning. However, depth interviews do not use a
fixed questionnaire. The interviewer usually has a list of topics to be covered and may use fixed questions to
get respondents started on these topics, but the overall goal is to let respondents express their thoughts freely
and to probe as needed. This approach is good for getting deep, detailed, complex information that doesn’t
work well in a survey. However, these interviews usually must be administered in person, and a highly skilled
interviewer is needed to manage the unstructured interaction, resulting in much higher cost than a survey
interview and consequently less ability to interview a broad sample. So, for example, if you want to know how
top officials in city government interact and make decisions, then you might do depth interviews with a small
number of city leaders, but if you want to know how the broad electorate rates the performance of city
government using standard questions, a survey will be more useful.
In planning a survey, depth interviews can be used to uncover issues that are important to include in the
questionnaire, learn how potential respondents think about the topic, develop response alternatives, and learn
how the population might react to survey procedures (such as those for gaining cooperation). This can be
especially useful when the population and/or topics are not familiar to the researcher.
• Focus groups, like depth interviews, do not use a fixed questionnaire. Unlike depth interviews, which are
conducted one-on-one, focus groups facilitate interaction among group participants. This interaction can
inform the researcher about reasons for differences in a population on an issue, or other social dynamics. As
with depth interviews, focus groups are sometimes used in the survey planning process to uncover issues that
are important to include in the questionnaire, learn how potential respondents think about the topic, develop
response alternatives, and learn how the population might react to survey procedures.
• Panels are groups of research participants that provide information over time. This information can be
collected through observation or self-reports, including diaries where people record their behavior over time or
repeated surveys of the same respondents. The greatest strength of panels is that measures of change over time
are precise and not subject to variation due to shifting samples of respondents. The greatest weakness of panel
research is that many people are not willing to accept the commitment required by a panel, while
many others drop out after a short stay, so it is difficult to keep panels representative of the population. Also,
panels tend to be more expensive than most onetime surveys because panel members must be given some
reward for their ongoing effort, while most surveys do not reward respondents. Therefore, if a researcher
wants to study how political preferences evolve through a campaign, she might recruit a panel of registered
voters and track them, but if her focus is on accurate estimates of candidate preference at any given point in
time, she is more likely to use a series of independent surveys.
• An experiment is a study in which the researcher actively manipulates one or more experimental variables,
then measures the effects of these manipulations on a dependent variable of interest, which can be measured
by either observation or self-report. For example, a researcher interested in knowing which of two
advertisements has a stronger effect on willingness to buy a product could conduct a survey that measures
respondents’ awareness of each ad and willingness to buy, and correlate willingness to buy with awareness of
each ad. Alternately, the researcher could show participants either one ad or the other, with random
assignment, and measure subsequent willingness to buy. The goal of experimentation is to verify that the
observed relationships are causal, not just correlational.
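The logic of random assignment can be sketched in a small simulation. The participant scores and the assumed effect of ad A below are invented for illustration; the point is only that randomization makes the difference in group means an estimate of the causal effect:

```python
import random
import statistics

random.seed(7)

# Hypothetical participants; "receptivity" is a baseline
# willingness-to-buy score on an arbitrary scale.
participants = [random.gauss(5.0, 1.0) for _ in range(200)]

# Randomly assign each participant to see ad A or ad B.
group_a, group_b = [], []
for receptivity in participants:
    if random.random() < 0.5:
        # Assume (for this sketch) that ad A raises willingness
        # to buy by 0.5 points.
        group_a.append(receptivity + 0.5)
    else:
        group_b.append(receptivity)

# Because assignment was random, baseline receptivity is balanced
# across groups in expectation, so the mean difference estimates
# the ad's causal effect rather than a mere correlation.
effect = statistics.mean(group_a) - statistics.mean(group_b)
print(round(effect, 2))
```

A correlational design (measuring ad awareness and willingness to buy in the same survey) could not separate the ad's effect from preexisting differences among those who happened to see it.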
In survey research, experiments have proved to be powerful tools for studying effects of survey question
wording on response. Much of what we know about how to write good survey questions is a product of
experimental research, in which alternative versions of questions intended to measure the same construct are
compared. For example, through experimentation, researchers learned that asking whether respondents
thought a certain behavior should be “allowed” or “forbidden” can produce different response distributions,
even when the alternative wordings logically mean the same thing. There is a very large literature on this
response effects research. Classic works include Sudman and Bradburn (1974), Schuman and Presser (1981),
and Tourangeau, Rips, and Rasinski (2000).
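A wording experiment of this kind is typically analyzed by comparing the response distributions produced by the two forms. The counts below are hypothetical, and the analysis shown is a standard two-proportion z-test, not the method of any particular published study:

```python
import math

# Hypothetical split-ballot counts: half the sample was asked whether
# a behavior should be "allowed", half whether it should be "forbidden".
# "Restrictive" = answered "not allowed" (form 1) or "forbidden" (form 2).
n1, restrictive1 = 500, 250   # "allow" wording
n2, restrictive2 = 500, 180   # "forbid" wording

p1 = restrictive1 / n1
p2 = restrictive2 / n2

# Two-proportion z-test: do the two wordings yield different
# response distributions?
pooled = (restrictive1 + restrictive2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(round(p1, 2), round(p2, 2), round(z, 2))
```

A z-statistic beyond about 1.96 in absolute value would indicate, at the 95% confidence level, that the wording difference produced a real difference in responses, even though the two wordings logically mean the same thing.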
Exhibit 1.1 summarizes how surveys relate to other data collection methods.
Method            Strengths                                     Weaknesses
Observation       Not subject to reporting bias                 Can't measure mental states; not efficient
                                                                for measuring infrequent behaviors
Depth interviews  Can probe freely and go into depth            Expensive; poor population coverage
Focus groups      Can probe freely and go into depth;           Expensive; poor population coverage
                  can see social dynamics
Panels            Shows changes over time                       Expensive; a limited number of people
                                                                will participate
THE COMBINATION OF DISCIPLINES
Survey research is inherently interdisciplinary. Sampling and estimation have a theoretical basis in probability
theory and statistics; to select an efficient sample requires some knowledge of those areas. Data collection
involves persuasion of respondents and then, on some level, social interaction between them and interviewers.
Developing questionnaires and conducting interviews require writing skills to construct questions that elicit
desired information using language that respondents can easily understand and do not find too difficult to
answer. Interviews or questionnaires that use computers or the Internet require programming or other
specialized skills. Very few survey professionals have hands-on expertise in all of these areas, but they do have
a basic understanding of what needs to be done to successfully implement each part of a survey.
Unlike some scientific or scholarly enterprises, surveys are usually a team effort of many people with diverse
skills. One can find examples of surveys designed and implemented by the lone researcher, but they are the
exception. Even if the researcher who formulates the research questions also designs the questionnaire and
analyzes the data, that person will almost always use help in sample design, data collection, and database
construction. Whether a survey is done by a research organization or as a class project, there is division of
labor, coordination of tasks, and management of the costs.
To design and implement a quality survey within available resources, the practitioner relies on a relatively
small number of statistical principles and practical guidelines. The goal of this book is to explain those
fundamentals and illustrate how they are applied to effectively conduct small- to moderate-scale surveys.
THE SOCIAL AND SOCIETAL CONTEXT
Survey practices and methodology change as our knowledge grows and our experience increases. And, of
course, just as in every other field, changes in technology affect survey design and implementation. It is also
important to be sensitive to the impact of societal, demographic, and cultural changes on survey practice.
For example, 40 years ago, most national surveys in the United States were conducted only in English. The
proportion of the population that was fluent only in some other language was considered small enough that
the cost of interviewing in those languages could not be justified. The decrease in coverage was too small to be
of concern. Today, any important national survey allows for, at a minimum, Spanish-language interviews and
often interviews in additional languages. This allows inclusion of both people who do not speak English at all
and those who, although they can communicate in English, are much more at ease in their first language.
Likewise, many states, or smaller areas, have large enclaves of non-English-language groups. The California
Health Interview Survey is conducted in several Asian languages, partly for reasons of coverage of the entire
state population, but also because of the need to sample enough people in some of these groups for their
separate analysis.
The need to accommodate different languages is just part of a larger imperative to be aware of different
cultural norms. A more detailed discussion is beyond the scope of this book, but this is just one example of
how changes within society affect how we go about conducting surveys.
General social norms can also have important effects. People’s willingness to allow a stranger into their
homes has greatly changed since the middle of the last century when in-person household interviews were the
norm. Such surveys are still conducted, but the costs and procedures necessary to make them successful have
limited the practice to only the most well-funded programs. Similarly, in recent decades, the rise of
telemarketing and telephone screening devices has affected researchers’ success in contacting members of the
general population by telephone and, once contacted, securing their participation in telephone surveys. The
rise of cell phones and the Internet continues the accelerating technological change that can be a benefit or an
obstacle to conducting surveys. One indicator of the impact of these factors on survey practice can be seen in
the shifting proportions of surveys administered by mail, phone, web, or in person, as shown in Figure 1.1.
Societal changes can occur at different rates in different parts of the population. Some new technologies, for
example, may be initially adopted more heavily by younger than older people or by the more affluent than less
affluent. Such patterns can become fixed or be only a step along the way to wider diffusion through the
population. How important it is for a survey designer to take account of some technical development or
changing social norm may depend on what population the survey targets.
Source: Adapted from information in the newsletter Survey Research, a publication of the Survey Research Laboratory, University of Illinois,
Chicago and Urbana-Champaign.
ETHICAL TREATMENT OF SAMPLE MEMBERS
A subtext in the discussion of the societal context of a survey is that the survey designer must accommodate
the potential survey respondents. Respondents are volunteers we depend on. They are not obligated to
participate, but the extent to which they agree to participate affects the survey’s success, its cost, and, in some
ways, its quality. Later, we will consider in detail methods to elicit the cooperation of sample members.
However, the methods we can use are not just determined by their effectiveness. There are ethical boundaries
we cannot cross. There are two concepts central to our treatment of respondents: informed consent and
protection of confidentiality.
While we apply extensive efforts to obtain respondents’ cooperation in the survey, the respondents’
agreement must be reasonably informed. This means that we must not mislead respondents as to the nature
and purpose of the research. We must honestly answer their questions about the project, including who is
sponsoring it, its major purposes, the amount of time and effort that will be required of respondents, the
general nature of the subject matter, and the use that will be made of the data. We must not badger or try to
intimidate respondents either into participating or into answering particular questions after they agree to be
interviewed.
Once respondents have agreed to be interviewed, we then assume an obligation to protect the
confidentiality of their answers. This is true whether or not we have explicitly told respondents we will do so.
Results or data sets that permit the identification of individual respondents should never be made available to
others.
These ethical guidelines are recognized by the major professional organizations of survey researchers and
are typically overseen by human subjects review committees at universities and other organizations that engage
in population research.
These obligations are no less applicable when a project is conducted by a class or a large team of researchers
than when a single researcher is involved. In fact, additional cautions may need to be observed in the former
situation because there are additional opportunities for inadvertent breaches of these ethical guidelines when
many people are privy to the sample and the data.
Revealing or discussing an individual respondent’s answers outside of the research group is inappropriate.
Also, it is not proper to recontact survey respondents for purposes not related to the research for which they
originally agreed to participate. The sample list used for a survey should not be made available to others (even
other legitimate researchers) without the additional consent of the respondents. If the data are made available
to another party, all identifiers that would permit linking answers to individuals should be removed.
APPROACH AND OVERVIEW
The main concern in designing and conducting a survey is to achieve the research or other data collection
objectives within available resources. Sometimes the initial objectives may be adjusted along the way to
accommodate resource constraints or practical obstacles, but we must not lose sight of them. For example,
assume that during planning, we determine that the budget is insufficient for both the preferred sample size
and number of survey questions. In such a circumstance, it may be necessary either to reduce the sample size
and forego separate analyses of some subgroups that will not be possible with fewer respondents or to reduce
the length of the interview and eliminate topics of secondary interest. But we still proceed toward a clear set of
objectives. Along with those objectives, we need a sense of how good the survey needs to be for our purposes.
“Good” can be specified in a number of ways, which we will return to. Many conceptions of survey quality
involve the idea of accuracy or, conversely, of error. In the next chapter, we consider the types of error surveys
are subject to. Following this, we provide an overview of survey planning and implementation, from
determining the main survey methods to the decisions and tasks at each stage of carrying out the survey. The
means of collecting the data is, along with sample size, the main factor in survey cost. Factors in choosing a
method of data collection are covered in the next chapter. Three chapters each are devoted to sampling and
then to questionnaire development. Following these topics is a chapter on conducting data collection while
controlling the main sources of error in that stage of the survey. The last chapter describes the key post–data
collection activities, including preparing a report of the survey methodology.
Two Methodology Appendixes deal with topics of particular importance, at a greater level of detail:
Questionnaire Evaluation Workshop and Cognitive Interviewing Workshop. Two other Methodology
Appendixes focus on Using Models in Sampling and An Overview of Organization Surveys. Throughout the
book, we emphasize the value of resources available on the Internet. Appendix D lists several sources that are
of particular importance themselves or that provide links to additional information.
TWO
SURVEY ERROR
Survey researchers use the term error to refer to "deviations of obtained survey results from those that are true reflections of the population" (Groves, 1989, p. 6). In this chapter, we provide a framework for understanding the sources of survey error.
Survey findings are essentially probabilistic generalizations. The sampling process provides a statistical basis
for making inferences from the survey sample to the population. The implementation of the sampling and data collection is subject to error that can erode the strength of those inferences.
We can visualize how this erosion happens if we imagine the perfect sample survey. The survey design and
questionnaire satisfy all the research goals. A list is available that includes accurate information about every
population member for sampling purposes. The selected sample precisely mirrors all facets of the population
and its myriad subgroups. Each question in the instrument is absolutely clear and captures the dimension of
interest exactly. Every person selected for the sample is contacted and immediately agrees to participate in the
study. The interviewers conduct the interview flawlessly and never—by their behavior or even their mere
presence—affect respondents’ answers. The respondents understand every question exactly as the researcher
intended, know all the requested information, and always answer truthfully and completely. Their responses
are faithfully recorded and entered, without error, into a computer file. The resulting data set is a model of
validity and reliability.
Except for trivial examples, we cannot find such a paragon. Each step in conducting a survey has the
potential to move us away from this ideal, sometimes a little, sometimes a great deal. Just as all the processes
and players in our survey can contribute to obtaining accurate information about the target population, so can
each reduce that accuracy. We speak of these potential reductions in accuracy as sources of survey error.1
Every survey contains survey errors, most of which cannot be totally eliminated within the limits of our
resources.
To design and implement an effective survey, we must recognize which sources of error pose the greatest
threat to survey quality and how the design and implementation of the survey will affect its exposure to various
sources of error. The critical sources of error will differ depending on the survey objectives, topic, population,
and its methods of sampling and data collection.
TYPES AND SOURCES OF ERROR
When we work with survey data, we typically want our data to provide an accurate representation of the
broader population. For example, assume that your friend is running for election to the local school district
board, and you agree to do a survey to help her learn what people think about the issues. If your data show
that 64% of the respondents want the school district to place more emphasis on reading skills, you want this
64% figure to be an accurate reflection of feelings among the population at large. There are three reasons why
it might not be:
Sampling Error
Sampling error refers to the fact that samples don’t always reflect a population’s true characteristics, even if
the samples are drawn using fair procedures. For example, if you flip a coin 10 times, then 10 more times,
then 10 more times, and so on, you won’t always get five “heads” out of 10 flips. The percentage of “heads” in
any given sample will be affected by chance variation. In the same way, if you ask 100 people whether the
school district should cut other programs to place more emphasis on reading, the percentage that answer
affirmatively will be affected by chance variation in the composition of the sample.
The level of sampling error is controlled by sample size. Ultimately, if the sample encompasses the entire
population (i.e., the sample is a complete census of the population), then sampling error would be zero,
because the sample equals the population. More generally, as samples get larger and larger, the distribution of
possible sample outcomes gets tighter and tighter around the true population figure, as long as the samples are
not biased in some way. To put it another way, larger samples have less chance of producing results that are
uncharacteristic of the population as a whole. We expand upon these points in Chapters 5, 6, and 7 and show
how to set sample size based on acceptable levels of sampling error.
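The way chance variation shrinks as samples grow can be seen in a short simulation. This is an illustrative sketch only: the 64% support figure echoes the school-district example above, and the sample sizes are arbitrary choices for comparison.

```python
import random

def sample_proportion(true_p, n, rng):
    """Draw n independent respondents; return the share answering 'yes'."""
    return sum(rng.random() < true_p for _ in range(n)) / n

def spread(true_p, n, draws, rng):
    """Typical deviation of the sample percentage across repeated samples."""
    estimates = [sample_proportion(true_p, n, rng) for _ in range(draws)]
    mean = sum(estimates) / draws
    return (sum((e - mean) ** 2 for e in estimates) / draws) ** 0.5

rng = random.Random(42)
# With true support at 64%, compare the chance spread of a small sample
# to that of a sample 16 times larger.
small = spread(0.64, 100, 2000, rng)
large = spread(0.64, 1600, 2000, rng)
print(f"n=100:  typical error ~ {small:.3f}")
print(f"n=1600: typical error ~ {large:.3f}")
```

Quadrupling precision requires sixteen times the sample, which is why sample size decisions (Chapter 7) balance acceptable sampling error against cost.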
Sample Bias
Sample bias refers to the possibility that members of a sample differ from the larger population in some
systematic fashion. For example, if you are conducting a survey related to a school bond election, and you limit
interviews to people with school children, then your results may not reflect opinions among voters at large; if
nothing else, people with school children may be more likely to support bonds to pay for school
improvements. Similarly, if your sample contains a disproportionately large percentage of older people because
you didn’t make the callbacks needed to find younger people, or if your sampling procedure underrepresents
married versus single people, then you are likely to have sample bias.
Sample bias can arise in three general ways. Coverage bias will occur if some segment of the population is
improperly excluded from consideration in the sample or is not available through the method employed in the
research. Conducting interviews near school grounds with parents as they arrive to pick up their children and
implicitly limiting the sample to people with school children is an example of potential coverage bias if the
population of interest is the broader electorate. Likewise, conducting a telephone survey implicitly limits the
sample to people with telephones, and conducting an online survey implicitly limits the sample to people with
online access, either of which will exclude some low-income households, resulting in potential coverage bias if
these households differ from the broader population on the items being measured.
Selection bias will occur if some population groups are given disproportionately high or low chances of
selection. For example, if you conduct telephone interviews using conventional household landlines, and you
take the first adult to answer the phone, then married people will only have one chance in two of being
selected within their household, while single people have one chance in one, and married people will thus tend
to be underrepresented in a sample of possible voters.
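A small simulation makes the arithmetic of this selection bias concrete. The household mix below is hypothetical (half couples, half single adults), chosen only to show how "first adult to answer" underrepresents married people relative to their share of all adults.

```python
import random

rng = random.Random(7)

# Hypothetical frame: 10,000 households, half married couples (2 adults),
# half single adults.
households = []
for _ in range(10_000):
    if rng.random() < 0.5:
        households.append(["married", "married"])
    else:
        households.append(["single"])

adults = [a for hh in households for a in hh]
pop_married = adults.count("married") / len(adults)

# "First adult to answer": one adult per sampled household, so each
# married adult has only a 1-in-2 chance within the household.
sample = [rng.choice(hh) for hh in rng.sample(households, 2000)]
smp_married = sample.count("married") / len(sample)

print(f"married share in population: {pop_married:.2f}")  # about 0.67
print(f"married share in sample:     {smp_married:.2f}")  # about 0.50
```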
Finally, even if the sample is fairly drawn, nonresponse bias will occur if failure to respond is
disproportionate across groups. For example, if you send e-mails inviting people to click through to a survey
on issues related to the local schools, people who are less interested in those issues are probably less likely to
respond. If one of the topics on the survey concerns willingness to pay higher taxes to support the schools,
there may be substantial differences between respondents and nonrespondents on this topic, resulting in
nonresponse bias.
Unlike sampling error, sample bias is not controlled by sample size. Increasing the sample size does nothing
to remove systematic biases. Rather, sample bias is controlled by defining the population of interest prior to
drawing the sample, attempting to maximize population coverage, selecting a sample that fairly represents the
entire population, and obtaining data from as much of the selected sample as possible.
Nonsampling Error
Nonsampling error consists of all error sources unrelated to the sampling of respondents. Sources of
nonsampling error include interviewer error related to the administration of the survey, response error related
to the accuracy of response as given, and coding error related to the accuracy of response as recorded. All of
these may result in either random or systematic errors in the data. If random, they will add to random
sampling error to increase the level of random variation in the data. If systematic, they will be a source of bias
in the data, much like sample bias.
One source of interviewer error is cheating; the interviewer fails to administer the questionnaire, or portions
of the questionnaire, and simply fabricates the data. Cheating is controlled through some form of validation,
in which a supervisor or a third party verifies that the interview was conducted and key questions were asked.
This is typically done by telephone; when the interview is conducted, whether face-to-face or by phone, the
respondent is asked for a telephone number and told that a supervisor may call to verify the interview for
quality control purposes. Typically, 10% to 20% of interviews are validated. For surveys that are done on
location, validation may be done on site by a supervisor. For telephone surveys that are conducted in
centralized telephone interviewing facilities, validation may be replaced by monitoring interviews as they are
conducted, so the interviewer never knows when the supervisor will be listening in.
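Selecting the validation subsample is itself a small sampling task. The sketch below picks a random 15% of completed interviews for supervisor callbacks; the function name, rate, and interview IDs are illustrative assumptions, not part of any standard tool.

```python
import random

def pick_for_validation(interview_ids, rate=0.15, seed=11):
    """Randomly flag a share of completed interviews for a supervisor
    callback (the 10%-20% validation step described above)."""
    rng = random.Random(seed)
    k = max(1, round(rate * len(interview_ids)))
    return sorted(rng.sample(interview_ids, k))

completed = list(range(1, 201))           # 200 completed interviews
to_validate = pick_for_validation(completed)
print(len(to_validate))                   # 30 interviews, i.e. 15% of 200
```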
Interviewer error also may stem from question administration error, in which the interviewer does
something other than read the question as intended, or probing error. Regarding errors in question
administration, the interviewer may skip a question that should not be skipped, ask a question that should be
skipped, omit part of the question, misread the question, add to the question, or add some vocal inflection or
body language that conveys a point of view. Any of these actions, of course, may influence the response. With
respect to probing, if the respondent expresses confusion about the question or gives an inadequate answer,
interviewers are usually instructed to give neutral probes that do not presume an answer or convey a point of
view; for example:
• If the respondent says he or she did not follow the question, simply reread it.
• If the respondent asks about the meaning of some term, read a preset definition or say “whatever it means
to you.”
• If the respondent gives an answer to an open-ended question that is insufficiently specific, ask, “Can you
be more specific?” or “When you say X, what do you mean?”
• If the respondent gives an answer to a closed-ended question that does not match the categories, ask,
“Would you say [reread categories]?”
If the interviewer fails to probe an answer that should be probed, probes in a manner that changes the
question, presumes a certain answer, or leads the respondent toward a certain answer, then any of these
actions may influence the quality of response.
Question administration error and probing error are controlled through interviewer training. When first
hired, interviewers receive training that covers general principles such as “ask the question as written” and
when and how to probe. Subsequently, interviewers are given specific instructions and training for any survey
that they work on. This training normally includes at least one practice interview conducted under a
supervisor’s direction. The better the training, the lower the exposure to errors of this type. Also, data
collection may be spread across multiple interviewers, so any idiosyncrasies among interviewers manifest
themselves as variance across interviewers rather than bias from a single data source.
The next general source of nonsampling error is response error. Response error may occur for at least three
different reasons. Comprehension error will occur if the respondent does not understand a question correctly
or if different respondents understand it in different ways. For example, in a survey of school district issues, if
you ask respondents what priority they would give to programs for the development of reading skills, will they
know what you mean by programs and skills? Will some of them think about instruction for preschoolers while
others think about older students with reading deficiencies? Will this affect their answers? Comprehension
errors often stem from questions that are poorly thought out or poorly written, and they are controlled
through careful question design as well as pretesting the questionnaire to ensure that respondents understand
the questions as intended.
Knowledge error may occur if respondents do not know the answer to the question, or cannot recall it
accurately, but still provide an answer to the best of their ability. For example, if you ask parents how often
their children fail to eat all of their lunches at school, will parents know the answer? If you ask parents how
many hours per evening their children have spent on homework over the past month, will they be able to
recall accurately over this time frame? As with other forms of nonsampling error, knowledge errors may be
either random or systematic and hence a source of either increased variance or bias in the data. These errors
are generally controlled by getting the right respondent, screening for knowledge if appropriate, writing
questions that are realistic about what people might know, and matching the time frame of the question to a
plausible recall period for the phenomenon in question.
Reporting error may occur if respondents hesitate to provide accurate answers, perhaps because certain
responses are socially desirable or undesirable, or perhaps because respondents wish to maintain privacy. For
example, if you ask parents to report their children’s grades, they may be tempted to give answers that are
higher than the truth. If you ask parents to report how often their children have been given disciplinary
actions at school, they may underreport. Reporting errors may be random but often are a source of systematic
bias for sensitive questions. These errors are controlled in a variety of ways, including professionalism in the
survey to convince respondents that you care whether they answer but not what they answer, or using self-
report methods so the respondent does not have to present himself or herself to an interviewer.
The final source of nonsampling error is coding error. For open-ended questions, the coder may
misinterpret the answer and assign the wrong numerical code to be entered in the data. For categorical
questions, the interviewer (or respondent) may mark the wrong category. Also, for paper questionnaires,
interviewers or coders may select the correct code or category, but at the data entry stage, someone may enter
data incorrectly. To control such errors, open-ended questions may be assigned to at least two coders to
identify and resolve inconsistencies. Likewise, data may be double-entered to identify and resolve
inconsistencies. Also, questionnaires are edited to verify that the answers appear to be correctly recorded and
to check for interitem consistency where applicable (this process is automated for computer-based
questionnaires). In some cases, respondents may be recontacted to resolve apparent errors or inconsistencies.
Overall, nonsampling error is controlled, to the extent possible, by doing a good job of training and
supervising interviewers, using good questionnaire design, and exercising good control over coding and data
entry.