Samples for Pretests
Pretesting Procedures
Conventional Pretests and Interviewer Debriefings
Postinterview Interviews
Behavior Coding
Cognitive Interviews
Respondent Debriefing
Expert Panel
Assessing Interviewer Tasks
Experimental Design for Question Evaluation
Revising and Retesting
Pilot Tests
Some Last Advice
For More In-Depth Reading
Methodology Appendix 3: Cognitive Interviewing Workshop
11. Data Collection II: Controlling Error in Data Collection
Measures of Survey Quality
Unit Nonresponse
Recent Increases in Nonresponse
Item Nonresponse
Balancing Survey Goals and Ethical Obligations to Participants
Controlling Error in Data Collection
Interviewer-Administered Surveys
Computer-Assisted Data Collection
Minimizing Item Nonresponse
Interviewer Effects
Self-Administered Surveys: Mail and Internet
Data Collection Costs and Contingencies: Planning for the Unexpected
For More In-Depth Reading
Methodology Appendix 4: An Overview of Organization Surveys
12. Postsurvey Statistical Adjustments and the Methodology Report
Nonresponse Bias
Data Adjustments: Weighting and Imputation
The Methodology Report
Surveys in Other National and Cultural Contexts
Supplemental Reading
Appendix A: University of Maryland Undergraduate Student Survey
Appendix B: Maryland Crime Survey
Appendix C: AAPOR Code of Professional Ethics
Appendix D: Internet Resources
References
Index
PREFACE
There were 9 years between the first (1996) and second (2005) editions of this book. Although fewer years have
elapsed since that last edition than between the first two, the revisions are substantially larger, reflecting the
evolution of a field driven by rapid technological and societal change, but also by the effect of advances in
survey methodology. In addition to the necessary updates, we have expanded the technical range of the book.
First, the chapters on sampling (Chapters 5–7) have been completely restructured and expanded. This revision
is consistent with the increased focus on survey error (Chapter 2) as a framework for thinking about each stage
of survey design and implementation. Second, we have added four Methodology Appendixes to address some
topics of special interest (Using Models in Sampling and An Overview of Organization Surveys) and to
provide in-depth workshop treatments on topics that are crucial to containing measurement error, often the
major threat to survey quality (Questionnaire Evaluation Workshop and Cognitive Interviewing Workshop).
In this edition, we have also taken a broader view of the needs of diverse researchers who may employ the
survey method. We use examples drawn from a wide range of social research—including sociological, political,
educational, public health, marketing, and business applications—and we consider issues faced by researchers
in varying contexts, including academic versus nonacademic research, surveys of organizations versus
households or individuals, and cross-cultural surveys.
The treatments of data collection (Chapters 4 and 11) and questionnaire development (Chapters 8–10)
reflect a wider range of methodologies (including online, cell phone, intercept, and multimode surveys), as
well as contributions to survey practice from recent survey methods research (e.g., the use of cognitive models
for understanding the response process and effective design of response categories and rating scales). Finally,
from Chapter 1 (Survey Practice) through Chapter 12 (Postsurvey Statistical Adjustments and the
Methodology Report), we have integrated the ethical treatment of research subjects into survey design,
questionnaire construction, and data collection.
The guidance of our editor, Vicki Knight, has been essential in this effort. The technical assistance
provided by Nadra Garas was critical, and the efficient support of Kalie Koscielak and Lyndsi Stephens of
SAGE Publications contributed greatly to the timely editing and processing of the manuscript. We appreciate
the extensive, helpful comments on a draft of the book provided by reviewers engaged by SAGE Publications:
Jennifer Reid Keene, University of Nevada, Las Vegas; Lisa House, University of Florida; Julio Borquez,
University of Michigan, Dearborn; Yasar Yesilcay, University of Florida; Chadwick L. Menning, Ball State
University; Brenda D. Phillips, Oklahoma State University; Bob Jeffery, Sheffield Hallam University; Michael
F. Cassidy, Marymount University; Young Ik Cho, University of Wisconsin, Milwaukee; Richard J. Harris,
University of Texas at San Antonio; Valentin Ekiaka Nzai, Texas A&M University, Kingsville; and Mary-
Kate Lizotte, Birmingham-Southern College.
In particular, we want to recognize our intellectual debt to the late Seymour Sudman, a seminal figure in
modern survey research, who was our teacher and mentor for many years, beginning when we were all, in
various capacities, at the University of Illinois Survey Research Laboratory. This book would not have been
possible without the guidance of his scholarship and his example as a researcher.
ABOUT THE AUTHORS
Johnny Blair is an independent consultant in survey methodology. Previously, he was a Principal Scientist at
Abt Associates and a manager of survey operations at the University of Maryland Survey Research Center and
the University of Illinois (Urbana-Champaign) Survey Research Laboratory. Over a 40-year career in survey
research, he has designed and/or implemented surveys for health (including HIV high-risk populations),
education (including large-scale student assessments), environment (including contingent valuation), and
criminal victimization (including proxy reporting) surveys, among other areas. He has conducted
methodological research on sampling rare populations, measurement error in proxy reporting, cognitive and
usability testing of computer-based student writing assessments, and data quality in converted refusal
interviews. He has been involved in a decade-long program of research on cognitive interview pretesting, most
recently on the theory of pretest sample size and the validation of pretest problem identification. He has been
a member of the editorial board of Public Opinion Quarterly, has served on several National Research Council
Panels, and has been a consultant to many federal agencies, academic organizations, law firms, and other
companies. Since 1996, he has served on the Design and Analysis Committee (DAC) for the National
Assessment of Educational Progress (NAEP), the Nation’s Report Card.
Ronald F. Czaja is Associate Professor Emeritus of Sociology and Anthropology at North Carolina State
University. He taught courses in both undergraduate and graduate research methodology and medical
sociology. His methodological research focused on sampling rare populations, response effects in surveys, and
the cognitive aspects of questionnaire design. From 1969 to 1990, he worked at the Survey Research
Laboratory, University of Illinois at Chicago, as project coordinator, co-head of sampling, assistant director,
and principal investigator.
Edward A. Blair is the Michael J. Cemo Professor of Marketing & Entrepreneurship and Chair of the
Department of Marketing & Entrepreneurship in the C. T. Bauer College of Business, University of
Houston. He has served as chair of the American Statistical Association Committee on Energy Statistics,
which advises the U.S. Energy Information Administration on the gathering, analysis, and dissemination of
information. He also has served on the U.S. Census Bureau Census Advisory Committee of Professional
Associations and as a National Science Foundation panelist in Innovation and Organizational Change. His
research has been published in journals such as the Journal of Marketing, the Journal of Marketing Research, the
Journal of Consumer Research, Public Opinion Quarterly, Sociological Methods and Research, the Journal of
Advertising Research, the Journal of the Academy of Marketing Science, the Journal of Retailing, and elsewhere. He
has served on the editorial boards of the Journal of Marketing Research, the Journal of the Academy of Marketing
Science, and the Journal of Business Research and as national conference chair for the American Marketing
Association (AMA). In addition to teaching university classes, Dr. Blair has taught in a variety of professional
programs, including the AMA’s School of Marketing Research (for research professionals) and Advanced
Research Techniques Forum.
ONE
SURVEY PRACTICE
It would be difficult to name another social science method that has so quickly and pervasively penetrated our
society as the sample survey. In fewer than two generations, the notion of relying on relatively small samples
to measure attitudes and behaviors has grown from a little-noted curiosity to the dominant data collection
practice. Surveys are used by academic researchers, governments, businesses, political parties, media, and
anyone who wants insights into what people are thinking and doing. Survey data underlie our knowledge
regarding
The requirements of these diverse survey applications have naturally spawned a wide range of performance
practices. How a survey is designed and implemented for a federal agency is vastly different from how one is
designed for a newspaper poll.
A large and rapidly expanding survey literature reflects both the numerous applications of survey data
collection and the inherently multidisciplinary nature of survey research. To successfully design and
implement a survey, we need to understand the basics of a few disciplines and techniques. At some points, we
can rely on scientific understanding and training; at others, we need a knowledge of accepted practices and,
throughout the process, a healthy dose of common sense.
WHAT IS A SURVEY?
Surveys collect information by interviewing a sample of respondents from a well-defined population. The
survey population may comprise individuals, households, organizations, or any element of interest. The
boundaries of the population may be defined by demographic characteristics (e.g., persons 18 years of age or
older), geographic boundaries (residing in Maryland), behaviors (who voted in the last election), intentions
(and intend to vote in the next election), or other characteristics. The population should be defined so that its
members can be unequivocally identified. In addition, we must be convinced that the majority of respondents
will know the information we ask them to provide. It makes little sense to ask people questions, such as the
net worth of their family, that many in the targeted population, maybe most, will not be able to answer.
Surveys can be conducted in person, by phone, by mail, or over the Internet, among other methods. The
identifying characteristic of a survey interview is the use of a fixed questionnaire with prespecified questions.
The questions are most often, but not always, in a closed format in which a set of response alternatives is
specified. Using a fixed questionnaire allows a researcher to control the interview without being present,
which allows the interview to be conducted at relatively low cost, either through self-administration by
respondents (as in Internet or mail surveys) or through administration by interviewers who, although trained,
are typically paid at a modest rate. The resulting data are then entered into a data file for statistical analysis.
Surveys are, of course, not the only method used by social researchers to gather data. Alternatives include
observation, depth interviews, focus groups, panels, and experiments. Key points of comparison between
surveys and other methods, as well as examples of how other methods are sometimes used to support survey
design, are as follows:
• As the term suggests, we gather observational data by observing events rather than by asking questions.
Observation can capture information about inanimate phenomena that can’t be questioned directly, and
observation doesn’t suffer from respondents misunderstanding the question, forgetting what happened, or
distorting their answers to make a good impression. However, for observational data to be feasible, the
phenomenon of interest must be observable; mental states such as attitudes or intentions are out. After an
election, we can observe how precincts voted, but before the election, we cannot observe how they intend to
vote. We can observe how precincts voted, but not why they voted that way. It also may not be cost-effective
to gather observational data. We can observe how someone uses public transportation by following that person
for a month, but it is much less costly to ask him or her about last month’s behavior.
In some situations, observation may be used to learn more about a particular population while developing
plans for a survey. For example, in preparing for a survey of parents about their children’s dietary habits, we
observed kids at a few school lunches. Behaviors such as discarding, sharing, or exchanging foods between
children led to some improvements in the parent survey. More important, it showed that some of the
children’s eating could not be fully reported by the parents.
• Depth interviews, like surveys, gather data through questioning. However, depth interviews do not use a
fixed questionnaire. The interviewer usually has a list of topics to be covered and may use fixed questions to
get respondents started on these topics, but the overall goal is to let respondents express their thoughts freely
and to probe as needed. This approach is good for getting deep, detailed, complex information that doesn’t
work well in a survey. However, these interviews usually must be administered in person, and a highly skilled
interviewer is needed to manage the unstructured interaction, resulting in much higher cost than a survey
interview and consequently less ability to interview a broad sample. So, for example, if you want to know how
top officials in city government interact and make decisions, then you might do depth interviews with a small
number of city leaders, but if you want to know how the broad electorate rates the performance of city
government using standard questions, a survey will be more useful.
In planning a survey, depth interviews can be used to uncover issues that are important to include in the
questionnaire, learn how potential respondents think about the topic, develop response alternatives, and learn
how the population might react to survey procedures (such as those for gaining cooperation). This can be
especially useful when the population and/or topics are not familiar to the researcher.
• Focus groups, like depth interviews, do not use a fixed questionnaire. Unlike depth interviews, which are
conducted one-on-one, focus groups facilitate interaction among group participants. This interaction can
inform the researcher about reasons for differences in a population on an issue, or other social dynamics. As
with depth interviews, focus groups are sometimes used in the survey planning process to uncover issues that
are important to include in the questionnaire, learn how potential respondents think about the topic, develop
response alternatives, and learn how the population might react to survey procedures.
• Panels are groups of research participants that provide information over time. This information can be
collected through observation or self-reports, including diaries where people record their behavior over time or
repeated surveys of the same respondents. The greatest strength of panels is that measures of change over time
are precise and not subject to variation due to shifting samples of respondents. The greatest weakness of panel
research is that many people are unwilling to accept the commitment a panel requires, while many others drop
out after a short stay, so it is difficult to keep panels representative of the population. Also,
panels tend to be more expensive than most onetime surveys because panel members must be given some
reward for their ongoing effort, while most surveys do not reward respondents. Therefore, if a researcher
wants to study how political preferences evolve through a campaign, she might recruit a panel of registered
voters and track them, but if her focus is on accurate estimates of candidate preference at any given point in
time, she is more likely to use a series of independent surveys.
• An experiment is a study in which the researcher actively manipulates one or more experimental variables,
then measures the effects of these manipulations on a dependent variable of interest, which can be measured
by either observation or self-report. For example, a researcher interested in knowing which of two
advertisements has a stronger effect on willingness to buy a product could conduct a survey that measures
respondents’ awareness of each ad and willingness to buy, and correlate willingness to buy with awareness of
each ad. Alternatively, the researcher could show participants either one ad or the other, with random
assignment, and measure subsequent willingness to buy. The goal of experimentation is to verify that the
observed relationships are causal, not just correlational.
In survey research, experiments have proved to be powerful tools for studying effects of survey question
wording on response. Much of what we know about how to write good survey questions is a product of
experimental research, in which alternative versions of questions intended to measure the same construct are
compared. For example, through experimentation, researchers learned that asking whether respondents
thought a certain behavior should be “allowed” or “forbidden” can produce different response distributions,
even when the alternative wordings logically mean the same thing. There is a very large literature on this
response effects research. Classic works include Sudman and Bradburn (1974), Schuman and Presser (1981),
and Tourangeau, Rips, and Rasinski (2000).
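The logic of such a split-ballot experiment is easy to sketch in code. The counts below are invented for illustration (they are not the published findings); a simple two-proportion z-test asks whether the two wordings yield different response distributions:

```python
import math

# Hypothetical split-ballot counts (invented for illustration).
# Form A: should the behavior be "forbidden"?  Form B: should it be "not allowed"?
yes_a, n_a = 210, 500   # respondents endorsing the restriction under "forbid" wording
yes_b, n_b = 280, 500   # respondents endorsing it under "not allow" wording

p1, p2 = yes_a / n_a, yes_b / n_b

# Two-proportion z-test on the difference between the wordings
p_pool = (yes_a + yes_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p1 - p2) / se

print(f"'forbid' wording: {p1:.0%}   'not allow' wording: {p2:.0%}   z = {z:.2f}")
```

A |z| beyond 1.96 flags a wording effect at the 5 percent level; with these illustrative counts, two logically equivalent wordings produce sharply different distributions, which is exactly the pattern the response-effects literature documents.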
Exhibit 1.1 summarizes how surveys relate to other data collection methods.
Observation. Strengths: not subject to reporting bias. Weaknesses: can’t measure mental states; not
efficient for measuring infrequent behaviors.
Depth interviews. Strengths: can probe freely and go into depth. Weaknesses: expensive; poor population
coverage.
Focus groups. Strengths: can probe freely and go into depth; can see social dynamics. Weaknesses:
expensive; poor population coverage.
Panels. Strengths: shows changes over time. Weaknesses: expensive; a limited number of people will
participate.
THE COMBINATION OF DISCIPLINES
Survey research is inherently interdisciplinary. Sampling and estimation have a theoretical basis in probability
theory and statistics; to select an efficient sample requires some knowledge of those areas. Data collection
involves persuasion of respondents and then, on some level, social interaction between them and interviewers.
Developing questionnaires and conducting interviews require writing skills to construct questions that elicit
desired information using language that respondents can easily understand and do not find too difficult to
answer. Interviews or questionnaires that use computers or the Internet require programming or other
specialized skills. Very few survey professionals have hands-on expertise in all of these areas, but they do have
a basic understanding of what needs to be done to successfully implement each part of a survey.
Unlike some scientific or scholarly enterprises, surveys are usually a team effort of many people with diverse
skills. One can find examples of surveys designed and implemented by the lone researcher, but they are the
exception. Even if the researcher who formulates the research questions also designs the questionnaire and
analyzes the data, that person will almost always use help in sample design, data collection, and database
construction. Whether a survey is done by a research organization or as a class project, there is division of
labor, coordination of tasks, and management of the costs.
To design and implement a quality survey within available resources, the practitioner relies on a relatively
small number of statistical principles and practical guidelines. The goal of this book is to explain those
fundamentals and illustrate how they are applied to effectively conduct small- to moderate-scale surveys.
THE SOCIAL AND SOCIETAL CONTEXT
Survey practices and methodology change as our knowledge grows and our experience increases. And, of
course, just as in every other field, changes in technology affect survey design and implementation. It is also
important to be sensitive to the impact of societal, demographic, and cultural changes on survey practice.
For example, 40 years ago, most national surveys in the United States were conducted only in English. The
proportion of the population that was fluent only in some other language was considered small enough that
the cost of interviewing in those languages could not be justified. The decrease in coverage was too small to be
of concern. Today, any important national survey allows for, at a minimum, Spanish-language interviews and
often interviews in additional languages. This allows inclusion of both people who do not speak English at all
and those who, although they can communicate in English, are much more at ease in their first language.
Likewise, many states, or smaller areas, have large enclaves of non-English-language groups. The California
Health Interview Survey is conducted in several Asian languages, partly for reasons of coverage of the entire
state population, but also because of the need to sample enough people in some of these groups for their
separate analysis.
The need to accommodate different languages is just part of a larger imperative to be aware of different
cultural norms. A more detailed discussion is beyond the scope of this book, but this is just one example of
how changes within society affect how we go about conducting surveys.
General social norms can also have important effects. People’s willingness to allow a stranger into their
homes has greatly changed since the middle of the last century when in-person household interviews were the
norm. Such surveys are still conducted, but the costs and procedures necessary to make them successful have
limited the practice to only the most well-funded programs. Similarly, in recent decades, the rise of
telemarketing and telephone screening devices has affected researchers’ success in contacting members of the
general population by telephone and, once contacted, securing their participation in telephone surveys. The
rise of cell phones and the Internet continues the accelerating technological change that can be a benefit or an
obstacle to conducting surveys. One indicator of the impact of these factors on survey practice can be seen in
the shifting proportions of surveys administered by mail, phone, web, or in person, as shown in Figure 1.1.
Societal changes can occur at different rates in different parts of the population. Some new technologies, for
example, may be initially adopted more heavily by younger than older people or by the more affluent than less
affluent. Such patterns can become fixed or be only a step along the way to wider diffusion through the
population. How important it is for a survey designer to take account of some technical development or
changing social norm may depend on what population the survey targets.
Source: Adapted from information in the newsletter Survey Research, a publication of the Survey Research Laboratory, University of Illinois,
Chicago and Urbana-Champaign.
ETHICAL TREATMENT OF SAMPLE MEMBERS
A subtext in the discussion of the societal context of a survey is that the survey designer must accommodate
the potential survey respondents. Respondents are volunteers we depend on. They are not obligated to
participate, but the extent to which they agree to participate affects the survey’s success, its cost, and, in some
ways, its quality. Later, we will consider in detail methods to elicit the cooperation of sample members.
However, the methods we can use are not just determined by their effectiveness. There are ethical boundaries
we cannot cross. There are two concepts central to our treatment of respondents: informed consent and
protection of confidentiality.
While we apply extensive efforts to obtain respondents’ cooperation in the survey, the respondents’
agreement must be reasonably informed. This means that we must not mislead respondents as to the nature
and purpose of the research. We must honestly answer their questions about the project, including who is
sponsoring it, its major purposes, the amount of time and effort that will be required of respondents, the
general nature of the subject matter, and the use that will be made of the data. We must not badger or try to
intimidate respondents either into participating or into answering particular questions after they agree to be
interviewed.
Once respondents have agreed to be interviewed, we then assume an obligation to protect the
confidentiality of their answers. This is true whether or not we have explicitly told respondents we will do so.
Results or data sets that permit the identification of individual respondents should never be made available to
others.
These ethical guidelines are recognized by the major professional organizations of survey researchers and
are typically overseen by human subjects review committees at universities and other organizations that engage
in population research.
These obligations are no less applicable when a project is conducted by a class or a large team of researchers
than when a single researcher is involved. In fact, additional cautions may need to be observed in the former
situation because there are additional opportunities for inadvertent breaches of these ethical guidelines when
many people are privy to the sample and the data.
Revealing or discussing an individual respondent’s answers outside of the research group is inappropriate.
Also, it is not proper to recontact survey respondents for purposes not related to the research for which they
originally agreed to participate. The sample list used for a survey should not be made available to others (even
other legitimate researchers) without the additional consent of the respondents. If the data are made available
to another party, all identifiers that would permit linking answers to individuals should be removed.
APPROACH AND OVERVIEW
The main concern in designing and conducting a survey is to achieve the research or other data collection
objectives within available resources. Sometimes the initial objectives may be adjusted along the way to
accommodate resource constraints or practical obstacles, but we must not lose sight of them. For example,
assume that during planning, we determine that the budget is insufficient for both the preferred sample size
and number of survey questions. In such a circumstance, it may be necessary either to reduce the sample size
and forego separate analyses of some subgroups that will not be possible with fewer respondents or to reduce
the length of the interview and eliminate topics of secondary interest. But we still proceed toward a clear set of
objectives. Along with those objectives, we need a sense of how good the survey needs to be for our purposes.
“Good” can be specified in a number of ways, which we will return to. Many conceptions of survey quality
involve the idea of accuracy or, conversely, of error. In the next chapter, we consider the types of error surveys
are subject to. Following this, we provide an overview of survey planning and implementation, from
determining the main survey methods to the decisions and tasks at each stage of carrying out the survey. The
means of collecting the data is, along with sample size, the main factor in survey cost. Factors in choosing a
method of data collection are covered in the next chapter. Three chapters each are devoted to sampling and
then to questionnaire development. Following these topics is a chapter on conducting data collection while
controlling the main sources of error in that stage of the survey. The last chapter describes the key post–data
collection activities, including preparing a report of the survey methodology.
Two Methodology Appendixes deal with topics of particular importance, at a greater level of detail:
Questionnaire Evaluation Workshop and Cognitive Interviewing Workshop. Two other Methodology
Appendixes focus on Using Models in Sampling and An Overview of Organization Surveys. Throughout the
book, we emphasize the value of resources available on the Internet. Appendix D lists several sources that are
of particular importance themselves or that provide links to additional information.
TWO
SURVEY ERROR
Survey researchers use the term error to refer to “deviations of obtained survey results from those that are true
reflections of the population” (Groves, 1989, p. 6). In this chapter, we provide a framework for understanding:
Survey findings are essentially probabilistic generalizations. The sampling process provides a statistical basis
for making inferences from the survey sample to the population. The implementation of the sampling and
data collection are subject to error that can erode the strength of those inferences.
We can visualize how this erosion happens if we imagine the perfect sample survey. The survey design and
questionnaire satisfy all the research goals. A list is available that includes accurate information about every
population member for sampling purposes. The selected sample precisely mirrors all facets of the population
and its myriad subgroups. Each question in the instrument is absolutely clear and captures the dimension of
interest exactly. Every person selected for the sample is contacted and immediately agrees to participate in the
study. The interviewers conduct the interview flawlessly and never—by their behavior or even their mere
presence—affect respondents’ answers. The respondents understand every question exactly as the researcher
intended, know all the requested information, and always answer truthfully and completely. Their responses
are faithfully recorded and entered, without error, into a computer file. The resulting data set is a model of
validity and reliability.
Except for trivial examples, we cannot find such a paragon. Each step in conducting a survey has the
potential to move us away from this ideal, sometimes a little, sometimes a great deal. Just as all the processes
and players in our survey can contribute to obtaining accurate information about the target population, so can
each reduce that accuracy. We speak of these potential reductions in accuracy as sources of survey error.1
Every survey contains survey errors, most of which cannot be totally eliminated within the limits of our
resources.
To design and implement an effective survey, we must recognize which sources of error pose the greatest
threat to survey quality and how the design and implementation of the survey will affect its exposure to various
sources of error. The critical sources of error will differ depending on the survey objectives, topic, population,
and its methods of sampling and data collection.
TYPES AND SOURCES OF ERROR
When we work with survey data, we typically want our data to provide an accurate representation of the
broader population. For example, assume that your friend is running for election to the local school district
board, and you agree to do a survey to help her learn what people think about the issues. If your data show
that 64% of the respondents want the school district to place more emphasis on reading skills, you want this
64% figure to be an accurate reflection of feelings among the population at large. There are three reasons why
it might not be:
Sampling Error
Sampling error refers to the fact that samples don’t always reflect a population’s true characteristics, even if
the samples are drawn using fair procedures. For example, if you flip a coin 10 times, then 10 more times,
then 10 more times, and so on, you won’t always get five “heads” out of 10 flips. The percentage of “heads” in
any given sample will be affected by chance variation. In the same way, if you ask 100 people whether the
school district should cut other programs to place more emphasis on reading, the percentage that answer
affirmatively will be affected by chance variation in the composition of the sample.
The level of sampling error is controlled by sample size. At the extreme, if the sample encompasses the
entire population (i.e., the sample is a complete census), sampling error is zero, because the sample equals
the population. More generally, as samples get larger and larger, the distribution of
possible sample outcomes gets tighter and tighter around the true population figure, as long as the samples are
not biased in some way. To put it another way, larger samples have less chance of producing results that are
uncharacteristic of the population as a whole. We expand upon these points in Chapters 5, 6, and 7 and show
how to set sample size based on acceptable levels of sampling error.
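The relationship between sample size and sampling error can be illustrated with a short simulation. This sketch is not from the text; it reuses the 64% figure from the school-district example above and treats it as the true population proportion:

```python
import random

def sample_proportion(true_p, n, rng):
    """Simulate asking n randomly selected people a yes/no question."""
    return sum(rng.random() < true_p for _ in range(n)) / n

rng = random.Random(42)
true_p = 0.64  # assumed true share favoring more emphasis on reading

spreads = {}
for n in (100, 1000, 10000):
    # Run the same survey 200 times and see how widely estimates scatter
    estimates = [sample_proportion(true_p, n, rng) for _ in range(200)]
    spreads[n] = max(estimates) - min(estimates)
    print(f"n={n:>5}: estimates range over an interval of {spreads[n]:.3f}")
```

Each tenfold increase in sample size shrinks the scatter by roughly a factor of the square root of 10, which is why sampling error, unlike sample bias, can be driven down simply by taking larger samples.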
Sample Bias
Sample bias refers to the possibility that members of a sample differ from the larger population in some
systematic fashion. For example, if you are conducting a survey related to a school bond election, and you limit
interviews to people with school children, then your results may not reflect opinions among voters at large; if
nothing else, people with school children may be more likely to support bonds to pay for school
improvements. Similarly, if your sample contains a disproportionately large percentage of older people because
you didn’t make the callbacks needed to find younger people, or if your sampling procedure underrepresents
married versus single people, then you are likely to have sample bias.
Sample bias can arise in three general ways. Coverage bias will occur if some segment of the population is
improperly excluded from consideration in the sample or is not available through the method employed in the
research. Conducting interviews near school grounds with parents as they arrive to pick up their children and
implicitly limiting the sample to people with school children is an example of potential coverage bias if the
population of interest is the broader electorate. Likewise, conducting a telephone survey implicitly limits the
sample to people with telephones, and conducting an online survey implicitly limits the sample to people with
online access, either of which will exclude some low-income households, resulting in potential coverage bias if
these households differ from the broader population on the items being measured.
Selection bias will occur if some population groups are given disproportionately high or low chances of
selection. For example, if you conduct telephone interviews using conventional household landlines, and you
take the first adult to answer the phone, then married people will only have one chance in two of being
selected within their household, while single people have one chance in one, and married people will thus tend
to be underrepresented in a sample of possible voters.
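One standard remedy for unequal within-household selection probabilities, taken up with the other weighting adjustments in Chapter 12, is to weight each respondent by the inverse of his or her chance of selection. The sketch below uses hypothetical data: under first-adult-to-answer selection, an adult in a k-adult household has roughly a 1-in-k chance of selection, so the design weight is k.

```python
# Hypothetical respondents: (supports_bond, eligible_adults_in_household)
respondents = [
    (True, 1), (True, 1), (True, 1),    # single-adult households
    (False, 2), (False, 2), (True, 2),  # two-adult households
]

def weighted_support(data):
    """Weight each respondent by household size, i.e., by the inverse
    of a 1-in-k chance of being the first adult to answer."""
    total = sum(adults for _, adults in data)
    favorable = sum(adults for supports, adults in data if supports)
    return favorable / total

unweighted = sum(supports for supports, _ in respondents) / len(respondents)
weighted = weighted_support(respondents)
print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
```

Because the selection method overrepresents single adults, who in this made-up data favor the bond, the unweighted figure (about 0.67) overstates support relative to the weighted figure (about 0.56).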
Finally, even if the sample is fairly drawn, nonresponse bias will occur if failure to respond is
disproportionate across groups. For example, if you send e-mails inviting people to click through to a survey
on issues related to the local schools, people who are less interested in those issues are probably less likely to
respond. If one of the topics on the survey concerns willingness to pay higher taxes to support the schools,
there may be substantial differences between respondents and nonrespondents on this topic, resulting in
nonresponse bias.
Unlike sampling error, sample bias is not controlled by sample size. Increasing the sample size does nothing
to remove systematic biases. Rather, sample bias is controlled by defining the population of interest prior to
drawing the sample, attempting to maximize population coverage, selecting a sample that fairly represents the
entire population, and obtaining data from as much of the selected sample as possible.
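The point that larger samples do not cure bias can be seen in a small simulation with illustrative numbers (not from the text): suppose parents with school children support the bonds at 80%, the full electorate supports them at 62%, and the sampling method reaches only parents.

```python
import random

rng = random.Random(7)

def biased_estimate(n):
    """Sample only parents with school children, who support at 80%."""
    return sum(rng.random() < 0.80 for _ in range(n)) / n

true_p = 0.62  # assumed support in the full electorate

for n in (100, 10000):
    est = biased_estimate(n)
    print(f"n={n:>5}: estimate {est:.3f} vs. true {true_p:.2f}")
```

As n grows, the estimates settle near 0.80, not 0.62: the added sample size buys precision, so a biased design only becomes more precisely wrong.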
Nonsampling Error
Nonsampling error consists of all error sources unrelated to the sampling of respondents. Sources of
nonsampling error include interviewer error related to the administration of the survey, response error related
to the accuracy of response as given, and coding error related to the accuracy of response as recorded. All of
these may result in either random or systematic errors in the data. If random, they will add to random
sampling error to increase the level of random variation in the data. If systematic, they will be a source of bias
in the data, much like sample bias.
One source of interviewer error is cheating; the interviewer fails to administer the questionnaire, or portions
of the questionnaire, and simply fabricates the data. Cheating is controlled through some form of validation,
in which a supervisor or a third party verifies that the interview was conducted and key questions were asked.
This is typically done by telephone; when the interview is conducted, whether face-to-face or by phone, the
respondent is asked for a telephone number and told that a supervisor may call to verify the interview for
quality control purposes. Typically, 10% to 20% of interviews are validated. For surveys that are done on
location, validation may be done on site by a supervisor. For telephone surveys that are conducted in
centralized telephone interviewing facilities, validation may be replaced by monitoring interviews as they are
conducted, so the interviewer never knows when the supervisor will be listening in.
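Selecting the validation subsample can be as simple as drawing a random fraction of completed interview IDs. A minimal sketch, with an illustrative 15% rate and ID range:

```python
import random

def select_for_validation(interview_ids, rate=0.15, seed=0):
    """Randomly choose a fraction of completed interviews for callback
    validation by a supervisor or third party."""
    rng = random.Random(seed)
    k = max(1, round(rate * len(interview_ids)))
    return sorted(rng.sample(interview_ids, k))

completed = list(range(1, 201))  # 200 completed interviews
to_verify = select_for_validation(completed)
print(f"{len(to_verify)} interviews flagged for validation callbacks")
```

Drawing the subsample at random, rather than letting interviewers know in advance which cases will be checked, is what gives validation its deterrent value.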
Interviewer error also may stem from question administration error, in which the interviewer does
something other than read the question as intended, or probing error. Regarding errors in question
administration, the interviewer may skip a question that should not be skipped, ask a question that should be
skipped, omit part of the question, misread the question, add to the question, or add some vocal inflection or
body language that conveys a point of view. Any of these actions, of course, may influence the response. With
respect to probing, if the respondent expresses confusion about the question or gives an inadequate answer,
interviewers are usually instructed to give neutral probes that do not presume an answer or convey a point of
view; for example:
• If the respondent says he or she did not follow the question, simply reread it.
• If the respondent asks about the meaning of some term, read a preset definition or say “whatever it means
to you.”
• If the respondent gives an answer to an open-ended question that is insufficiently specific, ask, “Can you
be more specific?” or “When you say X, what do you mean?”
• If the respondent gives an answer to a closed-ended question that does not match the categories, ask,
“Would you say [reread categories]?”
If the interviewer fails to probe an answer that should be probed, probes in a manner that changes the
question, presumes a certain answer, or leads the respondent toward a certain answer, then any of these
actions may influence the quality of response.
Question administration error and probing error are controlled through interviewer training. When first
hired, interviewers receive training that covers general principles such as “ask the question as written” and
when and how to probe. Subsequently, interviewers are given specific instructions and training for any survey
that they work on. This training normally includes at least one practice interview conducted under a
supervisor’s direction. The better the training, the lower the exposure to errors of this type. Also, data
collection may be spread across multiple interviewers, so any idiosyncrasies among interviewers manifest
themselves as variance across interviewers rather than bias from a single data source.
The next general source of nonsampling error is response error. Response error may occur for at least three
different reasons. Comprehension error will occur if the respondent does not understand a question correctly
or if different respondents understand it in different ways. For example, in a survey of school district issues, if
you ask respondents what priority they would give to programs for the development of reading skills, will they
know what you mean by programs and skills? Will some of them think about instruction for preschoolers while
others think about older students with reading deficiencies? Will this affect their answers? Comprehension
errors often stem from questions that are poorly thought out or poorly written, and they are controlled
through careful question design as well as pretesting the questionnaire to ensure that respondents understand
the questions as intended.
Knowledge error may occur if respondents do not know the answer to the question, or cannot recall it
accurately, but still provide an answer to the best of their ability. For example, if you ask parents how often
their children fail to eat all of their lunches at school, will parents know the answer? If you ask parents how
many hours per evening their children have spent on homework over the past month, will they be able to
recall accurately over this time frame? As with other forms of nonsampling error, knowledge errors may be
either random or systematic and hence a source of either increased variance or bias in the data. These errors
are generally controlled by getting the right respondent, screening for knowledge if appropriate, writing
questions that are realistic about what people might know, and matching the time frame of the question to a
plausible recall period for the phenomenon in question.
Reporting error may occur if respondents hesitate to provide accurate answers, perhaps because certain
responses are socially desirable or undesirable, or perhaps because respondents wish to maintain privacy. For
example, if you ask parents to report their children’s grades, they may be tempted to give answers that are
higher than the truth. If you ask parents to report how often their children have been given disciplinary
actions at school, they may underreport. Reporting errors may be random but often are a source of systematic
bias for sensitive questions. These errors are controlled in a variety of ways, including professionalism in the
survey to convince respondents that you care whether they answer but not what they answer, or using self-
report methods so the respondent does not have to present himself or herself to an interviewer.
The final source of nonsampling error is coding error. For open-ended questions, the coder may
misinterpret the answer and assign the wrong numerical code to be entered in the data. For categorical
questions, the interviewer (or respondent) may mark the wrong category. Also, for paper questionnaires,
interviewers or coders may select the correct code or category, but at the data entry stage, someone may enter
data incorrectly. To control such errors, open-ended questions may be assigned to at least two coders to
identify and resolve inconsistencies. Likewise, data may be double-entered to identify and resolve
inconsistencies. Also, questionnaires are edited to verify that the answers appear to be correctly recorded and
to check for interitem consistency where applicable (this process is automated for computer-based
questionnaires). In some cases, respondents may be recontacted to resolve apparent errors or inconsistencies.
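The double-entry check described above amounts to comparing two independent passes over the same questionnaires and flagging any disagreement for adjudication. A sketch with hypothetical codes:

```python
# Two independent data-entry passes over the same paper questionnaires:
# respondent ID -> code entered for one open-ended question
entry_a = {101: 3, 102: 1, 103: 4, 104: 2, 105: 5}
entry_b = {101: 3, 102: 1, 103: 7, 104: 2, 105: 5}

def discrepancies(a, b):
    """Return the respondent IDs whose codes disagree between passes."""
    return sorted(rid for rid in a.keys() | b.keys() if a.get(rid) != b.get(rid))

to_adjudicate = discrepancies(entry_a, entry_b)
print(f"records needing review: {to_adjudicate}")  # respondent 103 disagrees
```

The same comparison logic serves for two coders of open-ended answers: only the records on the discrepancy list need to be revisited against the original questionnaire.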
Overall, nonsampling error is controlled, to the extent possible, by doing a good job of training and
supervising interviewers, using good questionnaire design, and exercising good control over coding and data
entry.