An Experimental Comparison of Web and Telephone Surveys
SCOTT FRICKER
MIRTA GALESIC
ROGER TOURANGEAU
TING YAN
SCOTT FRICKER is a psychologist at the Bureau of Labor Statistics and a graduate student at the Joint
Program in Survey Methodology, University of Maryland. ROGER TOURANGEAU is a research professor
at the Institute for Social Research, University of Michigan, and director of the Joint Program in
Survey Methodology, University of Maryland. MIRTA GALESIC and TING YAN are graduate students at
the Joint Program in Survey Methodology, University of Maryland. The work reported here was
conducted as part of the Joint Program in Survey Methodology Practicum. We are grateful to Sarah
Dipko, who helped direct the Practicum, to the students in that class, and to the National Science
Foundation for its support of the study. We are especially grateful to Robert Bell and Jeri Mulrow at
the National Science Foundation for their help in designing the study and to Frauke Kreuter and
Carolina Casas-Cordero for their comments on an earlier draft of the paper. Finally, we thank
Chintan Turakhia and Dean Williams at Schulman, Ronca, and Bucuvalas, Inc., for their direction of the
study at SRBI. The authors contributed equally to the research and are listed in alphabetical order.
Address correspondence to Roger Tourangeau; e-mail: RTourang@survey.umd.edu.
Introduction
Traditionally, surveys have been carried out using three main methods of data
collection—face-to-face interviews, telephone interviews, and mail question-
naires. Over the last 10 years, the picture has changed sharply, as a number of
new methods of computer administration have emerged and have started to sup-
plant the traditional trio. Among the new modes are audio computer-assisted
self-interviewing (ACASI), which is increasingly popular in surveys that have
traditionally used face-to-face interviewing; interactive voice response (IVR), an
automated method for collecting data by telephone; and Web surveys, which
resemble traditional mail surveys in many ways (as Dillman 2002 has noted). All
three of the new modes combine the power of computer administration (with its
capacity for automated routing, online checks for problematic answers, and so
on) with the advantages of self-administration (such as the elimination of inter-
viewer variance and improved reporting of sensitive information).
Despite their similarities to mail surveys (for example, their reliance on
visual presentation of the questions), Web surveys are often considered a less
costly alternative to telephone interviews (see, for example, Chang and
Krosnick 2003; Schonlau et al. 2003; Taylor 2000). The study presented here
examines the characteristics of a Web survey, comparing it to a telephone sur-
vey. The questions in the survey concerned knowledge of and attitudes toward
science and technology and were administered as part of the National Science
Foundation’s (NSF) periodic efforts to measure public attitudes and under-
standing of science and technology. Although there have been previous com-
parisons between Web and telephone surveys, most of them have not made
their way into print. In addition, earlier comparisons have focused primarily
on attitude questions (Bason 2000; Chang and Krosnick 2003; although see
Schonlau et al. 2003, for an exception). The NSF questionnaire includes a
broader range of items, including questions that assess knowledge of basic
scientific facts. Finally, our comparison focuses on persons with Web access,
contrasting Internet users who were interviewed by telephone with Internet
users who completed the same questionnaire via the Web. This eliminates the
need to adjust statistically for population differences between the telephone
and Web respondents.
Telephone surveys became a popular mode of data collection once coverage
of the population reached acceptable levels and efficient sampling methods,
such as random-digit dialing, were developed.
Measurement Errors
Whatever their problems with various types of nonobservation errors, Web
surveys may have some advantages over telephone surveys in terms of obser-
vation errors, or errors arising from the measurement process. Prior investiga-
tions of mode differences in reporting suggest three broad hypotheses about
why different modes may produce different levels of measurement error (for a
review, see Tourangeau, Rips, and Rasinski 2000, chap. 10). First, different
methods of data collection differ in how much privacy they offer respondents,
and this affects how respondents answer questions about sensitive topics.
Respondents are more likely to report a range of embarrassing behaviors in
self-administered surveys than in interviewer-administered surveys, presum-
ably because self-administration reduces social desirability concerns. Web
surveys may share this advantage with earlier methods of self-administration.
Consistent with this expectation, Chang and Krosnick (2003) found that
respondents in their Web surveys gave fewer socially desirable answers than
respondents in an RDD telephone sample. For example, white Web respondents
were more likely than their telephone counterparts to express opposition to
government programs to help black Americans.
Second, different methods of data collection may encourage survey “satis-
ficing” to different extents or promote different forms of satisficing (on survey
satisficing, see Krosnick 1991). For example, Web surveys that use grids may
encourage “nondifferentiation,” or giving similar answers to each item in a set
of related questions (Tourangeau, Couper, and Conrad 2004, experiment 6).
By eliminating interviewers and the motivation they provide, Web surveys
may encourage satisficing. Or Web surveys may reduce satisficing since they
let respondents complete the instrument when they want to and at their own
pace. This may lead to higher levels of motivation and lower levels of distrac-
tion than in a comparable telephone survey. The study by Chang and Krosnick
(2003) also lent some indirect support for this hypothesis. They found that
Web surveys produced less satisficing behavior and greater concurrent and
predictive validity than a survey administered by telephone.
A third reason that different modes may produce different levels of mea-
surement error is that they utilize different sensory channels. Web surveys rely
on the visual channel, whereas telephone surveys rely on the auditory channel. Our
hypothesis here was that the knowledge items would be easier online, where
the questions are presented visually, than over the telephone. In addition, we
thought that this advantage for the Web version would be greatest with the
most burdensome questions. The second issue was whether completion
times would vary by mode. We expected Web respondents to complete the
questionnaire at a more leisurely rate than the telephone respondents since
they could set their own pace. A final set of analyses looked at various indi-
cators of survey satisficing. In general, we expected to see less satisficing
among the Web respondents, partly because the Web respondents would
take more time. Still, we thought that Web respondents might be more prone
to one form of satisficing—nondifferentiation in their answers to related
items, especially when presentation of the items in a grid underscored their
similarity. We also explored two additional issues more briefly—the digital
divide (that is, the differences between respondents with Internet access and
those without) and the impact of nonresponse on comparisons across the
groups.
Method
The 2003 Practicum Survey was a national telephone survey, conducted by
Schulman, Ronca & Bucuvalas, Inc. (SRBI), on behalf of the Joint Program
in Survey Methodology (JPSM) and the National Science Foundation (NSF).
The survey drew questions from the NSF’s Survey of Public Attitudes
Toward and Understanding of Science and Technology (SPA). The NSF has
conducted the SPA periodically since the 1970s to monitor public attitudes
and understanding of science concepts and the scientific process. The
purpose of the Practicum Survey was to gain a better understanding of the
effects of different data collection modes on responses to the survey and to
explore several new measures of scientific knowledge that were being con-
sidered for future NSF surveys. Responses were obtained through telephone
interviews and a self-administered Web questionnaire. SRBI carried out the
data collection from July through September 2003.
SAMPLE DESIGN
The study used a list-assisted RDD sample of 12,900 numbers obtained from
Survey Sampling, Inc. (SSI). The numbers were a systematic
sample from blocks with at least three residential listings, selected with equal
probability across all eligible blocks. (A block consisted of a set of 100 possi-
ble telephone numbers that share their first eight digits. For instance, the num-
bers from 301-314-7900 to 301-314-7999 make up one block.) Once a block
was selected, a randomly generated two-digit number in the range of 00–99 was
appended to the block to form a 10-digit number. The sample was prescreened
to identify and drop unassigned and business numbers. The target population
for the sample was the adult (18 or older) noninstitutional, nonmilitary popu-
lation of the United States in the contiguous 48 states. The survey made no
provisions for non-English-speaking respondents or respondents using a tele-
phone device for the deaf; these portions of the population are not represented
in the survey.
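To make the sampling scheme concrete, the sketch below generates numbers in the way just described: draw eligible blocks (8-digit prefixes with at least three residential listings) and append a random two-digit ending. It is only an illustration under stated assumptions; the block list, sample size, and the use of simple random rather than systematic block selection are assumptions, and the actual sample was drawn by SSI.

```python
import random

def draw_rdd_numbers(eligible_blocks, n_numbers, seed=2003):
    """Illustrative list-assisted RDD draw: pick blocks with equal probability
    and append a random two-digit suffix (00-99) to form 10-digit numbers.

    eligible_blocks: 8-digit prefixes for blocks with >= 3 residential listings.
    NOTE: the actual study used a systematic sample of blocks; simple random
    selection is used here only to keep the sketch short.
    """
    rng = random.Random(seed)
    numbers = []
    for _ in range(n_numbers):
        block = rng.choice(eligible_blocks)   # equal probability across blocks
        suffix = rng.randrange(100)           # random ending in 00-99
        numbers.append(f"{block}{suffix:02d}")
    return numbers

# Hypothetical example using the block from the text (301-314-79xx)
print(draw_rdd_numbers(["30131479"], n_numbers=3))
```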
DATA COLLECTION
1. We carried out extensive analyses that compared the respondents who completed the survey
early on and got the smaller incentives with those who completed it after August 20, 2003, and
got the larger incentives. Within each mode group, we observed no significant differences
between those who got the smaller and larger incentives by sex, race, Hispanic background, age,
or educational attainment. Similarly, when we combined the different mode groups, we found
only one difference in the demographic makeup of the samples who were offered smaller and
larger amounts—those who completed the survey early (having been offered the smaller incen-
tives) tended to have more education than those who completed it later. We also compared the
different incentive groups on science knowledge scores, time taken to complete the survey, and
several measures of data quality. In some 22 analyses, we found only two significant differences
on any of these variables, and the conclusions reported below do not change when the incentive
effects are taken into account. Still, the fact remains that our Web-telephone comparisons actually
compare respondents who got a relatively small incentive (either $5 or $10) and completed the
questions over the telephone with respondents who got a larger incentive ($20 or more) and
completed the questions via the Web. We cannot separate the effects of mode from those of the
level of incentive.
QUESTIONNAIRES
For the most part, the attitude items used 5-point response scales.2 The knowledge
questions were a mix of multiple choice, true/false, and open-ended items. We
derived three basic indices based on these questions—support for scientific
research (based on answers to six questions), proscience attitudes (based on
four questions about the role of science in everyday life), and scientific
knowledge (based on responses to 18 items).
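As a rough sketch of how the three indices could be assembled from item-level data (the column names and scoring conventions below are hypothetical, since the individual items are not listed here):

```python
import pandas as pd

def build_indices(responses: pd.DataFrame) -> pd.DataFrame:
    """Compute the three summary indices from (hypothetical) item columns.

    support_1..support_6 : 5-point items on support for scientific research
    pro_1..pro_4         : 5-point items on the role of science in everyday life
    know_1..know_18      : knowledge items scored 1 (correct) / 0 (incorrect)
    """
    support_items = [f"support_{i}" for i in range(1, 7)]
    pro_items = [f"pro_{i}" for i in range(1, 5)]
    know_items = [f"know_{i}" for i in range(1, 19)]

    indices = pd.DataFrame(index=responses.index)
    indices["support_research"] = responses[support_items].mean(axis=1)
    indices["proscience"] = responses[pro_items].mean(axis=1)
    indices["knowledge"] = responses[know_items].sum(axis=1)  # number correct of 18
    return indices
```

Mean scores for the attitude indices and a count of correct answers for knowledge are one common convention; the paper does not specify its exact scoring rules.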
Results
Our analysis begins with two preliminary issues. First, we briefly examine
the differences between those with Internet access and those without it. These
comparisons are based on the telephone interviews (so that differences
between the two types of respondents are not confounded by differences in
the mode of data collection). Next, we examine differences in response rates
to the main interview among those who reported Internet access.
Despite our offering larger incentives to those who were assigned to com-
plete the survey online, we expected a lower response within that group. We
also compare the demographic makeup of the two mode groups. These
nonresponse analyses are necessary so we can determine whether any report-
ing differences by mode can be traced to differences in the makeup of the mode
groups.
Our main analyses look at how substantive estimates from the survey—
average knowledge scores and attitudes to science and technology—differ for
the two samples of Internet users. We also compare completion times for the
Web and telephone respondents and various indicators of response quality.
Again, we compare the Internet users who completed the survey under the two
modes, to avoid confounding differences in the populations with differences
by mode. As we noted in footnote 1, we find only scattered differences in
demographic characteristics, knowledge scores, completion times, and response
quality between those who completed the survey early in the field period (and
got a relatively small incentive) and those who completed it later (and
received a larger incentive), and we ignore this variable in presenting the
results.
2. In fact, the attitude questions were the subject of two randomized experiments. For approxi-
mately half of the respondents, the items offered only four response options, omitting the middle
response category. The omission of the middle category did not have much effect on the means for
these items, and we ignore this experiment in presenting our results here. In addition, among
respondents interviewed over the telephone, about half received the response options in two sepa-
rate questions. The first assessed the overall direction of the respondent’s view (“Do you agree or
disagree with the statement?”) and the second, the extremity (“Do you strongly agree or some-
what agree?”) (Krosnick and Berent 1993). The remainder received questions in the conventional
way, with all four (or five) response options presented with the question itself. Again, we found
few differences between the two item formats and combine the results here.
[Table 1. Web users (N = 530) versus non-users (N = 472): weighted and unweighted estimates, with significance tests.]

[Table 2. Sample sizes and response rates. NOTE.—"No contact" includes "no answer," "foreign language," "hearing or other health problems," "away for duration," and "call blocking." The rates for the main questionnaire are conditional on completion of the screener.]
Did this differential rate of nonresponse affect the composition of the two
samples of Internet users? Table 3 shows the demographic makeup of the two
samples and compares both to data from the CPS. (The figures from the
Practicum Survey in table 3 are unweighted; the weights attempt to compen-
sate for the effects of nonresponse and bring the figures for the two samples of
Internet users even closer.) Despite the big differences in the response rates,
we observe no significant differences in the makeup of the two samples of
Internet users by sex, race, Hispanic origin, education, or age. We also com-
pared the two samples on the number of college-level science courses they
reported and observed no difference between the groups.
The first column of CPS figures is for adult (18 and older) Internet users
only and is based on data from the September 2001 CPS, which included
items to identify Internet users; the final column in the table gives 2003 CPS
figures for the entire adult U.S. population. Both Practicum samples give a
reasonable picture of the Web population. The average absolute deviation
between the Web sample percentages and the CPS percentages for Web
respondents is 5.0 percent; the comparable figure for the telephone sample is
4.2 percent. (Because the CPS figures for the Internet users are from Septem-
ber 2001, they may be a bit out of date.) By contrast, neither sample represents
the general U.S. population very well; both samples depart by about 9 percent
on average from the 2003 CPS percentages for the U.S. adult population.
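The "average absolute deviation" used above is simply the mean of the absolute percentage-point gaps between the sample and CPS figures across the demographic categories compared; a minimal sketch with made-up numbers:

```python
def avg_abs_deviation(sample_pcts, benchmark_pcts):
    """Mean absolute difference, in percentage points, across categories."""
    return sum(abs(s - b) for s, b in zip(sample_pcts, benchmark_pcts)) / len(sample_pcts)

# Hypothetical category percentages (e.g., shares female, white, college-educated, under 35)
web_sample = [52.0, 78.0, 35.0, 30.0]
cps_web_users = [50.0, 74.0, 31.0, 36.0]
print(avg_abs_deviation(web_sample, cps_web_users))  # 4.0 with these made-up figures
```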
COMPLETION TIMES
Among the Internet users, we expected those who completed the questionnaire
via the Web to take longer than those who completed the questionnaire over the
telephone; Web respondents could control the pace at which they did the survey,
and we expected them to complete the questions more slowly than those who
responded at a pace set by the interviewer.
[Figure 1. Mean percent correct, by mode (telephone versus Web) and item format (true/false, multiple choice, open-ended).]
Figure 2. Mean completion times (in seconds) for Web users, by mode and
age group.
3. All of the results presented in this section are unweighted. In a separate analysis we compared
the completion times for those who reported no Internet access to the telephone respondents with
Internet access, controlling for the main effects of the demographic variables (sex, age, race,
Hispanic origin, education, and number of college science courses); there was no difference in the
overall completion times for the two groups of telephone respondents. The overall mean for the
Web nonusers was 20.8 minutes (versus 20.4 for the Web users who completed the main
questionnaire by telephone).
Put another way, the relation between completion times and age seemed far
steeper among the Web users who completed the survey online than among
those who did the interview by telephone. Our analysis uncovered one other
significant interaction effect. On the telephone, male Web users completed the
survey about 27 seconds faster than female Web users, but on the Web, the
females were faster by about 100 seconds on average. We are not sure what to
make of this interaction of mode and sex.
The overall time difference by mode amounted to more than three minutes
on average, but virtually all of this difference is accounted for by two sections
of the questionnaire that included several open-ended knowledge questions.
(In fact, the Web-telephone difference for these two sections is a little more
than four minutes on average.) The Web respondents had to type in their
answers to the open-ended questions, and apparently that was a lot slower
than simply saying them aloud to a telephone interviewer.4 For one section of
the questionnaire that included only closed attitude items, the Web respon-
dents were actually significantly faster than their telephone counterparts
(4.4 minutes on average for the Web respondents versus 5.6 minutes for the
telephone respondents)—F(1,868) = 14.7, p < .001.
RESPONSE QUALITY
4. The open-ended answers were almost equally long in the two modes, averaging 16.4 words for
the Web users who completed the survey by telephone and 16.6 for the Web respondents. This
difference was not statistically significant.
Across the attitude items we looked at, only 14.6 percent of the Web respondents gave DK responses to
any of them; the corresponding figure for the telephone respondents was 52.3
percent. The difference was highly significant in a logit analysis that included
the demographic control variables (χ2 = 139.6, df = 1, p < .001). The Web ver-
sion of the questionnaire did not display DK options, and when respondents left
an item blank, it displayed a confirmatory screen that gave the respondent a sec-
ond chance to enter a response. Telephone interviewers accepted DK responses
when respondents volunteered them without any probing.
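As a sketch of the kind of logit analysis described above (whether a respondent gave any DK response, predicted from mode plus demographic controls), the following uses simulated data; the data frame and variable names are hypothetical stand-ins, and the real analysis used the survey's own control variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the respondent-level file (one row per Web-user respondent)
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "mode": rng.choice(["web", "telephone"], size=n),
    "sex": rng.choice(["male", "female"], size=n),
    "age": rng.integers(18, 90, size=n),
    "educ": rng.choice(["hs_or_less", "some_college", "ba_plus"], size=n),
})
# Make DK responses more common by telephone, mirroring the pattern in the text
df["any_dk"] = rng.binomial(1, np.where(df["mode"] == "telephone", 0.5, 0.15))

# Logit model: mode effect on giving any DK response, with demographic controls
model = smf.logit("any_dk ~ C(mode) + C(sex) + age + C(educ)", data=df).fit()
print(model.summary())  # the C(mode) coefficient is the Web-telephone contrast
```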
There were no differences in the proportion of acquiescent responses between
those who responded to the items on the Web and those who answered by tele-
phone (an average of 62.9 percent of the answers were in the acquiescent direc-
tion for the Web respondents versus 62.3 for the telephone respondents). If we
restrict the analysis to the items that used an agree/disagree format, there was still
no apparent difference between the two samples of Web users in acquiescence.
On all four batteries of items, the Web users who completed the questions
online gave less differentiated responses to the items than the Web users who
responded to the same questions over the telephone. For all four batteries, these
differences were significant after controlling for the demographic differences
between the two samples. The Web respondents were also significantly more
likely to give identical answers to every item in at least one of the four batter-
ies than the telephone respondents (21.8 percent of the Web respondents gave
“straight line” responses to at least one of the four batteries versus 14.2 per-
cent of the telephone respondents). The difference was significant in a logit
analysis that included the demographic controls (χ2 = 6.56, df = 1, p < .05).
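A minimal sketch of how the response-quality indicators discussed above (acquiescence, nondifferentiation, straight-lining) might be scored for a battery of items; the battery columns, scale coding, and differentiation index below are all assumptions, since the paper does not give its exact formulas.

```python
import pandas as pd

def acquiescence_rate(df, agree_items, agree_codes=(4, 5)):
    """Share of answers in the agree direction (assumes 5-point scales, 5 = strongly agree)."""
    return df[agree_items].isin(agree_codes).mean(axis=1)

def straight_lined(df, battery):
    """True where the respondent gave the identical answer to every item in the battery."""
    return df[battery].nunique(axis=1) == 1

def differentiation(df, battery):
    """A simple differentiation score: share of distinct responses used in the battery."""
    return df[battery].nunique(axis=1) / len(battery)

# Hypothetical 4-item battery on 5-point scales
battery = ["q1", "q2", "q3", "q4"]
answers = pd.DataFrame({"q1": [3, 5, 2], "q2": [3, 4, 2], "q3": [3, 5, 2], "q4": [3, 1, 2]})
print(straight_lined(answers, battery).tolist())    # [True, False, True]
print(differentiation(answers, battery).tolist())   # [0.25, 0.75, 0.25]
```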
Discussion
Our analyses were organized around five hypotheses. First, we compared Web
users with nonusers, expecting to find the usual differences that constitute the
“digital divide”; that is, we expected that the Web users would be younger,
less racially diverse, and better educated than those without access to the Web.
The results are in line with these expectations. What our results add to the pre-
vious picture is that those with access to the Web expressed more support for
scientific research, reported more positive views about science, and scored
better on the science knowledge items (see table 1).
Second, in comparing the two samples of Web users, we expected higher
response rates to the telephone interview than to the Web survey. In line with
most previous studies, we got a much higher overall response rate in the tele-
phone interview than the Web survey—the response rates for the telephone
samples were nearly double those of the Web sample. Because all the respon-
dents were contacted initially by telephone and were offered incentives for
participating, we got a relatively high response rate even for the Web survey
(the overall response was over 20 percent, taking into account nonresponse to
the screener as well).
5. It is, of course, possible that Web respondents were able to look up some of the answers online
as they completed the survey. There are two reasons we do not think that happened. First, one of
the students in the practicum classified the questions according to how easy it was to look up the
answers online; this classification of the items reflected the experiences of several of his class-
mates, who tried to look up the answers. He found that the difference between Web and telephone
respondents was similar for the two groups of items; the Web respondents did just as much better
than the telephone respondents with the items for which it was difficult to look up the answers as
with those for which it was easy to find the answers online. Second, the Web-telephone difference
was especially marked for the open-ended items, which did not test simple facts that could have
been easily verified online.
mode effects and suggests that the combination of a slower pace and
visual administration of the questions offered by the Web survey may
have the most impact on respondents with less cognitive capacity. Younger
respondents may also have an advantage in general computer literacy and
typing skills that make the Web a more suitable mode of data collection
for them.
Our final hypothesis concerned several indirect indicators of data quality,
including item nonresponse, acquiescence, and nondifferentiation. The Web
users who completed the questions online were much less likely to leave items
blank than their counterparts who completed the survey over the telephone;
unlike the telephone interviewers, who accepted DK responses when respon-
dents volunteered them, the online survey prodded respondents who left the
attitude items blank. On the other hand, Web respondents also gave less dif-
ferentiated responses to batteries of questions that offered the same set of
response options. The two samples did not differ in their tendency to agree
with attitude statements.6 Together these findings offer only limited support to
the notion that Web data collection increases data quality relative to the tele-
phone (Chang and Krosnick 2003).
Any mode comparison study necessarily examines a blend of factors that
include intrinsic features of the modes being compared, more incidental
features that reflect the design choices involved in implementing each mode,
and features that are essentially unrelated to the modes of data collection
themselves but that nonetheless covary with them. In our study some aspects
of the contrast between the telephone and Web versions of the questionnaire
reflect inherent properties of the two modes. For instance, the telephone
respondents heard the questions and responded orally, whereas the Web
respondents read the questions and responded via the keyboard. These seem to
us to be fundamental features of the two modes. By contrast, the difference in
the treatment of “no opinion” responses reflected an incidental design deci-
sion on our part rather than an intrinsic feature of either mode. The Web ver-
sion of the questionnaire, in effect, prompted respondents for an answer
whenever they left a question blank, whereas the telephone interviewers
accepted “no opinion” responses without probing them. It was probably this
design decision rather than any inherent difference between the modes that
accounted for the higher rate of “no opinion” responses in the telephone con-
dition. Finally, we tried to equalize the response rates by mode group by offer-
ing a larger incentive to the cases assigned to complete the questions online.
There was still a substantial difference in response rates (97.5 percent of the
References
American Association for Public Opinion Research (AAPOR). 2000. Standard Definitions: Final
Disposition of Case Codes and Outcome Rates for Surveys. Lenexa, KS: AAPOR.
Bason, James J. 2000. “Comparison of Telephone, Mail, Web, and IVR Surveys.” Paper presented
at the annual meeting of the American Association for Public Opinion Research, Portland, OR.
Batagelj, Zenel, Katja Lozar Manfreda, Vasja Vehovar, and Metka Zaletel. 2000. “Cost and
Errors of Web Surveys in Establishment Surveys.” Paper presented at the International Confer-
ence on Establishment Surveys II, Buffalo, NY.
Best, Samuel J., Brian Krueger, Clark Hubbard, and Andrew Smith. 2001. “An Assessment of the
Generalizability of Internet Surveys.” Social Science Computer Review 19: 131–45.
Chang, LinChiat, and Jon Krosnick. 2003. “National Surveys via RDD Telephone Interviewing
vs. the Internet: Comparing Sample Representativeness and Response Quality.” Manuscript
under review.
Cook, Colleen, Fred Heath, and Russel L. Thompson. 2000. “A Meta-Analysis of Response Rates
in Web- or Internet-Based Surveys.” Educational and Psychological Measurement 60: 821–36.
Couper, Mick P. 2000. “Web Surveys: A Review of Issues and Approaches.” Public Opinion
Quarterly 64: 464–94.
———. 2002. “New Technologies and Survey Data Collection: Challenges and Opportuni-
ties.” Paper presented at the International Conference on Improving Surveys, Copenhagen,
Denmark.
Crawford, Scott, Sean McCabe, Mick P. Couper, and Carol Boyd. 2002. “From Mail to Web:
Improving Response Rates and Data Collection Efficiencies.” Paper presented at the Inter-
national Conference on Improving Surveys, Copenhagen, Denmark.
Dillman, Don A. 2002. “Navigating the Rapids of Change: Some Observations on Survey Meth-
odology in the Early 21st Century.” Draft of presidential address presented at the annual meeting
of the American Association for Public Opinion Research, St. Petersburg, FL.
Groves, Robert M. 1989. Survey Errors and Survey Costs. New York: John Wiley and Sons.
Gunn, Holly. 2002. “Web-Based Surveys: Changing the Survey Process.” First Monday 7(2).
http://www.firstmonday.dk/issues/issue7_12/gunn/ (accessed April 10, 2004).
Krosnick, Jon A. 1991. “Response Strategies for Coping with the Cognitive Demands of Attitude
Measures in Surveys.” Applied Cognitive Psychology 5: 213–36.
Krosnick, Jon A., and Matthew Berent. 1993. “Comparisons of Party Identification and Policy
Preferences: The Impact of Survey Question Format.” American Journal of Political Science
37: 941–64.
Lozar Manfreda, Katja, and Vasja Vehovar. 2002. "Survey Design Features Influencing Response Rates
in Web Surveys.” Paper presented at International Conference on Improving Surveys, Copen-
hagen, Denmark.
Miller, Jon D., and Rafael Pardo. 2000. "Civic Scientific Literacy and Attitudes to Science and
Technology: A Comparative Analysis of the European Union, the United States, Japan, and
Canada.” In Between Understanding and Trust: The Public, Science, and Technology,
ed. Meinolf Dierkes and Claudia Von Grote, pp. 81–129. Amsterdam: Harwood Academic
Publishers.
National Telecommunications and Information Administration (NTIA). 2000. Falling Through
the Net: Toward Digital Inclusion. Washington, DC: NTIA. Available online at http://
www.ntia.doc.gov/ntiahome/fttn00/contents00.html (accessed April 15, 2004).
———. 2002. A Nation Online: How Americans Are Expanding Their Use of the Internet. Wash-
ington, DC: NTIA. Available online at http://www.ntia.doc.gov/ntiahome/dn/ (accessed April 15,
2004).
Schonlau, Matthias, Beth J. Asch, and Can Du. 2003. “Web Survey as Part of a Mixed-Mode
Strategy for Populations That Cannot Be Contacted by E-Mail.” Social Science Computer
Review 21: 218–22.
Schonlau, Matthias, Kinga Zapert, Lisa P. Simon, Katharine Sanstad, Sue Marcus, John Adams,
Mark Spranca, Hongjun Kan, Rachel Turner, and Sandra Berry. 2003. “A Comparison between
Responses from a Propensity-Weighted Web Survey and an Identical RDD Survey.” Social
Science Computer Review 21: 1–11.
Taylor, Humphrey. 2000. “Does Internet Research Work?” International Journal of Market
Research 42: 51–63.
Thornberry, Owen T., and James T. Massey. 1988. “Trends in United States Telephone Coverage
across Time and Subgroups.” In Telephone Survey Methodology, ed. Robert M. Groves, Paul P.
Biemer, Lars E. Lyberg, James T. Massey, William L. Nicholls II, and Joseph Waksberg, pp. 25–49.
New York: John Wiley and Sons.
Tourangeau, Roger. 2004. “Survey Research and Societal Change.” Annual Review of Psychology
55: 775–801.
Tourangeau, Roger, Mick P. Couper, and Frederick G. Conrad. 2004. “Spacing, Position, and
Order: Interpretive Heuristics for Visual Features of Survey Questions.” Public Opinion Quar-
terly 68: 368–93.
Tourangeau, Roger, Lance J. Rips, and Kenneth Rasinski. 2000. The Psychology of Survey
Response. Cambridge: Cambridge University Press.
Waksberg, Joseph. 1978. “Sampling Methods for Random Digit Dialing.” Journal of the American
Statistical Association 73: 40–46.
Wiebe, Elizabeth F., Joe Eyerman, and John D. Loft. 2001. “Evaluating Nonresponse in a Web-
Enabled Survey on Health and Aging.” Paper presented at the annual meeting of the American
Association for Public Opinion Research, Montreal.