Public Opinion Quarterly, Vol. 69, No. 3, Fall 2005, pp. 370–392

AN EXPERIMENTAL COMPARISON OF WEB AND TELEPHONE SURVEYS

SCOTT FRICKER
MIRTA GALESIC
ROGER TOURANGEAU
TING YAN

Abstract We carried out an experiment that compared telephone and Web versions of a questionnaire that assessed attitudes toward sci-
ence and knowledge of basic scientific facts. Members of a random digit
dial (RDD) sample were initially contacted by telephone and answered a
few screening questions, including one that asked whether they had
Internet access. Those with Internet access were randomly assigned to
complete either a Web version of the questionnaire or a computer-
assisted telephone interview. There were four main findings. First,
although we offered cases assigned to the Web survey a larger incen-
tive, fewer of them completed the online questionnaire; almost all those
who were assigned to the telephone condition completed the interview.
The two samples of Web users nonetheless had similar demographic
characteristics. Second, the Web survey produced less item nonresponse
than the telephone survey. The Web questionnaire prompted respon-
dents when they left an item blank, whereas the telephone interviewers
accepted “no opinion” answers without probing them. Third, Web
respondents gave less differentiated answers to batteries of attitude items
than their telephone counterparts. The Web questionnaire presented
these items in a grid that may have made their similarity more salient. Finally, Web respondents took longer to complete the knowledge items, particularly those requiring open-ended answers, than the telephone respondents, and Web respondents answered a higher percentage of them correctly. These differences between Web and telephone surveys probably reflect both inherent differences between the two modes and incidental features of our implementation of the survey. The mode differences also vary by item type and by respondent age.

SCOTT FRICKER is a psychologist at the Bureau of Labor Statistics and a graduate student at the Joint Program in Survey Methodology, University of Maryland. ROGER TOURANGEAU is a research professor at the Institute for Social Research, University of Michigan, and director of the Joint Program in Survey Methodology, University of Maryland. MIRTA GALESIC and TING YAN are graduate students at the Joint Program in Survey Methodology, University of Maryland. The work reported here was conducted as part of the Joint Program in Survey Methodology Practicum. We are grateful to Sarah Dipko, who helped direct the Practicum, to the students in that class, and to the National Science Foundation for its support of the study. We are especially grateful to Robert Bell and Jeri Mulrow at the National Science Foundation for their help in designing the study and to Frauke Kreuter and Carolina Casas-Cordero for their comments on an earlier draft of the paper. Finally, we thank Chintan Turakhia and Dean Williams at Schulman, Ronca, and Bucuvalas, Inc., for their direction of the study at SRBI. The authors contributed equally to the research and are listed in alphabetical order. Address correspondence to Roger Tourangeau; e-mail: RTourang@survey.umd.edu.

doi:10.1093/poq/nfi027

© The Author 2005. Published by Oxford University Press on behalf of the American Association for Public Opinion Research. All rights reserved. For permissions, please e-mail: journals.permissions@oupjournals.org.

Introduction
Traditionally, surveys have been carried out using three main methods of data
collection—face-to-face interviews, telephone interviews, and mail question-
naires. Over the last 10 years, the picture has changed sharply, as a number of
new methods of computer administration have emerged and have started to sup-
plant the traditional trio. Among the new modes are audio computer-assisted
self-interviewing (ACASI), which is increasingly popular in surveys that have
traditionally used face-to-face interviewing; interactive voice response (IVR), an
automated method for collecting data by telephone; and Web surveys, which
resemble traditional mail surveys in many ways (as Dillman 2002 has noted). All
three of the new modes combine the power of computer administration (with its
capacity for automated routing, online checks for problematic answers, and so
on) with the advantages of self-administration (such as the elimination of inter-
viewer variance and improved reporting of sensitive information).
Despite their similarities to mail surveys (for example, their reliance on
visual presentation of the questions), Web surveys are often considered a less
costly alternative to telephone interviews (see, for example, Chang and
Krosnick 2003; Schonlau et al. 2003; Taylor 2000). The study presented here
examines the characteristics of a Web survey, comparing it to a telephone sur-
vey. The questions in the survey concerned knowledge of and attitudes toward
science and technology and were administered as part of the National Science
Foundation’s (NSF) periodic efforts to measure public attitudes and under-
standing of science and technology. Although there have been previous com-
parisons between Web and telephone surveys, most of them have not made
their way into print. In addition, earlier comparisons have focused primarily
on attitude questions (Bason 2000; Chang and Krosnick 2003; although see
Schonlau et al. 2003, for an exception). The NSF questionnaire includes a
broader range of items, including questions that assess knowledge of basic
scientific facts. Finally, our comparison focuses on persons with Web access,
contrasting Internet users who were interviewed by telephone with Internet
users who completed the same questionnaire via the Web. This eliminates the
need to adjust for population differences between the telephone and Web
respondents statistically.
Telephone surveys became a popular mode of data collection once coverage
of the population reached acceptable levels and efficient sampling methods
were developed (Waksberg 1978). Recent social and technological develop-
ments are now making it more difficult and costly to conduct telephone sur-
veys (for a review of these trends, see Tourangeau 2004). These include
devices that make it harder to reach people by telephone (such as answering
machines, caller ID, and call blocking), which lower response rates and
drive up data collection costs. Thus, the emergence and rapid diffusion of
the Internet have made it an attractive alternative to telephone administra-
tion, and the Web is becoming an increasingly popular vehicle for surveys.
Web surveys are appealing alternatives to telephone surveys partly because
they do not require interviewers and thus offer the potential for dramatic
cost savings. In addition, the Web would seem to offer attractive measure-
ment properties. Like ACASI and IVR, Web surveys combine the benefits
of automation with those of self-administration; further, Web surveys may
have some unique advantages, such as the use of interactive help and com-
plex audio and video displays. All in all, it is easy to see why more organiza-
tions are using the Web to conduct surveys. Still, the error properties of Web
surveys are not well understood. Our study attempts to look at a number of
potential sources of error in a Web survey, but it mainly focuses on mea-
surement. We begin by briefly reviewing the potential sources of error that
may affect estimates from Web surveys.

Nonobservation Errors in Web Surveys


One particularly troubling set of problems involves what Groves (1989) refers
to as “nonobservation” errors, or errors that arise from taking observations only
from a portion of the population. Web surveys have potentially big problems
from all three major sources of nonobservation error—coverage, sampling,
and nonresponse.
Compared to telephone or area probability samples, Web surveys offer
relatively poor coverage of the general household population. The most
recent published estimates suggest that Internet users still constitute only
about 55 percent of the U.S. population (National Telecommunications and
Information Administration [NTIA] 2002). By contrast, telephone coverage
is 94 percent, according to the March 2001 Current Population Survey
(CPS). And, although household Internet access is growing rapidly, a “digi-
tal divide” remains. Persons who do not have Internet access are more likely
to be black or Hispanic, poor, poorly educated, elderly, or disabled than per-
sons with Internet access (NTIA 2000). Current trends in the penetration of
computer and Internet use in U.S. households may reduce coverage prob-
lems in the future, but for now the potential for large coverage bias still
exists.
Compounding this problem is the absence of a good frame for selecting
samples of Internet users. Except in special populations (like employees at
specific companies or students at a university), there are no comprehensive
lists of Internet users and no methods for sampling Internet users analogous to
the methods used to generate random digit dial (RDD) samples (Best et al.
2001; Couper 2002; Gunn 2002). For many researchers, the inability to select
probability samples of Internet users eliminates Web surveys from serious
consideration as a stand-alone method of data collection. One solution to this
problem has been using some other method for generating a probability sam-
ple (such as RDD) and then trying to persuade the members of the sample to
complete the questions online. That is the strategy employed here.
Another potential source of error in Web surveys is unit nonresponse.
Assessing the rate and impact of nonresponse in Web surveys can be difficult;
indeed, when the respondents are self-selected volunteers (as when recruit-
ment is done through banner links), it is impossible to calculate a response
rate—there is no known number of sample members to serve as the base
(Wiebe, Eyerman, and Loft 2001). When response rates can be calculated, as
in Web surveys with e-mail invitations, Web surveys generally report fairly
low response rates (Batagelj et al. 2000; Couper 2000), though this depends
on the frame and population of interest. For some populations, Web surveys can
attain lower nonresponse rates than other data collection methods (Crawford
et al. 2002). Still, the overall response rates for Web surveys typically reflect
outcomes at several stages in the data collection process (contacting sample
members, generally by e-mail; getting them to access the survey online; and
persuading them to complete the questionnaire once they have started it), and
poor outcomes at any of the stages will lead to low response rates. There is
considerable variability across Web surveys in response outcomes across the
various stages. For example, in an analysis of response rates from over 100
Web surveys, Manfreda and Vehovar (2002) found that the percentage of
undeliverable e-mail invitations ranged from 1 to 20 percent, depending on
the population. The percentage of respondents who received the invitations
but failed to access the Web questionnaires was also highly variable—ranging
from 1 to 96 percent. Manfreda and Vehovar (2002) also point out various
design features that can boost response rates in Web surveys, such as prenoti-
fication and incentives (see also Cook, Heath, and Thompson 2000, and
Schonlau, Asch, and Du 2003).
Despite these difficulties, researchers have attempted to obtain samples to
represent the general U.S. population via Web surveys, relying on two main
approaches. The first involves recruiting large panels of respondents with
known demographics, selecting samples of these panel members for specific
surveys, and then weighting each sample to external population figures or
using more sophisticated statistical corrections such as propensity scoring (see,
for example, Taylor 2000). The second approach is to employ Web-enabled
panels: respondents are recruited via a telephone survey, and recruits are pro-
vided with the necessary equipment to access the Internet. Chang and Krosnick
(2003) compared samples selected using these two approaches (the Harris
Interactive and Knowledge Networks panels, respectively) against a national
RDD survey and the March CPS sample. They found that the demographics of
both the Knowledge Networks and RDD samples generally deviated from
CPS figures only modestly (with an average unweighted deviation across 22
demographic categories of 4.0 percent for the RDD survey and 4.3 percent for
the Knowledge Networks sample) with a larger difference (8.7 percent on
average) for the Harris Interactive sample. In addition, respondents in both
Web samples tended to have more education, higher incomes, and to be
slightly older than the general population.

Measurement Errors
Whatever their problems with various types of nonobservation errors, Web
surveys may have some advantages over telephone surveys in terms of obser-
vation errors, or errors arising from the measurement process. Prior investiga-
tions of mode differences in reporting suggest three broad hypotheses about
why different modes may produce different levels of measurement error (for a
review, see Tourangeau, Rips, and Rasinski 2000, chap. 10). First, different
methods of data collection differ in how much privacy they offer respondents,
and this affects how respondents answer questions about sensitive topics.
Respondents are more likely to report a range of embarrassing behaviors in
self-administered surveys than in interviewer-administered surveys, presum-
ably because self-administration reduces social desirability concerns. Web
surveys may share this advantage with earlier methods of self-administration.
Consistent with this expectation, Chang and Krosnick (2003) found that
respondents in their Web surveys gave fewer socially desirable answers than
respondents in an RDD telephone sample. For example, whites were more
likely to express opposition to government programs to help black Americans.
Second, different methods of data collection may encourage survey “satis-
ficing” to different extents or promote different forms of satisficing (on survey
satisficing, see Krosnick 1991). For example, Web surveys that use grids may
encourage “nondifferentiation,” or giving similar answers to each item in a set
of related questions (Tourangeau, Couper, and Conrad 2004, experiment 6).
By eliminating interviewers and the motivation they provide, Web surveys
may encourage satisficing. Or Web surveys may reduce satisficing since they
let respondents complete the instrument when they want to and at their own
pace. This may lead to higher levels of motivation and lower levels of distrac-
tion than in a comparable telephone survey. The study by Chang and Krosnick
(2003) also lent some indirect support for this hypothesis. They found that
Web surveys produced less satisficing behavior and greater concurrent and
predictive validity than a survey administered by telephone.
A third reason that different modes may produce different levels of mea-
surement error is that they utilize different sensory channels. Web surveys rely
mainly on visual presentation of the questions, whereas telephone surveys rely
on aural presentation. Web respondents can easily reread the questions, reduc-
ing the burden on their working memories, and this reduced cognitive burden
may improve their answers. This difference between modes may be particu-
larly important when the questions are difficult or require respondents to for-
mulate complex answers. As a result, our survey, which included a number of
open-ended knowledge items, may show especially large differences between
telephone and Web surveys.

The Present Study


Our objective in this study was to compare a Web survey with a telephone sur-
vey, examining the substantive results from the two surveys, the response rates,
and various indicators of response quality. Most prior Web-telephone compar-
isons have focused on attitude items (Bason 2000; Chang and Krosnick 2003;
Taylor 2000); ours examines both knowledge and attitude items (see also
Schonlau et al. 2003). We tested hypotheses about each type of item based on
the notions that the Web and telephone might differ in how much they encour-
aged respondents to satisfice and in the cognitive burdens they imposed. The
survey initially contacted respondents in an RDD sample by telephone and
determined whether they had access to the Internet. Among those who did,
some were randomly assigned to complete the main questionnaire by tele-
phone, and the rest were assigned to complete the survey on the Web. The
questionnaire included items to assess knowledge of basic scientific facts and
processes; in addition, a number of the questions measured attitudes on issues
related to science and technology.
We make several types of comparisons between the two survey modes. First,
we compare the response rates and demographic characteristics within each
sample. These comparisons help us examine the effects of nonresponse by
mode. We also compare the demographic characteristics of the samples to the
demographic characteristics of the U.S. household population. Second, we
compare substantive estimates from the survey across the samples. In making
these substantive comparisons, we drop the telephone respondents who reported
no access to the Web. Thus, we compare estimates from Web users who were
interviewed by telephone to estimates from Web users who completed the
questionnaire via the Web. Of particular importance are comparisons of
answers to the knowledge questions. Third, we examine various indicators of
response quality across the two samples (again, restricting our attention to Web
users who completed the survey under the two modes). We look at completion
times and several measures of survey satisficing (such as selecting similar
answers to every question in a sequence of related questions).
Our analyses examined three main questions. The first was whether know-
ledge scores differed systematically by mode and by item type. Our main
hypothesis here was that the knowledge items would be easier online, where
the questions are presented visually, than over the telephone. In addition, we
thought that this advantage for the Web version would be greatest with the
most burdensome questions. The second issue was whether completion
times would vary by mode. We expected Web respondents to complete the
questionnaire at a more leisurely rate than the telephone respondents since
they could set their own pace. A final set of analyses looked at various indi-
cators of survey satisficing. In general, we expected to see less satisficing
among the Web respondents, partly because the Web respondents would
take more time. Still, we thought that Web respondents might be more prone
to one form of satisficing—nondifferentiation in their answers to related
items, especially when presentation of the items in a grid underscored their
similarity. We also explored two additional issues more briefly—the digital
divide (that is, the differences between respondents with Internet access and
those without) and the impact of nonresponse on comparisons across the
groups.

Method
The 2003 Practicum Survey was a national telephone survey, conducted by
Schulman, Ronca & Bucuvalas, Inc. (SRBI), on behalf of the Joint Program
in Survey Methodology (JPSM) and the National Science Foundation (NSF).
The survey drew questions from the NSF’s Survey of Public Attitudes
Toward and Understanding of Science and Technology (SPA). The NSF has
conducted the SPA periodically since the 1970s to monitor public attitudes
and understanding of science concepts and the scientific process. The
purpose of the Practicum Survey was to gain a better understanding of the
effects of different data collection modes on responses to the survey and to
explore several new measures of scientific knowledge that were being con-
sidered for future NSF surveys. Responses were obtained through telephone
interviews and a self-administered Web questionnaire. SRBI carried out the
data collection from July through September 2003.

SAMPLE DESIGN

The study used a list-assisted RDD sample of 12,900 numbers, obtained from
Survey Sampling, Inc. (SSI) for the study. The numbers were a systematic
sample from blocks with at least three residential listings, selected with equal
probability across all eligible blocks. (A block consisted of a set of 100 possi-
ble telephone numbers that share their first eight digits. For instance, the num-
bers from 301-314-7900 to 301-314-7999 make up one block.) Once a block
was selected, a randomly generated two-digit number in the range of 00–99 was
appended to the block to form a 10-digit number. The sample was prescreened
to identify and drop unassigned and business numbers. The target population
for the sample was the adult (18 or older) noninstitutional, nonmilitary popu-
lation of the United States in the contiguous 48 states. The survey made no
provisions for non-English-speaking respondents or respondents using a tele-
phone device for the deaf; these portions of the population are not represented
in the survey.
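To make the list-assisted selection concrete, here is a minimal sketch of our own (not SSI's actual procedure): each sampled number is simply a retained 100-bank prefix with a random two-digit suffix appended. The function name and the printed example are illustrative.

```python
import random

def rdd_number(block_prefix: str) -> str:
    """Append a random two-digit suffix (00-99) to an eight-digit block
    prefix, forming a 10-digit telephone number within that 100-bank."""
    return f"{block_prefix}{random.randint(0, 99):02d}"

# The paper's example block runs from 301-314-7900 to 301-314-7999,
# so its eight-digit prefix is '30131479'.
print(rdd_number("30131479"))  # e.g., '3013147942'
```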

DATA COLLECTION

SRBI mailed advance letters with a $5 cash incentive to households for
which addresses were available. Five-dollar incentives were also mailed to
households for which addresses were not available following completion
of the main interview in the telephone condition. In an effort to achieve
roughly equal response rates across mode groups, SRBI offered additional
incentives to Internet respondents, consisting of a postinterview payment of
$20 for those who had received the advance cash incentive and $25 for
those who had not. The incentive was higher for the Internet cases because
we expected a lower response rate among those assigned to Web data
collection than among those assigned to complete the questions over the
telephone. We had no strong empirical basis for choosing these specific
amounts, and about midway through the field period (August 20, 2003), we
increased the incentives in an attempt to boost the response rates. An addi-
tional $5 was offered to all telephone survey participants (for a $10 total
incentive for the screening and main interviews), and an additional $20 was
offered to the Internet survey participants (for a $40 total main interview
incentive).1
SRBI interviewers attempted to complete a brief screening interview
with an adult member of each sample household; the screener respondent
was selected via the last birthday method and was asked whether he or she
had access to the Internet at home or at work. Interviewers made up to 20 calls to reach sample households. Screener respondents who reported no Internet access were subsampled; half were asked to complete the main interview by telephone, and the remainder were dropped from the sample. (After August 20, 2003, we retained all the cases without Internet access and attempted to complete telephone interviews with them.) Persons with Internet access were randomly assigned to complete the main interview by telephone or via the Web. Initially, 60 percent of those with Internet access were assigned to the Web condition, and 40 percent were assigned to the telephone condition. After the mid-course adjustment, 75 percent of the Internet access cases were assigned to the Web condition and the remainder to telephone.

1. We carried out extensive analyses that compared the respondents who completed the survey
early on and got the smaller incentives with those who completed it after August 20, 2003, and
got the larger incentives. Within each mode group, we observed no significant differences
between those who got the smaller and larger incentives by sex, race, Hispanic background, age,
or educational attainment. Similarly, when we combined the different mode groups, we found
only one difference in the demographic makeup of the samples who were offered smaller and
larger amounts—those who completed the survey early (having been offered the smaller incen-
tives) tended to have more education than those who completed it later. We also compared the
different incentive groups on science knowledge scores, time taken to complete the survey, and
several measures of data quality. In some 22 analyses, we found only two significant differences
on any of these variables, and the conclusions reported below do not change when the incentive
effects are taken into account. Still, the fact remains that our Web-telephone comparisons actually
compare respondents who got a relatively small incentive (either $5 or $10) and completed the
questions over the telephone with respondents who got a larger incentive ($20 or more) and
completed the questions via the Web. We cannot separate the effects of mode from those of the
level of incentive.
At the end of the screening interview, adults assigned to the telephone
condition were immediately administered the telephone instrument. Adults
assigned to the Web survey were asked for an e-mail address and sent an
e-mail solicitation with a link to the survey Web site and a unique respondent
ID. Individuals who refused to provide an e-mail address were asked for a
regular mailing address instead. Cases who failed to provide either type of
address were treated as refusals. Follow-up contacts for nonrespondents in the
Internet condition (those who agreed in the screening interview to complete
the Internet survey) were made first by e-mail and then by telephone. The con-
tent of the telephone and Internet surveys was identical.
In total, 2,352 respondents completed the screening questions, and
1,548 completed the main interview; 267 cases were subsampled out after
completing the screener. The final (unweighted) response rate for the
screener was 42.5 percent (AAPOR 2000, response rate 3). The main sur-
vey response rate was 74.2 percent (conditional on completion of the
screener) and 31.5 percent overall. There were 1,002 respondents who
completed the main questionnaire by telephone (including 530 with Inter-
net access and 472 without Internet access); another 546 did the main
questionnaire on the Web.
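As a check on how these rates fit together (our arithmetic, not a calculation reported by the authors), the overall rate is approximately the product of the screener rate and the conditional main-survey rate:

\[
\mathrm{RR}_{\mathrm{overall}} \;\approx\; \mathrm{RR}_{\mathrm{screener}} \times \mathrm{RR}_{\mathrm{main}\mid\mathrm{screener}} \;=\; 0.425 \times 0.742 \;\approx\; 0.315 .
\]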

QUESTIONNAIRES

The questionnaires included a range of items to assess support for different
areas for scientific research (for example, “Research to develop genetically
modified food that costs less to grow” and “Research with stem cells derived
from human embryos”), attitudes toward the impact of science on everyday
life (“We depend too much on science and not enough on faith” and “Science
makes our way of life change too fast”), and knowledge of basic scientific
facts (“Does the Earth go around the Sun, or does the Sun go around the
Earth?”) and the process of scientific investigation (“Do you have a clear
understanding of what a scientific study is?”). Most of the items were taken
from NSF’s 2001 Survey of Public Attitudes and were originally developed
by Miller and his colleagues (for example, Miller and Pardo 2000). For the
most part, the attitude items used 5-point response scales.2 The knowledge
questions were a mix of multiple choice, true/false, and open-ended items. We
derived three basic indices based on these questions—support for scientific
research (based on answers to six questions), proscience attitudes (based on
four questions about the role of science in everyday life), and scientific
knowledge (based on responses to 18 items).
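To make the scoring of these three indices concrete, the sketch below follows our reading of the description above, with hypothetical answers; the function names and the example responses are ours, not code or data from the study.

```python
def knowledge_score(correct_flags):
    """Scientific knowledge index: percent correct across the 18 items."""
    return 100.0 * sum(correct_flags) / len(correct_flags)

def attitude_scale(ratings):
    """Mean of 5-point attitude items; higher values are more proscience."""
    return sum(ratings) / len(ratings)

# Hypothetical respondent: 12 of 18 knowledge items answered correctly,
# six research-support items and four science-impact items rated 1-5.
knowledge = knowledge_score([True] * 12 + [False] * 6)   # 66.7 percent correct
support = attitude_scale([4, 5, 3, 4, 4, 5])             # 4.17
proscience = attitude_scale([3, 4, 2, 4])                # 3.25
```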

Results
Our analysis begins with two preliminary issues. First, we briefly examine
the differences between those with Internet access and those without it. These
comparisons are based on the telephone interviews (so that differences
between the two types of respondents are not confounded by differences in
the mode of data collection). Next, we examine differences in response rates
to the main interview among those cases who reported Internet access.
Despite our offering larger incentives to those who were assigned to com-
plete the survey online, we expected a lower response rate within that group. We
also compare the demographic makeup of the two mode groups. These
nonresponse analyses are necessary so we can determine whether any report-
ing differences by mode can be traced to differences in the makeup of the mode
groups.
Our main analyses look at how substantive estimates from the survey—
average knowledge scores and attitudes to science and technology—differ for
the two samples of Internet users. We also compare completion times for the
Web and telephone respondents and various indicators of response quality.
Again, we compare the Internet users who completed the survey under the two
modes, to avoid confounding differences in the populations with differences
by mode. As we noted in footnote 1, we find only scattered differences in
demographic characteristics, knowledge scores, completion times, and response
quality between those who completed the survey early in the field period (and
got a relatively small incentive) and those who completed it later (and
received a larger incentive), and we ignore this variable in presenting the
results.

2. In fact, the attitude questions were the subject of two randomized experiments. For approxi-
mately half of the respondents, the items offered only four response options, omitting the middle
response category. The omission of the middle category did not have much effect on the means for
these items, and we ignore this experiment in presenting our results here. In addition, among
respondents interviewed over the telephone, about half received the response options in two sepa-
rate questions. The first assessed the overall direction of the respondent’s view (“Do you agree or
disagree with the statement?”) and the second, the extremity (“Do you strongly agree or some-
what agree?”) (Krosnick and Berent 1993). The remainder received questions in the conventional
way, with all four (or five) response options presented with the question itself. Again, we found
few differences between the two item formats and combine the results here.
THE DIGITAL DIVIDE

Among the telephone respondents, we can compare those with Internet
access to those without it; the differences between the two groups reflect the
“digital divide,” at least within the select group who completed the screen-
ing interview. Table 1 shows the main results from these comparisons. The
top four panels examine demographic characteristics of Internet users and
nonusers; the bottom four panels compare the same two groups on variables
related to science. The figures on the left side of the table are weighted.
(The weights compensate for the initial selection probability for the tele-
phone number, the subsampling of persons and households, and non-
response to the screener and main interview; the weights are also adjusted
to agree with 2003 CPS figures by sex, age, education, and Internet use. We
computed significance tests based on the weighted data using SUDAAN,
which takes into account the impact of the weights on the variability of the
estimates.)
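The weighting adjustment just described can be illustrated with a minimal sketch of a single post-stratification step, in which base weights are scaled so that the weighted sample matches an external (for example, CPS) distribution on one margin. This is our own simplified illustration; the survey's actual weights combined selection probabilities, subsampling and nonresponse adjustments, and calibration to several CPS margins, with variances computed in SUDAAN. The function and the example data are hypothetical.

```python
from collections import defaultdict

def poststratify(records, margin, control_shares):
    """Scale each record's 'weight' so that the weighted distribution of
    `margin` matches the external control shares. Illustrative only."""
    totals = defaultdict(float)
    for r in records:
        totals[r[margin]] += r["weight"]
    grand = sum(totals.values())
    factors = {c: control_shares[c] * grand / totals[c] for c in totals}
    for r in records:
        r["weight"] *= factors[r[margin]]
    return records

# Hypothetical example: force the weighted share of college graduates to 40 percent.
sample = [{"educ": "college", "weight": 1.0}] + \
         [{"educ": "no_college", "weight": 1.0} for _ in range(3)]
poststratify(sample, "educ", {"college": 0.40, "no_college": 0.60})
```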
Our results echo the usual findings regarding the digital divide (NTIA
2000, 2002). Internet users are more educated and younger than nonusers;
they are also somewhat more likely to be white (though this difference is only
marginally significant in the weighted analysis; p < .11). There are large dif-
ferences between users and nonusers related to science knowledge and atti-
tudes: Internet users are more likely to report having taken one or more
college science courses (48.9 percent versus 17.0 percent); they did much bet-
ter on average on the science knowledge items (65.7 percent correct versus
47.4 percent correct for the nonusers); and they were more supportive of sci-
entific research and more positive about the impact of science on everyday life
than nonusers (means of 3.74 and 3.15 on our two attitude scales for the users
versus 3.52 and 2.90 for the nonusers; both p < .001).

THE IMPACT OF NONRESPONSE

A total of 1,548 respondents completed the main questionnaire. As table 2
shows, there are large differences among our three sample groups in
response rates to the main questionnaire. Of the two Internet user samples,
a much lower percentage of those who were assigned to Web data collec-
tion completed the main questionnaire (51.6 percent) than those who were
assigned to the telephone interview (97.5 percent). The sample of persons
who reported no access to the Internet completed the telephone interview at
almost the same rate (98.7 percent) as those with Internet access. The dif-
ference in response rates (conditional on completion of the screener)
between the two telephone samples was not significant; by contrast, the dif-
ference between the two groups assigned to telephone interviews and the
group assigned to Web data collection is highly significant (χ2 = 591.6, df = 1,
p < .001).
Table 1. Selected Characteristics of Web Users and Web Non-Users, Telephone Respondents Only

                                          Weighted                                        Unweighted
                              Web User    Not Web User   Significance       Web User    Not Web User   Significance
                              (N = 530)   (N = 472)      Test               (N = 530)   (N = 472)      Test

Male                          48.0%       47.9%          χ2 = 0.0           40.6%       34.0%          χ2 = 4.63*
Female                        52.0        52.1           (df = 1)           59.4        66.0           (df = 1)
White                         87.6        84.2           χ2 = 4.54          89.2        85.3           χ2 = 7.21*
Black                          7.3        12.4           (df = 2)            6.6        11.4           (df = 2)
Other                          5.1         3.5                               4.1         3.1
Hispanic                       7.8        12.0           χ2 = 1.72           4.9         6.3           χ2 = 0.83
Not Hispanic                  92.2        88.0           (df = 1)           95.1        93.7           (df = 1)
High school or less           30.8        70.5           χ2 = 127.1***      18.8        57.6           χ2 = 192.4***
Some college                  32.4        20.2           (df = 2)           33.0        28.3           (df = 2)
College graduate              36.8         9.3                              48.2        14.0
18–34                         35.4        24.7           χ2 = 82.8***       27.1        14.8           χ2 = 139.4***
35–44                         27.8        15.4           (df = 3)           25.0        10.8           (df = 3)
45–54                         18.4        11.6                              22.4        11.8
55 and older                  18.4        48.4                              25.6        62.6
0 college science courses     51.1        83.0           χ2 = 88.3***       42.6        78.3           χ2 = 129.3***
1 college science course       8.8         3.1           (df = 4)            9.8         4.8           (df = 4)
2 college science courses     10.2         4.4                              12.3         5.3
3–4 college science courses   11.2         3.0                              13.7         4.8
5+ college science courses    18.6         6.5                              21.7         6.8
Mean knowledge scores         65.7        47.4           t = 10.6***        67.7        45.9           t = 17.2***
Mean attitudes toward research 3.74        3.52           t = 3.67***        3.78        3.49           t = 6.38***
Mean attitudes toward science  3.15        2.90           t = 4.13***        3.24        2.90           t = 6.78***

NOTE.—Mean knowledge scores are average percent correct out of 18 items. Attitudes toward research and science are 5-point scales; higher numbers represent more proscience views.
* p < .05.
*** p < .001.
Table 2. Response Rates

                                                          N        Rates

Total numbers dialed                                 12,900        —
Estimated number of working residential numbers       5,738        44.5%
  No contact                                           2,343
  Ineligible                                               1
  Refusals                                             1,036
Completed screener                                     2,352        42.5%
  Assigned to Web                                      1,058
  Assigned to telephone                                1,027
  Subsampled out                                         267
  Broke off prior to assignment                            6
Main questionnaire                                     1,548        74.2%
  Web version                                            546        51.6%
  Telephone version                                    1,002        98.1%
    Web access                                           530        97.5%
    No Web access                                        472        98.7%

NOTE.—"No contact" includes "no answer," "foreign language," "hearing or other health problems," "away for duration," and "call blocking." The rates for the main questionnaire are conditional on completion of the screener.

Did this differential rate of nonresponse affect the composition of the two
samples of Internet users? Table 3 shows the demographic makeup of the two
samples and compares both to data from the CPS. (The figures from the
Practicum Survey in table 3 are unweighted; the weights attempt to compen-
sate for the effects of nonresponse and bring the figures for the two samples of
Internet users even closer.) Despite the big differences in the response rates,
we observe no significant differences in the makeup of the two samples of
Internet users by sex, race, Hispanic origin, education, or age. We also com-
pared the two samples on the number of college-level science courses they
reported and observed no difference between the groups.
The first column of CPS figures is for adult (18 and older) Internet users
only and is based on data from the September 2001 CPS, which included
items to identify Internet users; the final column in the table gives 2003 CPS
figures for the entire adult U.S. population. Both Practicum samples give a
reasonable picture of the Web population. The average absolute deviation
between the Web sample percentages and the CPS percentages for Web
respondents is 5.0 percent; the comparable figure for the telephone sample is
4.2 percent. (Because the CPS figures for the Internet users are from Septem-
ber 2001, they may be a bit out of date.) By contrast, neither sample represents
the general U.S. population very well; both samples depart by about 9 percent
on average from the 2003 CPS percentages for the U.S. adult population.
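The average absolute deviation used here is, as we read it, the mean over the K demographic categories in table 3 of the absolute gap between a sample percentage and the corresponding CPS benchmark:

\[
\bar{D} \;=\; \frac{1}{K} \sum_{k=1}^{K} \left| p_k^{\mathrm{sample}} - p_k^{\mathrm{CPS}} \right| .
\]

For example, the Male row contributes |40.6 - 48.0| = 7.4 percentage points to the Web sample's average of 5.0 percent.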
Table 3. Selected Characteristics of Two Samples of Web Users and Internet Population

                          Practicum Samples—Web Users                             CPS Data
                      Web          Telephone     Significance          Adult Web     U.S. Adult
                      Completes    Completes     Test for              Users         Population
                      (N = 546)    (N = 530)     Difference            (2001)        (2003)

Male                  40.6%        42.3%         χ2 = 0.34             48.0%         48.9%
Female                59.4         57.7          (df = 1)              52.0          51.1
White                 89.2         90.1          χ2 = 0.20             86.1          80.7
Black                  6.6          6.1          (df = 2)               8.6          12.5
Other                  4.1          3.8                                 5.3           6.8
Hispanic               4.9          3.7          χ2 = 1.06              6.2          13.8
Not Hispanic          95.1         96.3          (df = 1)              93.8          86.2
High school or less   18.8         21.1          χ2 = 1.03             30.5          48.2
Some college          33.0         33.0          (df = 2)              33.0          27.1
College graduate      48.2         45.9                                36.5          24.7
18–34                 27.1         30.1          χ2 = 3.19             37.8          31.4
35–44                 25.0         25.1          (df = 3)              25.4          20.7
45–54                 22.4         23.6                                20.9          18.9
55 and older          25.6         21.2                                15.8          29.0

MODE DIFFERENCES IN SUBSTANTIVE ESTIMATES: SCIENCE ATTITUDES AND KNOWLEDGE

We compared the Internet users interviewed by telephone with those who
completed the survey via the Web on the substantive variables from the survey.
Controlling for the main effects of the demographic variables (sex, race, His-
panic origin, education, age, and the number of college science courses, as
coded in tables 1 and 3), we compared the two mode groups on our indices of
attitudes toward science and support for scientific research, on science know-
ledge scores, and on views about global warming and medical uses of
marijuana. Whether we look at weighted or unweighted results, none of the
attitudinal variables showed any significant differences across modes.
By contrast, the Web respondents had considerably higher knowledge
scores than the telephone respondents, getting an (unweighted) average of
70.1 percent correct versus 63.9 percent for those who completed the ques-
tions on the telephone. The weighted and unweighted analyses revealed a sim-
ilar pattern of differences for the knowledge scores, and we focus on the
unweighted results here. In the unweighted analysis, the overall difference in
knowledge scores by mode is highly significant—F(1,1000) = 45.9, p < .001.
(Additional analyses revealed no interactions between mode and our demo-
graphic control variables.)
As figure 1 shows, the mode difference seemed to vary somewhat by the
format of the knowledge questions. The top panel of the figure shows the dif-
ferences across the three item formats. The differences (adjusted for the demo-
graphic control variables) are largest for the four open-ended knowledge
questions (on average, the Web respondents got 50.4 percent of these items
correct versus 42.0 percent for the telephone respondents) and smallest for the
eight true/false items (74.5 versus 70.0 mean percent correct). According to a
mixed model analysis of variance that included mode of data collection, item
type, and the demographic variables as factors, there is a significant mode by
item type interaction F(2,2000) = 3.58, p < .05. A similar analysis reveals
small but significant differences by item content (results not displayed). As
we had hypothesized, the difference between the Web and telephone results is
smallest for the least demanding items, those in the true/false format, and larg-
est for the most demanding items, which called for open-ended answers; the
results for the multiple choice items fall between those of the other formats.

COMPLETION TIMES

Among the Internet users, we expected those who completed the questionnaire
via the Web to take longer than those who completed the questionnaire over the
telephone; Web respondents could control the pace at which they did the survey,
and we expected them to complete the questions more slowly than those who were responding to a telephone interviewer.

Figure 1. Mode differences in science knowledge scores, by item format. [Mean percent correct by item format (true/false, multiple choice, open-ended) for telephone and Web respondents.]

Consistent with this hypothesis, we
found that completion times for the main questionnaire varied substantially by
survey condition. For the Web users who completed the survey by telephone,
the mean length of the interviews was 20.4 minutes (the median was 19.6 min-
utes). Those who completed the survey online took about three minutes longer
on average (mean of 23.8 minutes, median of 21.6 minutes). Controlling for the
demographic differences between the two samples, we found the mode differ-
ence in completion times to be highly significant—F(1,875) = 55.9, p < .001.3
We expected that the differences in completion times between the two
modes might vary by the respondent’s age and fit a more complex model that
included both the main effects of the demographic control variables and their
interactions with the mode of data collection. As we expected, mode inter-
acted with age, with the respondents who were 55 years old or older showing
the largest mode difference (see figure 2)—F(3,863) = 4.50, p < .001.

Figure 2. Mean completion times (in seconds) for Web users, by mode and age group. [Mean completion time in seconds by age group (18-34, 35-44, 45-54, 55+) for telephone and Web respondents.]

3. All of the results presented in this section are unweighted. In a separate analysis we compared
the completion times for those who reported no Internet access to the telephone respondents with
Internet access, controlling for the main effects of the demographic variables (sex, age, race,
Hispanic origin, education, and number of college science courses); there was no difference in the
overall completion times for the two groups of telephone respondents. The overall mean for the
Web nonusers was 20.8 minutes (versus 20.4 for the Web users who completed the main
questionnaire by telephone).
Put another way, the relation between completion times and age seemed far
steeper among the Web users who completed the survey online than among
those who did the interview by telephone. Our analysis uncovered one other
significant interaction effect. On the telephone, male Web users completed the
survey about 27 seconds faster than female Web users, but on the Web, the
females were faster by about 100 seconds on average. We are not sure what to
make of this interaction of mode and sex.
The overall time difference by mode amounted to more than three minutes
on average, but virtually all of this difference is accounted for by two sections
of the questionnaire that included several open-ended knowledge questions.
(In fact, the Web-telephone difference for these two sections is a little more
than four minutes on average.) The Web respondents had to type in their
answers to the open-ended questions, and apparently that was a lot slower
than simply saying them aloud to a telephone interviewer.4 For one section of
the questionnaire that included only closed attitude items, the Web respon-
dents were actually significantly faster than their telephone counterparts
(4.4 minutes on average for the Web respondents versus 5.6 minutes for the
telephone respondents)—F(1,868) = 14.7, p < .001.

RESPONSE QUALITY

We examined three measures of response quality—"don't know" or "no opin-
ion” responses, acquiescence, and nondifferentiation—that Krosnick (1991) has
identified as signs of satisficing on the part of survey respondents. Again, we
compared Web users who answered online with Web users who completed the
main questionnaire by telephone, using unweighted analyses. We estimated the
propensity to give nonsubstantive answers by counting the number of “don’t
know” or “no opinion” (hereinafter DK) responses that each respondent gave
across 32 or 33 items (depending on the version of the questionnaire). Similarly,
to measure acquiescence, we calculated the proportion of “agree” (either
“strongly” or “somewhat”), “support” (again, either “strongly” or “somewhat”),
and “yes” responses each respondent gave on 27 attitude items, excluding items
on which the respondent gave a DK response. Finally, we looked at four batter-
ies of questions, each of which included questions about multiple items using
the same response format. (The Web questionnaire presented these four batter-
ies as grids.) For each one, we calculated the mean of the root of the absolute
differences in the answers between all pairs of items, an index used by Chang
and Krosnick (2003) to measure nondifferentiation. Lower scores indicated less
differentiation in the responses across the items in the battery.
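To make this index concrete, here is a minimal sketch of our own (not code from the study) implementing the Chang and Krosnick style measure described above for a single battery, along with the "straight-lining" check used later in this section. The function names and the example answers are ours.

```python
from itertools import combinations

def differentiation_index(battery):
    """Mean of the square root of the absolute difference between the
    answers to every pair of items in one battery. Lower values indicate
    less differentiated (more similar) answers."""
    pairs = list(combinations(battery, 2))
    return sum(abs(a - b) ** 0.5 for a, b in pairs) / len(pairs)

def straight_lined(battery):
    """True if the respondent gave the identical answer to every item."""
    return len(set(battery)) == 1

# Hypothetical respondents answering a five-item battery on a 5-point scale:
print(differentiation_index([4, 4, 4, 4, 4]))   # 0.0 -- straight-lining
print(differentiation_index([1, 3, 5, 2, 4]))   # about 1.37 -- well differentiated
print(straight_lined([4, 4, 4, 4, 4]))          # True
```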
The respondents to the Web survey were less likely to give DK responses
than the Web users who answered over the telephone. In fact, across the 90 items we looked at, only 14.6 percent of the Web respondents gave DK responses to any of them; the corresponding figure for the telephone respondents was 52.3 percent.

4. The open-ended answers were almost equally long in the two modes, averaging 16.4 words for
the Web users who completed the survey by telephone and 16.6 for the Web respondents. This
difference was not statistically significant.
The difference was highly significant in a logit analysis that included
the demographic control variables (χ2 = 139.6, df = 1, p < .001). The Web ver-
sion of the questionnaire did not display DK options, and when respondents left
an item blank, it displayed a confirmatory screen that gave the respondent a sec-
ond chance to enter a response. Telephone interviewers accepted DK responses
when respondents volunteered them without any probing.
There were no differences in the proportion of acquiescent responses between
those who responded to the items on the Web and those who answered by tele-
phone (an average of 62.9 percent of the answers were in the acquiescent direc-
tion for the Web respondents versus 62.3 for the telephone respondents). If we
restrict the analysis to the items that used an agree/disagree format, there was still
no apparent difference between the two samples of Web users in acquiescence.
On all four batteries of items, the Web users who completed the questions
online gave less differentiated responses to the items than the Web users who
responded to the same questions over the telephone. For all four batteries, these
differences were significant after controlling for the demographic differences
between the two samples. The Web respondents were also significantly more
likely to give identical answers to every item in at least one of the four batter-
ies than the telephone respondents (21.8 percent of the Web respondents gave
“straight line” responses to at least one of the four batteries versus 14.2 per-
cent of the telephone respondents). The difference was significant in a logit
analysis that included the demographic controls (χ2 = 6.56, df = 1, p < .05).

Discussion
Our analyses were organized around five hypotheses. First, we compared Web
users with nonusers, expecting to find the usual differences that constitute the
“digital divide”; that is, we expected that the Web users would be younger,
less racially diverse, and better educated than those without access to the Web.
The results are in line with these expectations. What our results add to the pre-
vious picture is that those with access to the Web expressed more support for
scientific research, reported more positive views about science, and scored
better on the science knowledge items (see table 1).
Second, in comparing the two samples of Web users, we expected higher
response rates to the telephone interview than to the Web survey. In line with
most previous studies, we got a much higher overall response rate in the tele-
phone interview than the Web survey—the response rates for the telephone
samples were nearly double that of the Web sample. Because all the respon-
dents were contacted initially by telephone and were offered incentives for
participating, we got a relatively high response rate even for the Web survey
(the overall response was over 20 percent, taking into account nonresponse to
the screener as well as the main questionnaire). In terms of sample representa-
tiveness, we found that the Web users who responded by telephone match
CPS figures for the population of adult Web users a little more closely than
Web users who completed the survey online, a difference that presumably
reflects greater nonresponse error in the Web sample (see table 3). Still, the
two samples of Web users do not differ markedly in demographic composi-
tion, and both samples of Web users do a poor job at representing the overall
population of adults, departing from CPS figures for the adult general popula-
tion by an average of 9 percent.
Third, in terms of the substantive variables of interest to the National
Science Foundation, the Web users who completed the survey online seemed
to know more about science than those who completed the survey on the tele-
phone, getting an average of about 6 percentage points more of the knowledge items correct.
The two samples did not, however, seem to differ in their attitudes toward sci-
ence and technology. We hypothesized that the mode difference in knowledge
scores might be due to processing deficits that resulted from the relatively fast
pace of telephone administration compared to the Web survey. We find some
support for the conjecture that Web administration reduces cognitive burden
in the fact that the Web advantage in knowledge scores is greater for the high
burden formats—the multiple choice and open-ended items—than for the true/
false knowledge questions.5 The mode difference in knowledge scores may
reflect the lower response rate among the Web respondents. We tried to control
for any background differences between the two mode groups by controlling
for education and the number of science courses respondents reported having
taken, but we cannot rule out the possibility of other differences between the
groups due to nonresponse.
Our fourth hypothesis concerned the time to complete the questions. As
predicted, the Web users took longer to complete the Web version of
the questionnaire than the telephone interview. This difference seemed
mainly to reflect the burden of completing the open-ended knowledge
questions online. Apparently, it took longer for respondents to type in
their answers to these questions than to say them to the interviewer. Mode
also interacted with age in affecting completion times; there was only a
small difference between modes in completion times for the younger Web
users. This underscores the role that cognitive skills play in moderating mode effects and suggests that the combination of a slower pace and visual administration of the questions offered by the Web survey may have the most impact on respondents with less cognitive capacity. Younger respondents may also have an advantage in general computer literacy and typing skills that make the Web a more suitable mode of data collection for them.

5. It is, of course, possible that Web respondents were able to look up some of the answers online
as they completed the survey. There are two reasons we do not think that happened. First, one of
the students in the practicum classified the questions according to how easy it was to look up the
answers online; this classification of the items reflected the experiences of several of his class-
mates, who tried to look up the answers. He found that the difference between Web and telephone
respondents was similar for the two groups of items; the Web respondents did just as much better
than the telephone respondents with the items for which it was difficult to look up the answers as
with those for which it was easy to find the answers online. Second, the Web-telephone difference
was especially marked for the open-ended items, which did not test simple facts that could have
been easily verified online.
Our final hypothesis concerned several indirect indicators of data quality,
including item nonresponse, acquiescence, and nondifferentiation. The Web
users who completed the questions online were much less likely to leave items
blank than their counterparts who completed the survey over the telephone;
unlike the telephone interviewers, who accepted DK responses when respon-
dents volunteered them, the online survey prodded respondents who left the
attitude items blank. On the other hand, Web respondents also gave less dif-
ferentiated responses to batteries of questions that offered the same set of
response options. The two samples did not differ in their tendency to agree
with attitude statements.6 Together these findings offer only limited support to
the notion that Web data collection increases data quality relative to the tele-
phone (Chang and Krosnick 2003).
Any mode comparison study necessarily examines a blend of factors that
include intrinsic features of the modes being compared, more incidental
features that reflect the design choices involved in implementing each mode,
and features that are essentially unrelated to the modes of data collection
themselves but that nonetheless covary with them. In our study some aspects
of the contrast between the telephone and Web versions of the questionnaire
reflect inherent properties of the two modes. For instance, the telephone
respondents heard the questions and responded orally, whereas the Web
respondents read the questions and responded via the keyboard. These seem to
us to be fundamental features of the two modes. By contrast, the difference in
the treatment of “no opinion” responses reflected an incidental design deci-
sion on our part rather than an intrinsic feature of either mode. The Web ver-
sion of the questionnaire, in effect, prompted respondents for an answer
whenever they left a question blank, whereas the telephone interviewers
accepted “no opinion” responses without probing them. It was probably this
design decision rather than any inherent difference between the modes that
accounted for the higher rate of “no opinion” responses in the telephone con-
dition. Finally, we tried to equalize the response rates by mode group by offer-
ing a larger incentive to the cases assigned to complete the questions online.
There was still a substantial difference in response rates (97.5 percent of the Internet users assigned to the telephone version completed the questionnaire versus 51.6 percent of those assigned to the online version). Although we did not observe any difference in the demographic characteristics or educational backgrounds of the two samples of Internet users, it is quite possible there are unobserved differences between them. In addition, our Web-telephone comparison confounds the mode of data collection with the size of the incentive respondents got.

6. In an additional set of analyses we do not describe in detail, we found no differences in the
concurrent validity of the data from the two modes. For example, the number of college science
courses did not predict science knowledge scores any better in the Web survey than in the tele-
phone survey. Attitudes toward scientific research did not correlate more strongly with proscience
attitudes in one mode or the other.
Comparison of Web and Telephone Surveys 391

Internet users assigned to the telephone version completed the questionnaire


versus 51.6 percent of those assigned to the online version). Although we did
not observe any difference in the demographic characteristics or educational
backgrounds of the two samples of Internet users, it is quite possible there are
unobserved differences between them. In addition, our Web-telephone com-
parison confounds the mode of data collection with the size of the incentive
respondents got.
Our results suggest that differences between online and telephone surveys
depend in part on the type of questions being compared. We observed the
clearest differences between modes with cognitively demanding knowledge
questions (where Web respondents were more likely to give correct answers)
and with batteries of attitude questions presented online (where the Web
respondents gave less differentiated answers). Similarly, the relatively slow
completion times for the Web version of the survey seemed mainly to reflect
the difficulty of entering answers to the open-ended questions online. Still,
regardless of item format, the Web respondents were more likely to answer
the knowledge questions correctly than those who answered by telephone.
We suspect that the knowledge questions were generally easier when they
were presented visually and when respondents could answer them at their
own pace.
Our findings about the interaction between mode and question type echo
those of Schonlau and his colleagues (Schonlau et al. 2003). They found sig-
nificant differences for 29 of the 37 questions they administered in parallel
Web and RDD surveys about health. In their study differences by mode were
more likely to emerge with attitude questions than with factual questions
about health (see also Taylor 2000, who argues that Web respondents avoid
selecting the most extreme response categories in answering attitude ques-
tions). Web surveys make somewhat different cognitive demands from tele-
phone surveys, and this seems to affect how certain types of people answer
certain types of questions.

References
American Association for Public Opinion Research (AAPOR). 2000. Standard Definitions: Final
Disposition of Case Codes and Outcome Rates for Surveys. Lenexa, KS: AAPOR.
Bason, James J. 2000. “Comparison of Telephone, Mail, Web, and IVR Surveys.” Paper presented
at the annual meeting of the American Association for Public Opinion Research, Portland, OR.
Batagelj, Zenel, Katja Lozar Manfreda, Vasja Vehovar, and Metka Zaletel. 2000. “Cost and
Errors of Web Surveys in Establishment Surveys.” Paper presented at the International Confer-
ence on Establishment Surveys II, Buffalo, NY.
Best, Samuel J., Brian Krueger, Clark Hubbard, and Andrew Smith. 2001. “An Assessment of the
Generalizability of Internet Surveys.” Social Science Computer Review 19: 131–45.
Chang, LinChiat, and Jon Krosnick. 2003. “National Surveys via RDD Telephone Interviewing
vs. the Internet: Comparing Sample Representativeness and Response Quality.” Manuscript
under review.
Cook, Colleen, Fred Heath, and Russel L. Thompson. 2000. “A Meta-Analysis of Response Rates
in Web- or Internet-Based Surveys.” Educational and Psychological Measurement 60: 821–36.

Couper, Mick P. 2000. “Web Surveys: A Review of Issues and Approaches.” Public Opinion
Quarterly 64: 464–94.
———. 2002. “New Technologies and Survey Data Collection: Challenges and Opportuni-
ties.” Paper presented at the International Conference on Improving Surveys, Copenhagen,
Denmark.
Crawford, Scott, Sean McCabe, Mick P. Couper, and Carol Boyd. 2002. “From Mail to Web:
Improving Response Rates and Data Collection Efficiencies.” Paper presented at the Inter-
national Conference on Improving Surveys, Copenhagen, Denmark.
Dillman, Don A. 2002. “Navigating the Rapids of Change: Some Observations on Survey Meth-
odology in the Early 21st Century.” Draft of presidential address presented at the annual meeting
of the American Association for Public Opinion Research, St. Petersburg, FL.
Groves, Robert M. 1989. Survey Errors and Survey Costs. New York: John Wiley and Sons.
Gunn, Holly. 2002. “Web-Based Surveys: Changing the Survey Process.” First Monday 7(2).
http://www.firstmonday.dk/issues/issue7_12/gunn/ (accessed April 10, 2004).
Krosnick, Jon A. 1991. “Response Strategies for Coping with the Cognitive Demands of Attitude
Measures in Surveys.” Applied Cognitive Psychology 5: 213–36.
Krosnick, Jon A., and Matthew Berent. 1993. “Comparisons of Party Identification and Policy
Preferences: The Impact of Survey Question Format.” American Journal of Political Science
37: 941–64.
Manfreda, Katja, and Vasja Vehovar. 2002. “Survey Design Features Influencing Response Rates
in Web Surveys.” Paper presented at International Conference on Improving Surveys, Copen-
hagen, Denmark.
Miller, Jon D., and R. Pardo. 2000. “Civic Scientific Literacy and Attitudes to Science and
Technology: A Comparative Analysis of the European Union, the United States, Japan, and
Canada.” In Between Understanding and Trust: The Public, Science, and Technology,
ed. Meinolf Dierkes and Claudia Von Grote, pp. 81–129. Amsterdam: Harwood Academic
Publishers.
National Telecommunications and Information Administration (NTIA). 2000. Falling Through
the Net: Toward Digital Inclusion. Washington, DC: NTIA. Available online at http://
www.ntia.doc.gov/ntiahome/fttn00/contents00.html (accessed April 15, 2004).
———. 2002. A Nation Online: How Americans Are Expanding Their Use of the Internet. Wash-
ington, DC: NTIA. Available online at http://www.ntia.doc.gov/ntiahome/dn/ (accessed April 15,
2004).
Schonlau, Matthias, Beth J. Asch, and Can Du. 2003. “Web Survey as Part of a Mixed-Mode
Strategy for Populations That Cannot Be Contacted by E-Mail.” Social Science Computer
Review 21: 218–22.
Schonlau, Matthias, Kinga Zapert, Lisa P. Simon, Katharine Sanstad, Sue Marcus, John Adams,
Mark Spranca, Hongjun Kan, Rachel Turner, and Sandra Berry. 2003. “A Comparison between
Responses from a Propensity-Weighted Web Survey and an Identical RDD Survey.” Social
Science Computer Review 21: 1–11.
Taylor, Humphrey. 2000. “Does Internet Research Work?” International Journal of Market
Research 42: 51–63.
Thornberry, Owen T., and James T. Massey. 1988. “Trends in United States Telephone Coverage
across Time and Subgroups.” In Telephone Survey Methodology, ed. Robert M. Groves, Paul P.
Biemer, Lars E. Lyberg, James T. Massey, William L. Nicholls II, and Joseph Waksberg, pp. 25–49.
New York: John Wiley and Sons.
Tourangeau, Roger. 2004. “Survey Research and Societal Change.” Annual Review of Psychology
55: 775–801.
Tourangeau, Roger, Mick P. Couper, and Frederick G. Conrad. 2004. “Spacing, Position, and
Order: Interpretive Heuristics for Visual Features of Survey Questions.” Public Opinion Quar-
terly 68: 368–93.
Tourangeau, Roger, Lance J. Rips, and Kenneth Rasinski. 2000. The Psychology of Survey
Response. Cambridge: Cambridge University Press.
Waksberg, Joseph. 1978. “Sampling Methods for Random Digit Dialing.” Journal of the American
Statistical Association 73: 40–46.
Wiebe, Elizabeth F., Joe Eyerman, and John D. Loft. 2001. “Evaluating Nonresponse in a Web-
Enabled Survey on Health and Aging.” Paper presented at the annual meeting of the American
Association for Public Opinion Research, Montreal.
