
J Clin Epidemiol Vol. 50, No. 10, pp. 1129-1136, 1997
Copyright © 1997 Elsevier Science Inc.
ISSN 0895-4356/97/$17.00
PII S0895-4356(97)00126-1

Response Rates to Mail Surveys Published in Medical Journals

David A. Asch,1,2,* M. Kathryn Jedrziewski,2 and Nicholas A. Christakis3,4,**

1VETERANS AFFAIRS MEDICAL CENTER, PHILADELPHIA, PENNSYLVANIA 19104; 2DIVISION OF GENERAL INTERNAL MEDICINE, UNIVERSITY OF PENNSYLVANIA SCHOOL OF MEDICINE, PHILADELPHIA, PENNSYLVANIA 19104-2676; 3SECTION OF GENERAL INTERNAL MEDICINE, UNIVERSITY OF CHICAGO, CHICAGO, ILLINOIS 60637; AND 4DEPARTMENT OF SOCIOLOGY, UNIVERSITY OF CHICAGO, CHICAGO, ILLINOIS 60637

ABSTRACT. Objective. The purpose of this study was to characterize response rates for mail surveys published in medical journals; to determine how the response rate among subjects who are typical targets of mail surveys varies; and to evaluate the contribution of several techniques used by investigators to enhance response rates. Methods. One hundred seventy-eight manuscripts published in 1991, representing 321 distinct mail surveys, were abstracted to determine response rates and survey techniques. In a follow-up mail survey, 113 authors of these manuscripts provided supplementary information. Results. The mean response rate among mail surveys published in medical journals is approximately 60%. However, response rates vary according to the subject studied and the techniques used. Published surveys of physicians have a mean response rate of only 54%, and those of non-physicians have a mean response rate of 68%. In addition, multivariable models suggest that written reminders provided with a copy of the instrument and telephone reminders are each associated with response rates about 13 percentage points higher than surveys that do not use these techniques. Other techniques, such as anonymity and financial incentives, are not associated with higher response rates. Conclusions. Although several mail survey techniques are associated with higher response rates, response rates to published mail surveys tend to be moderate. However, a survey's response rate is at best an indirect indication of the extent of non-respondent bias. Investigators, journal editors, and readers should devote more attention to assessments of bias, and less to specific response rate thresholds. J CLIN EPIDEMIOL 50;10:1129-1136, 1997. © 1997 Elsevier Science Inc.

KEY WORDS. Cost-effectiveness, data collection, epidemiology, financial incentives, health services research,
mailed survey, research design, response bias, survey methods

INTRODUCTION

Mailed surveys are a useful tool for examining attitudes and behavior in health-care settings. However, mail surveys need high response rates to be successful. First, a high response rate typically lowers the cost per response and the cost necessary to achieve a sufficient sample. Second, a high response rate reduces the extent or possibility of non-respondent bias, or the possibility that responders are not a representative sample. Third, journal editors and readers may interpret the response rate as an indicator of the possible extent of non-respondent bias.

Given these concerns, what are the typical response rates of mail surveys published in medical journals? In conducting a survey, investigators are invariably faced with the problem of evaluating the magnitude of their response rate. Our study had three goals. First, we wanted to characterize the range and central tendency of response rates for mail surveys and, in so doing, provide some standards against which investigators, editors, and readers can judge the results. Second, we wanted to determine how the response rate varies for different subjects that are typical targets of mail surveys. And third, we wanted to evaluate the contribution of various techniques used by investigators to enhance response rates.

METHODS

To achieve a reproducible and comprehensive sample of published mail surveys, in January, 1993, we used Medline to identify articles captured with the following search strategy: (mail or postal) and (questionnaire or survey or instrument). We limited our search to articles published in United States journals in 1991. This search identified 219 articles.

*Address for correspondence: David A. Asch, M.D., Division of General Internal Medicine, 317 Ralston-Penn Center, 3615 Chestnut Street, Philadelphia, PA 19104-2676.
**Address for reprint requests: Nicholas A. Christakis, M.D., Division of General Internal Medicine, University of Chicago Medical Center, 5841 S. Maryland Avenue, MC 6098, Chicago, IL 60637.
Accepted for publication on 11 June 1997.
Published Information

Every article identified by this strategy was read by two trained readers who abstracted the information independently. Then the articles and the abstract forms were read by a third abstractor who settled disagreements.

Author Information

Because specific information about survey methods is often not included within published manuscripts, we asked the authors of the published papers to supplement or correct the information we had abstracted. To do so, in March, 1995 we mailed a copy of the completed abstraction form to the corresponding author for each manuscript, or to the first author if no corresponding author was mentioned. Mailings included a postage-paid return envelope and a cover letter providing instructions. We sent authors of papers reporting more than one survey separate abstraction forms reflecting each of the distinct surveys. We also sent authors a separate one-page questionnaire inquiring whether they felt their response rate was adequate; whether editors or reviewers of the manuscript commented on the response rate; how many journals they submitted the manuscript to before it was accepted; and their opinion of the prestige of the journal that ultimately published the paper. Authors who did not respond to the March mailing by May were sent a complete second mailing.

Eighteen of the 219 authors of manuscripts could not be contacted because of unknown addresses or, in some cases, because of death. Of the remaining 201 authors, 113 (56%) responded to the requests for additional information. There was no difference between the response rates for the main or sole survey published by authors who provided additional information and authors who did not (p = 0.94).

Definition of Response Rate

Because there are many potential sources of information from which to determine a response rate for each of the surveys in our study, we developed a hierarchical process to settle on a unique figure. This variable was set equal to the ratio of the number of surveys returned to the number of surveys distributed, using the figures as published in the manuscript. (As discussed below, there are often legitimate reasons to use other ratios to calculate the response rate.) If the numerator or denominator was absent from the manuscript, the author's value for either of these figures from the separate author survey was substituted, if available. If the numerator or the denominator was still indeterminate, the published response rate was used. Finally, if no response rate was published, the response rate reported in the author survey was used, if available.
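This hierarchy lends itself to a simple fallback rule. The following minimal Python sketch illustrates the logic as just described; the field names and the example record are hypothetical rather than drawn from the study's abstraction instrument.

    def assign_response_rate(survey):
        """Assign a single response rate per the hierarchy described above.

        1. Surveys returned / surveys distributed, as published.
        2. The same ratio, filling a missing figure from the author survey.
        3. The response rate as published in the manuscript.
        4. The response rate reported in the author survey.
        Returns a proportion, or None if every source is missing.
        """
        # Steps 1-2: prefer a ratio computed from counts, taking each count
        # from the manuscript first and the author survey second (counts are
        # assumed positive, so `or` treats only absent fields as missing).
        returned = survey.get("returned_published") or survey.get("returned_author")
        distributed = survey.get("distributed_published") or survey.get("distributed_author")
        if returned is not None and distributed is not None:
            return returned / distributed
        # Step 3: fall back to the response rate as published.
        if survey.get("rate_published") is not None:
            return survey["rate_published"]
        # Step 4: finally, the rate reported in the author survey, if any.
        return survey.get("rate_author")

    # Hypothetical record: numerator published, denominator from the author survey.
    print(assign_response_rate({"returned_published": 600, "distributed_author": 950}))  # 0.6315...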
Analysis

Using this value, we determined the mean response rates for published surveys, and bivariable and multivariable associations of survey characteristics with response rates. For multivariable analyses, we used ordinary least squares regression using response rate as the dependent variable. Because this variable is a percentage bounded by 0 and 100%, we also used as a dependent variable a logit transformation of response rate (RR), defined as log(RR/(1 - RR)).

We handled missing data for independent variables in the multivariable analyses with the method of Cohen and Cohen [1]: we assigned an imputed mean for missing values and used dummy variables to reflect the fact that some data are missing.
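Written out, with RR expressed as a proportion, the transformation mentioned above and its inverse form the standard logit pair; evaluated, for example, at the overall mean response rate of 62% reported below:

    \[
      \operatorname{logit}(RR) \;=\; \ln\frac{RR}{1-RR},
      \qquad RR \;=\; \frac{e^{x}}{1+e^{x}},
      \qquad \operatorname{logit}(0.62) \;=\; \ln\frac{0.62}{0.38} \;\approx\; 0.49 .
    \]

Because the transform maps the unit interval onto the entire real line, predictions fitted on the transformed scale cannot stray outside the bounds of a percentage when mapped back.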
Assessment of Sampling

To assess the completeness and representativeness of our search strategy, we randomly selected 10 journals from among the list of journals that had published mail surveys uncovered through our electronic search. We also randomly selected an additional 10 U.S. medical journals catalogued by Medline but not appearing on our original list. We reviewed each issue of these 20 journals for calendar year 1991 to identify any mail surveys meeting our criteria that had not appeared in the original search. This process identified 22 new mail surveys: 18 from journals that were previously on our list, and 4 from journals not on our list. The response rates among these surveys and those in the original sample were similar (p = 0.16).

RESULTS

Final Sample

The 219 articles represented 401 potential distinct mail surveys. Seventy-six of these were excluded by the abstracters because they were not performed in the U.S. (26); they were not mail surveys (46); they were not original reports (1); or for miscellaneous reasons (3). An additional four surveys were excluded because the authors, when surveyed, indicated they were performed outside the U.S. The final data set reflected 178 articles published in 111 different journals, representing 321 distinct mail surveys. Table 1 reports the respondents of these 321 surveys. The "other" category encompasses a heterogeneous group that includes, for example, families of people with psychiatric disorders, dental receptionists, parents of adopted children, women over 35, spouses of male physicians, and members of various societies and support groups. Table 2 reports the journals publishing three or more included mail surveys during 1991.

TABLE 1. Respondents of 321 mail surveys

                                            Number of surveys    Mean response rate (SD)    Comparison with surveys
                                            in data set using    among surveys using        not using these
Respondent                                  these respondents    these respondents (%)      respondents (p)a

Physician                                   68                   54 ± 17                    0.001
Dentist                                     25                   65 ± 9                     0.8
Nurse                                       24                   61 ± 23                    0.8
Other health-care worker                    41                   56 ± 24                    0.1
Administrator or official representative    43                   72 ± 18                    0.002
Patient or parent of patient                42                   60 ± 21                    0.01
Health-care worker student                  3                    79 ± 13                    0.1
Other                                       75                   60 ± 22                    0.6
Total                                       321                  62 ± 21

Abbreviation: SD = standard deviation.
a By t-test.

TABLE 2. Journals publishing three or more mail surveys in 1991

Journal                                               Frequency

American Journal of Hospital Pharmacy                 9
Journal of the American Board of Family Practice      9
Academic Medicine                                     8
Journal of Family Practice                            6
Annals of Emergency Medicine                          5
Journal of the American Geriatrics Society            5
American Journal of Epidemiology                      4
American Journal of Public Health                     4
Epidemiology                                          4
Annals of Internal Medicine                           3
Family Medicine                                       3
Journal of the American Medical Association           3
Journal of General Internal Medicine                  3
Journal of Nursing Education                          3
Pediatrics                                            3

Calculation of Response Rates

Only 192 surveys (60%) included a report of the response rate achieved; 261 (81%) indicated how many surveys were distributed; 197 (61%) indicated how many surveys were received; and 176 (55%) indicated both how many were distributed and how many were received, so that a response rate could be calculated independently by a reader. All together, 96 surveys (30%) provided neither a report of the response rate nor the information necessary to calculate one. Some of these surveys may have represented minor elements of the manuscript: for example, pre-tests of surveys. However, these proportions were little better when these minor elements were ignored and only the major or sole survey within the 178 manuscripts was considered. For example, only 141 of these surveys (79%) included a report of the response rate, and in only 135 (76%) could a response rate be calculated; a total of 21 manuscripts (12%) provided neither a report of a response rate nor the means to calculate one.

Among the 192 surveys that reported a response rate, the mean response rate was 59% ± 20% (median 59%). When additional information provided by authors was included, 210 surveys had a mean response rate of 59% ± 20% (median 58%). When information from all sources was examined (the manuscript's reported response rate, the author's additional information on response rates, or the calculated response rate from information provided either in the manuscript or by the author), 236 surveys had a mean response rate of 62% ± 21% (median 62%). Figure 1 reveals the distribution of these response rates.

As described in our methods, we used a hierarchical approach to assign a response rate to each survey. Response rates reported in manuscripts often differed from the response rate calculated by dividing the number of surveys received by the number distributed. Many of these differences reflected adjustments to account for surveys considered unusable, either because they were returned by the post office as undeliverable or because the subjects failed to meet study criteria. However, there was also great inconsistency and confusion about how to make these adjustments. Some authors deleted unusable surveys from the numerator, effectively lowering their reported response rate. Others deleted unusable surveys from the denominator, raising their reported response rate. For example, one manuscript described 249 responses to a survey of 764 surgeons as a response rate of 65% (rather than 33%) by using a denominator of 384, representing a subset of the original sample with certain desired practice characteristics. Similarly, another survey described 488 responses to a survey of 1100 emergency medicine trainees as a response rate "conservatively estimated to between 50% and 55%," because the mailing list used was old and included subjects not of interest.
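The surgeon example above can be recomputed under each convention; a short sketch with the quoted figures, in Python purely for illustration:

    returned, mailed, preferred_subset = 249, 764, 384

    # Unadjusted: all returns over all surveys mailed.
    print(round(100 * returned / mailed))            # 33 (%)

    # As reported: denominator restricted to the subset of the original
    # sample with the desired practice characteristics.
    print(round(100 * returned / preferred_subset))  # 65 (%)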

[FIGURE 1. Histogram of response rates from subject studies. Horizontal axis: response rate (%), in bins from 0-10 through 91-100; vertical axis: percentage of surveys.]

Eighty-two of 321 surveys (26%) explicitly reported whether a test for non-respondent bias was performed, and authors provided information for another 41 surveys. Of these 123 surveys, 30 (24%) did not test for non-respondent bias; 66 (54%) compared information from responders with known information about the underlying sample; 13 (11%) tested for bias directly (for example, by performing a focused re-survey of a sample of initial non-responders and comparing results with those from the original respondent group); and 11 (9%) used both techniques.

Factors Associated with Response Rates

Table 3 reports bivariable associations of survey characteristics with response rates. Higher response rates were associated with surveys of non-physicians, as shown also in Figure 2. As shown in Table 1, physicians had the lowest mean response rate among all groups examined. In addition, response rates were lower if the surveys were anonymous, and higher if they used any written reminder with an instrument or any telephone reminder. Although, surprisingly, surveys with more pages had higher response rates (r = 0.253, p = 0.005), this effect was not significant when length was measured in number of questions (r = 0.349, p = 0.08) or in number of minutes required for completion (r = -0.013, p = 0.9). Similarly, written reminders provided without an instrument were not associated with higher response rates.

No associations with response rates were found for several other variables, including presence or amount of a financial incentive, mean age of respondents, proportion of female respondents, or type of outgoing or return postage. However, because so few surveys used financial incentives, and because so few manuscripts provided information about postage, we had limited power to detect differences in response rates associated with these variables.

Table 4 reports the results of a regression model predicting response rate as a function of several independent variables suggested in the bivariable analyses. The model reveals that surveys of physicians have a response rate that is 9.5 percentage points lower than surveys of non-physicians, adjusting for other factors (p < 0.001). Providing one or more written reminders with an instrument, or one or more telephone reminders, increases the response rate by 13.8 (p = 0.001) and 13.8 (p = 0.003) percentage points, respectively.
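The page-length association above is a simple correlation between survey length and response rate. A minimal sketch of such a computation follows, using illustrative data rather than the study's; a Pearson coefficient is shown, though the original statistic may have been computed differently.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    pages = rng.integers(1, 13, size=60)                   # illustrative survey lengths
    rates = 55 + 0.8 * pages + rng.normal(0, 15, size=60)  # illustrative response rates (%)

    r, p = stats.pearsonr(pages, rates)                    # cf. r = 0.253, p = 0.005 above
    print(f"r = {r:.3f}, p = {p:.3f}")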
Mail Survey Response Rates 1133

TABLE 3. Bivariable associations of selected characteristics with mean response rates

                                             Mean response rate
                                             With this            Without this
Characteristic                               characteristic (n)   characteristic (n)   pa

Physician subject                            54% (56)             68% (180)            <0.001
Anonymous survey                             52% (71)             68% (20)             0.002
Any mailed reminder without an instrument    60% (59)             54% (50)             0.154
Any mailed reminder with an instrument       64% (90)             48% (41)             <0.001
Any telephone reminder                       77% (33)             53% (60)             <0.001
Any financial incentive                      64% (6)              60% (70)             0.660

a By t-test.
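The p-values in Tables 1 and 3 come from two-sample t-tests on per-survey response rates. A minimal sketch of one such comparison, assuming access to the per-survey rates:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Simulated per-survey response rates (%); these mimic the published
    # means and group sizes but are not the study data, and the SD for the
    # non-physician group is an assumed value.
    physician = rng.normal(54, 17, size=56)
    non_physician = rng.normal(68, 19, size=180)

    # scipy's default pools the two group variances, as a classical t-test
    # does; whether the original analysis pooled variances is not stated.
    t, p = stats.ttest_ind(physician, non_physician)
    print(f"t = {t:.2f}, p = {p:.2g}")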

[FIGURE 2. Comparison of response rates from surveys of physicians and non-physicians. Paired bars for the two groups; horizontal axis: response rate (%), in bins from 0-10 through 91-100; vertical axis: percentage of surveys in each group.]

TABLE 4. Multivariate regression model of response ratea

Variable                                    Coefficient    95% CI          R²       p

Physician subject                           -9.5           -15.1, -3.9     0.036    0.001
Anonymous survey                            -9.0           -18.3, 0.4      0.054    0.06
Any mailed reminder with an instrument      13.8           6.1, 21.6       0.062    0.001
Any telephone reminder                      13.8           4.8, 22.7       0.117    0.003

Abbreviation: CI = confidence interval.
a The intercept and the dummy variables used to code for missing data are not shown. The R² for this model is 0.27.
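Table 4's model is an ordinary least squares regression with the Cohen-and-Cohen missing-data handling described in Methods. The sketch below shows one way such a model might be set up; the data frame, its values, and the column names are hypothetical, and the code illustrates the general approach rather than reproducing the analysis itself.

    import pandas as pd
    import statsmodels.api as sm

    def impute_with_indicators(df, cols):
        """Fill missing predictor values with the column mean and add a
        dummy flagging the imputed rows (after Cohen and Cohen [1])."""
        out = df.copy()
        for c in cols:
            if out[c].isna().any():
                out[c + "_missing"] = out[c].isna().astype(int)
                out[c] = out[c].fillna(out[c].mean())
        return out

    # Hypothetical abstraction data, one row per survey (rates in %):
    df = pd.DataFrame({
        "response_rate":            [54, 71, 35, 62, 48, 80, 66, 45, 73, 58],
        "physician":                [1, 0, 0, 1, 1, 0, 0, 1, 0, 1],
        "anonymous":                [0, 0, 1, None, 1, 0, 0, 1, 0, 0],
        "reminder_with_instrument": [0, 1, 1, 0, None, 1, 1, 0, 1, 0],
        "telephone_reminder":       [0, 1, 0, 0, 0, 1, 0, 0, 1, 0],
    })

    predictors = ["physician", "anonymous", "reminder_with_instrument",
                  "telephone_reminder"]
    imputed = impute_with_indicators(df, predictors)
    dummies = [c for c in imputed.columns if c.endswith("_missing")]
    X = sm.add_constant(imputed[predictors + dummies])
    fit = sm.OLS(imputed["response_rate"], X).fit()
    print(fit.params)      # coefficients in percentage points, as in Table 4
    print(fit.conf_int())  # 95% confidence intervals

The logit re-check described in the text amounts to refitting the same design matrix with log(RR/(1 - RR)) as the dependent variable.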

Anonymous surveys had response rates 9.0 percentage points lower than non-anonymous surveys (p = 0.06). After adjusting for other factors, the number of pages in the survey was no longer associated with response rate, and this variable was excluded from the model presented in Table 4. These results were largely unchanged when a logit transformation was applied to the dependent variable (to adjust for its truncated distribution), except that the effect of anonymity, which was previously marginally significant, became significant at the p < 0.05 level. When the same variables were included in a model to predict response rates in surveys of physicians, only the use of a reminder with an instrument was significantly associated with an increased response rate, again of about 13 percentage points.

Authors' Comments about Response Rates

Authors of only 15 of 130 surveys (12%) felt those surveys had response rates inadequate for their purposes. The mean response rate for the eight of these surveys for which a response rate was determinate was 35%, significantly lower than the mean response rate of 59% for the 93 articles felt by authors to have adequate response rates (p = 0.002). When only the major or sole survey of a manuscript was considered, authors of only 5 of 83 surveys (6%) felt those surveys had inadequate response rates.

Fifty-six authors responding to the author survey felt their study was published in a journal they rated as in the top third of its field. These studies had a mean response rate of 59%. Twenty-six authors felt the journal was in the middle or bottom third. These studies had a mean response rate of 50% (p = 0.073 for the comparison).

Seventy-six authors reported receiving editorial or reviewer comments regarding their manuscript prior to publication. Of these, 46 (61%) received no comments about their response rate; 8 were told it was low; 9 were told it was adequate; and 7 were told it was high. Six received conflicting comments from different individuals in the review process.

Sixty-seven of 81 authors (83%) reported that their manuscript was published in the first journal to which they submitted it. The remaining 14 reported that their manuscript was published in the second journal they tried. Of these 14, only 3 authors attributed their manuscript's initial rejection to a poor response rate. There was no difference in the mean response rates of manuscripts accepted on the first or second try.

DISCUSSION

In this study, we have described the response rates reported in articles published in medical journals. This study has several important findings.

First, although there is wide variation in the response rates for mail surveys published in medical journals, the mean response rate is approximately 62%. Surveys of physicians have lower response rates, with a mean of 54%, and those of non-physicians have higher response rates, with a mean of 68%.

Second, certain techniques and survey characteristics are associated with higher response rates. Previous research has demonstrated that response rates increase if subjects are offered monetary incentives [2-4] or if surveys are delivered by certified mail or non-U.S. Postal Service carriers [5]. These strategies, however, can be implemented only by increasing costs. Other investigators have demonstrated that response rates can be improved by using stamped rather than metered return envelopes [6,7], different types of outgoing envelopes [8], or prepaying financial incentives rather than paying subjects on completion [9-11]. These techniques may improve response rates without increasing costs.

Several published meta-analyses, typically using pooled results from a number of survey design experiments, have catalogued these techniques [12-15]. In contrast, our study involved the review of actual survey results from a variety of settings. This method allows us to adjust for the effects of multiple variables. We found that telephone reminders and written reminders provided with an instrument were associated with higher response rates. Of note, both interventions raised response rates by about 13 percentage points. This result may be of particular value since mailed reminders are often much cheaper and easier on investigators and subjects than are telephone reminders. Unlike one prior study that concluded that longer surveys yield lower response rates [16], but similar to another [17], we found no consistent association between survey length and response rate after adjusting for other factors. Moreover, unlike several prior studies reporting that financial incentives improve response rates [18-20], we did not find such an association; however, we observed a trend in this direction, and had limited power.

We did find that anonymous surveys had lower response rates. Although one might think that anonymity would make target subjects more comfortable in responding, and therefore enhance response rates, target subjects might also feel more comfortable not responding if they know their failure to respond will remain undetected. Moreover, anonymous surveys are likely to be those that are more sensitive in the first place, and those more prone to non-response.

Third, the information an article provides about response rates is in part a function of the editorial review process. The finding that so many published studies contained insufficient information to calculate a response rate identifies an area for improvement in editorial standards.

Fourth, calculating a response rate is more difficult than it may appear. The crudest measure divides the number of surveys received by the number sent. However, this measure ignores several other factors that may be important in interpreting the results. Such factors include the number of surveys undeliverable because of bad addresses, the number

considered unusable because the subjects fail to meet study criteria, and the number considered unusable because they are incomplete. Authors of the surveys we studied variously ignored these terms, or subtracted them from the denominator or the numerator of the response rate calculation. In different circumstances, each of these approaches might seem appropriate.

There are a number of methods to compute the "true" response rate in such a mail survey. All seek to take advantage of the fact that not all non-respondents were actually eligible for the study. Typically, information about individuals known to be ineligible is used to revise the denominator for response rate computation [21].

For example, consider a mail survey of 1000 subjects in which 50 instruments are returned by the post office as undeliverable and 600 completed surveys are returned by respondents. In most cases, the best measure of the response rate would reflect that only 950 subjects had the opportunity to respond, and so the response rate could be reported as 600/(1,000 - 50) = 63%. However, this adjustment might misrepresent the circumstances if the targets of those 50 undeliverable surveys were systematically different from the others.

Similarly, what if only 400 of the 600 completed instruments were completed by subjects meeting study criteria? Reporting a figure of 400/950 = 42% would seem to understate the response rate. On the other hand, reporting a figure of 600/950 = 63% would be appropriate only given reason to believe that the proportion of ineligibles (1/3 in this case) was the same in both respondent and non-respondent pools. Investigators might know otherwise. For example, if women are the population of interest, and the investigators know that women represent half of the surveyed pool, a better figure to report might be 400/(950 × 50%) = 84%.
Finally, while it is customary to present a response rate for a survey as a whole, when many questionnaires are incomplete it may be appropriate to report separate response rates for individual questions of special importance or those that might be extremely susceptible to non-respondent bias.

The level of art and interpretation in calculating response rates reflects the indirect and therefore limited use of the response rate in evaluating survey results. So long as one has sufficient cases for statistical analyses, non-response to surveys is a problem only because of the possibility that respondents differ in a meaningful way from non-respondents, thus biasing the results [22,23]. Although there are more opportunities for non-response bias when response rates are low than high, there is no necessary relationship between response rates and bias. Surveys with very low response rates may provide a representative sample of the population of interest, and surveys with high response rates may not.

Nevertheless, because it is so easy to measure response rates, and so difficult to identify bias, response rates are a conventional proxy for assessments of bias. In general, investigators do not seem to help editors and readers in this regard. As we report, most published surveys make no mention of attempts to ascertain non-respondent bias. Similarly, some editors and readers may discredit the results of a survey with a low response rate even if specific tests limit the extent or possibility of this bias.

Fairer questions to ask when evaluating survey research are whether or not bias is likely to be present, whether the researchers investigated this possibility, and whether any bias that could be present might meaningfully affect the conclusions. Focusing on these questions, rather than on reports of response rates, is particularly important for surveys of physicians and other groups who are especially difficult to recruit.

This work was supported by a grant from the University of Pennsylvania Research Foundation. Dr. Asch is the recipient of a Department of Veterans Affairs Health Services Research and Development Service Career Development Award. Dr. Christakis was the recipient of an NRSA Fellowship from the Agency for Health Care Policy and Research.

We are grateful to Christine N. Weeks for research assistance.

References

1. Cohen J, Cohen P. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum Associates; 1975.
2. Fox RJ, Crask MR, Kim J. Mail survey response rate: A meta-analysis of selected techniques for inducing response. Public Opinion Quarterly 1989; 52: 467-491.
3. Camuñas C, Alward RR, Vecchione E. Survey response rates to a professional association mail questionnaire. J NY State Nurses Assoc 1990; 21: 7-9.
4. Church AH. Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly 1993; 57: 62-79.
5. Rimm EB, Stampfer MJ, Colditz GA, Giovannucci E, Willett WC. Effectiveness of various mailing strategies among nonrespondents in a prospective cohort study. Am J Epidemiol 1990; 131: 1068-1071.
6. Choi BCK, Pack AWP, Purdham JT. Effects of mailing strategies on response rate, response time, and cost in a questionnaire study among nurses. Epidemiology 1990; 1: 72-74.
7. Shiono PH, Klebanoff MA. The effect of two mailing strategies on the response to a survey of physicians. Am J Epidemiol 1991; 134: 539-542.
8. Asch DA, Christakis NA. Different response rates in a trial of two envelope styles in mail survey research. Epidemiology 1994; 5: 364-365.
9. Berry SH, Kanouse DE. Physician response to a mailed survey: An experiment in timing of payment. Public Opinion Quarterly 1987; 51: 102-114.
10. Weiss LI, Freidman D, Shoemaker CL. Prepaid incentives yield higher response rates to mail surveys. Marketing News 1985; 19: 30-31.
11. Schweitzer M, Asch DA. Timing payments to subjects of mail surveys: Cost-effectiveness and bias. J Clin Epidemiol 1995; 48: 1325-1329.
12. Fox RJ, Crask MR, Kim J. Mail survey response rate: A meta-analysis of selected techniques for inducing response. Public Opinion Quarterly 1989; 52: 467-491.
13. Yammarino FJ, Skinner SJ, Childers TL. Understanding mail survey response behavior: A meta-analysis. Public Opinion Quarterly 1991; 55: 613-639.

14. Yu J, Cooper H. A quantitative review of research design effects on response rates to questionnaires. J Marketing Res 1983; 20: 36-44.
15. Armstrong JS, Lusk EJ. Return postage in mail surveys: A meta-analysis. Public Opinion Quarterly 1987; 51: 233-248.
16. Yammarino FJ, Skinner SJ, Childers TL. Understanding mail survey response behavior: A meta-analysis. Public Opinion Quarterly 1991; 55: 613-639.
17. Yu J, Cooper H. A quantitative review of research design effects on response rates to questionnaires. J Marketing Res 1983; 20: 36-44.
18. Church AH. Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly 1993; 57: 62-79.
19. James JM, Bolstein R. Large monetary incentives and their effect on mail survey response rates. Public Opinion Quarterly 1992; 56: 442-453.
20. Mizes S, Fleece L, Roos C. Incentives for increasing return rates: Magnitude levels, response bias, and format. Public Opinion Quarterly 1984; 48: 794-800.
21. Aday LA. Designing and Conducting Health Surveys. San Francisco: Jossey-Bass; 1989.
22. Groves RM, Cialdini RB, Couper MP. Understanding the decision to participate in a survey. Public Opinion Quarterly 1992; 56: 475-495.
23. Brennan M, Hoek J. The behavior of respondents, nonrespondents, and refusers across mail surveys. Public Opinion Quarterly 1992; 56: 530-535.
