
MODULE 6: PRIMARY DATA GATHERING – SURVEY

Source: Introduction to Survey Methodology

Sampling Error - Sampling errors stem from the sampling method used. They are easy to deal with mathematically and can be readily estimated and reduced by increasing the sample size.

 Researchers must initially identify their population of interest and then clearly define their unit of analysis and what elements will best serve the aim of their study.
 By using non-probability sampling, bias will naturally be introduced into the research, and no generalization will be possible.
 Using probability sampling that relies on the principle of randomness will provide a more representative sample, one that better reflects the target population and thus enables generalizations from the sample to the larger population.

Non-Sampling Error - Non-sampling errors have typically been seen as too difficult to estimate, and it has been assumed that their effect on the results would be minimized if samples were big enough and properly representative.

 Non-sampling errors, on the other hand, tend to be more complex, and they require researchers’ detailed attention, as they may creep into each and every stage of the data collection.
 Non-sampling errors can come from a multitude of sources—it is safe to say they can comprise about 95% of the TSE.
 Response Error - Response error can stem from, among other things, social desirability, visibility, the degree of sensitivity of a specific item, the order of the questions, and the way in which a specific item is constructed.
 Non-Response Error - Non-response errors can stem from simple refusals to answer questions, or they can come as a result of a failure to locate participants who were originally sampled when it comes time to complete the study. Non-response can occur for the entire survey, but it can also occur for specific questionnaire items.

Total Survey Error

 The combined total of both sampling and non-sampling errors; it should be the dominant paradigm for developing, analyzing, and understanding surveys and their results.
 The sum of the squares of the sampling error and the non-sampling error equals the square of the total survey error—in short, the TSE is larger than either of its components taken alone.
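Written as a formula (SE and NSE are shorthand for the sampling and non-sampling error components; the abbreviations are introduced here for convenience and are not from the source):

```latex
\mathrm{TSE}^{2} = \mathrm{SE}^{2} + \mathrm{NSE}^{2}
\qquad\Longrightarrow\qquad
\mathrm{TSE} = \sqrt{\mathrm{SE}^{2} + \mathrm{NSE}^{2}}
```

For instance, a sampling error of 2 and a non-sampling error of 3 (on the same scale) would give a TSE of about 3.6, larger than either component alone but smaller than their simple sum.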
Source: Classification of Surveys

Classification Criteria

1. Who: The Population
2. What: The Topic
3. By Whom: Survey Agency and Sponsor
4. How: Survey Mode
5. When: Cross-Sections and Panels
6. Where: Regional, National, Cross-National and International Surveys

Who: The Population

 Target Population – finite set of the elements (usually persons) that will be studied in a survey.
 Frame Population - set of persons for whom some enumeration can be made prior to the selection of the survey sample, i.e. who can be listed in the sampling frame.
 Survey Population - set of people who, if they have been selected for the survey, could be respondents.
 Unit Non-Response - failure to collect data from units belonging to the frame population and selected to be in a sample.
 Response Rate – percentage of selected units who participate in the survey.
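A minimal sketch of how unit non-response and the response rate relate in practice (the counts below are invented for illustration, not data from the source):

```python
# Hypothetical counts for illustration only.
selected_units = 500        # units drawn from the sampling frame
completed_interviews = 410  # units that actually participated
unit_nonresponse = selected_units - completed_interviews

response_rate = completed_interviews / selected_units * 100
print(f"Unit non-response: {unit_nonresponse}")   # 90
print(f"Response rate: {response_rate:.1f}%")     # 82.0%
```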
What: The Topic

 Omnibus Survey - data on a wide variety of subjects is collected during the same interview.
 Objective Questions - home turf of official statistics; cover issues like labour situation, education, living conditions, health, etc.
 Subjective Questions - collect information on values, attitudes, and the like.

When: Cross Sections and Panels

 Cross Sections – data are collected only once.
 Repeated Cross Sections - the survey is repeated at regular intervals.
 Longitudinal Panel – the same group of respondents is approached at regular time intervals.
 Rotating Panel - New panel members participate in a fixed number of waves. A group of new panel members is recruited for each wave, making it possible to draw conclusions on individual changes and on the population at a given moment.

Where: Regional, National, Cross-National and International Surveys

 Regional Survey - A survey among the inhabitants of a specific community.
 National Survey - When all inhabitants of a country belong to the target population.
 International Survey - A sample is drawn from multiple countries, and the target population is the combined population of the countries under study.
 Cross-National Survey - An independent sample is drawn in each participating country, and the results of the national data files are combined afterwards into a harmonized cross-national data file.
 Input Harmonization - the instrument is as identical as possible in each participating country: the same fieldwork approach, the same survey mode, the same questions (but translated), and so forth.
 Output Harmonization - allows countries to use their preferred survey mode.
 Ex-Post Output Harmonization - different questions can be used, or some countries can derive variables from questionnaires and others from registers, as long as the same definitions are used.
 Ex-Ante Output Harmonization – the questionnaire has to be identical in each country, but the data collection method may differ.

Source: Questionnaire Design and Surveys Sampling

Statistical Inference - taking a random sample from a population and then using the information from the sample to make inferences about particular population characteristics such as the mean (a measure of central tendency), the standard deviation (a measure of spread), or the proportion of units in the population that have a certain characteristic.
Sampling Distribution - used to describe the distribution of outcomes that one would observe from replication of a particular sampling plan.

Standard Deviation - a number that indicates how much, on average, each of the values in the distribution deviates from the mean (or center) of the distribution.

Variance - the average of the squared deviations about the mean.

Confidence Interval - used to express the uncertainty in a quantity being estimated.
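The definitions above can be made concrete with a small numeric sketch in Python (the sample values are invented for illustration; the 1.96 multiplier assumes an approximate 95% confidence interval based on the normal distribution):

```python
import math
import statistics

# Hypothetical sample: ages of 10 survey respondents.
sample = [34, 41, 29, 50, 38, 45, 33, 40, 36, 44]

n = len(sample)
mean = statistics.mean(sample)          # measure of central tendency
variance = statistics.variance(sample)  # average squared deviation (sample variance, n-1)
sd = statistics.stdev(sample)           # how much values deviate from the mean, on average

# Approximate 95% confidence interval for the population mean:
# sample mean +/- 1.96 * standard error of the mean.
standard_error = sd / math.sqrt(n)
ci_low, ci_high = mean - 1.96 * standard_error, mean + 1.96 * standard_error

print(f"mean={mean:.1f}, variance={variance:.1f}, sd={sd:.1f}")
print(f"95% CI for the mean: ({ci_low:.1f}, {ci_high:.1f})")
```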
Sources of Errors

 The use of an inadequate frame.
 A poorly designed questionnaire.
 Recording and measurement errors.
 Non-response problems.

Sampling Techniques

1. Random Sampling - a random sample of size n is drawn from a population of size N.

2. Stratified Sampling - can be used whenever the population can be partitioned into smaller sub-populations, each of which is homogeneous with respect to the particular characteristic of interest.

3. Cross-Sectional Sampling - in a cross-sectional study, a defined population is observed at a single point in time or during a single time interval.

4. Quota Sampling - availability sampling, but with the constraint that proportionality by strata be preserved.
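A brief sketch contrasting simple random sampling and stratified sampling (the frame, the region strata, and the sample size of 100 are fabricated for illustration; `random.seed` is used only to make the example reproducible):

```python
import random
from collections import defaultdict

random.seed(42)  # reproducibility of the illustration only

# Hypothetical sampling frame of 1,000 people, each tagged with a region (the stratum).
population = [
    {"id": i, "region": random.choice(["North", "South", "East", "West"])}
    for i in range(1000)
]

# 1. Simple random sampling: every unit has the same chance of selection.
simple_sample = random.sample(population, 100)

# 2. Stratified sampling: partition the frame into sub-populations (here, by region),
#    then draw a proportional random sample within each stratum.
strata = defaultdict(list)
for person in population:
    strata[person["region"]].append(person)

stratified_sample = []
for region, members in strata.items():
    n_stratum = round(100 * len(members) / len(population))  # proportional allocation
    stratified_sample.extend(random.sample(members, n_stratum))

print(len(simple_sample), len(stratified_sample))  # roughly 100 units in each design
```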
Source: Designing a Survey

1. Choice of a Topic
 Descriptive Research - merely gives the distribution of responses of people on some specific questions, such as satisfaction with the economy, the government, and the functioning of democracy.
 Explanatory Research - aims to determine the reasons for satisfaction with the government or the popularity of a politician (experimental or non-experimental).

2. Choice of the Most Important Variables
 In the case of a descriptive study, it is directly determined by the purpose of the study.
 In the case of an explanatory study, it makes sense to develop an inventory of possible causes and to develop from that list a preliminary model that indicates the relationships between the variables of interest.

3. Choice of a Data Collection Method
 Modes of data collection differ in their cost of data collection: personal interviewing is the most expensive, telephone interviewing is less expensive, and mail interviewing is the cheapest.

4. Choice of Operationalization
Operationalization - translation of the concepts into questions.

5. Test of the Quality of the Questionnaire
 Check on face validity
 Control of the routing in the questionnaire
 Prediction of the quality of the questions with some instrument
 Use of a pilot study to test the questionnaire

6. Formulation of the Final Questionnaire
 After corrections in the questionnaire have been made, the ideal scenario would be to test the new version again.

7. Choice of Population and Sample Design
 Sampling - procedure to select a limited number of units from a population in order to describe this population.
 Sampling Frame - a list of names and addresses of potential respondents.

8. Decide About the Fieldwork
 Number of interviews for each interviewer
 Number of interviewers
 Recruitment of interviewers: where, when, and how
 How much to pay: per hour/per interview
 Instruction: kind of contacts, number of contacts, when to stop, and administration
 Control procedures: interviews done/not done
 Registration of incoming forms
 Coding of forms
 Necessary staff

Source: A Step-by-Step Guide to Developing Effective Questionnaires and Survey Procedures for Program Evaluation & Research

Survey Errors

1. Sampling Error (How representative is the group being surveyed?)
2. Frame Error (How accurate is the list from which respondents are drawn?)
3. Selection Error (Does everyone have an equal chance of being selected to respond?)
4. Measurement Error (Is the questionnaire valid and reliable?)
5. Non-response Error (How is the generalizability of findings jeopardized because of subjects who did not reply?)

Steps

1. Determine the purpose

2. Decide what you are measuring

3. Who should be asked?
 “sampling error” is reduced by using a large, random sample or conducting a census;
 “frame error” is minimized by making sure the list of potential subjects is current and accurate;
 “selection error” is avoided by eliminating duplication from these lists.

4. Consider the audience
 Age
 Education level
 Familiarity with tests & questionnaires
 Cultural bias/language barrier

5. Choose an appropriate data collection method
 Mailed
 Telephone
 Personal (face-to-face) interview
 Web-based

6. Choose a collection procedure
 Confidential - Name or other identifiers are used to follow up with nonrespondents or to match data from pre-tests/post-tests.
 Anonymous - Name is not asked of respondents.

7. Choose measurement scale and scoring
 Fixed-Response
 Open-Ended (Narrative Response)

8. Title the questionnaire
 Include a brief purpose of the study (one sentence or phrase)
 Consider including a simple graphic that depicts the purpose of the evaluation or program

9. Start with non-threatening questions

10. Include simple instructions

11. Use plain language

12. Be brief

13. Put the most important questions first

14. Make sure questions match the measurement scale selected, and answer categories are precise
 Make sure answer choices correspond to the questions, both in content and grammar.
 Use exact numbers when possible (instead of Frequently, Rarely).
 Explain the “rule” being used with clear instructions, and apply the rule consistently throughout the questionnaire.
 Define time frames if necessary.
 If you are using a continuum scale with numbers to represent concepts, make sure to “anchor” at least the top and bottom of the scale with terms that describe the meanings of the numbers (for example, 1 = Low, 10 = High).
 Balance the “negative” or “low” answer choices (both in number and degree) with “positive” or “high” choices on the scale.
 An even number of answer choices doesn’t give the respondent an easy “middle” choice. If you want to offer a “neutral” or “no opinion” choice, then do it by design, not by accident.

15. Ask only one question at a time

16. Avoid loaded questions

17. Arrange in a logical order

18. Minimize open-ended questions

19. Provide space to tell more

Source: Writing Good Questions

Reliability - refers to the consistency in responses across different respondents in the same situations.
Validity - refers to the extent to which the measure we are using accurately reflects the concept we are interested in.

Key Elements of Good Questions

1. Specificity - Does the information targeted by the question match the target of the needed information?

2. Clarity - The core vocabulary of the survey question should be attuned to the level of understanding of the participants.

3. Brevity - Questions should be stated in as straightforward and uncomplicated a manner as possible, using simple words rather than specialized ones and using as few words as possible to pose the question.

Common Question Pitfalls

1. Double-Barreled Questions - These are created when two different topics are specified in the question, essentially asking the respondent two questions in one sentence.

2. Loaded/Leading Questions - These originate when question wording directs a respondent to a particular answer or position.

3. Questions with Built-In Assumptions - Some questions contain assumptions that must first be considered either true or false in order to answer the second element of the question.

4. Double-Negative Questions - Questions that include two negatives not only confuse the respondent, but they may also create a level of frustration resulting in nonresponse.

Question Types

1. Open-Ended Questions
 The respondent’s answer could obviously cover many different areas, including some the researchers had not previously considered.
 Particularly well-suited to exploring a topic or to gathering information in an area that is not well known.

2. Close-Ended Questions
 The format of closed-ended questions is more defined and has been standardized to a greater extent than the open-ended style.
 Allow researchers to provide greater uniformity to the responses and to easily determine the consensus on certain items, but only on those items that were specified by the answers provided.

Source: TIPS FOR DEVELOPING SURVEY INSTRUMENTS/QUESTIONNAIRES

Instrument Types

 Pre-Test/Post-Test - used when you would like to observe whether a desired change occurred as a result of your efforts.
 Focus Groups - small, deliberately chosen groups of people that are interviewed together to observe reactions to the topics you propose.
 Surveys - sets of standardized questions (questionnaire) that are administered to selected individuals or groups of individuals.

Question Types
 Close-Ended Questions - those that list pre-set answers for respondents.
 Open-Ended Questions - those that do not provide pre-set answers, allowing respondents to reply in their own words.
 Scales - a social science research technique used to measure the qualitative aspects of the group of people you need information from.
 Likert Scales - a type of scale that asks respondents to indicate the level at which they agree or disagree (generally, from ‘strongly agree’ to ‘strongly disagree’) with a statement.

Measurement Concepts

 Validity - refers to whether or not your survey (or other form of measure) is actually measuring what it is supposed to.
 Reliability - when your survey (or survey question) produces consistent results when used to measure the same thing over and over.
 Bias - an unfair preference for, or dislike of, something.

Survey Design Matrix

1. Indicator data/question to be answered
 This will most likely come from the indicator you are reporting against, or perhaps a question your team or the client wants answered.

2. Information Source
 Who can answer the question?

3. Sampling
 Will we be able to ask everyone to answer the question? If not, how will we choose?

4. Data Collection Mode
 Face-to-face? Through email? The internet? Phone?

5. Data Collection Period
 After the exchange ends? After a workshop session is completed?

6. Survey Questions
 What question(s) will your survey ask to answer the question?

7. Survey Testing
 What information did pre-testing the survey reveal? Do we need to clarify wording? Were any questions not answerable?

Survey Instructions

 Who will use the information
 How the information will be used
 Whether the responses will be anonymous
 The approximate time the survey will take to complete
 Assure respondents of anonymity and confidentiality (if applicable)

Rating Scales

1. Numeric rating scales - where respondents are asked to rate a topic based on a set of numbers;

2. Graphic rating scales - which look at behaviors or performance (i.e., leadership, teamwork, performance);

3. Descriptive graphic rating scales - where respondents are asked to place a mark along a line that depicts one extreme to the other.
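A small sketch of how Likert-style and numeric rating responses are typically coded and summarized for analysis (the response data and the 1–5 coding scheme below are illustrative assumptions, not prescribed by the sources above):

```python
import statistics

# Assumed coding: map Likert labels onto a 1-5 numeric rating scale.
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Hypothetical responses to one statement from six respondents.
responses = ["agree", "strongly agree", "neutral", "agree", "disagree", "agree"]

scores = [LIKERT_CODES[r] for r in responses]
print(f"mean score: {statistics.mean(scores):.2f}")                           # central tendency
print(f"% agreeing: {100 * sum(s >= 4 for s in scores) / len(scores):.0f}%")  # share of 4s and 5s
```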