
Modern Marketing Research: Concepts, Methods, and Cases, 2nd Edition (Feinberg, Kinnear, Taylor)

Solutions Manual
CHAPTER 6
DESIGNING SURVEYS AND DATA COLLECTION INSTRUMENTS

CHAPTER OUTLINE

Function and Importance of Questionnaires pg 264

A questionnaire is a formalized method for collecting data from respondents by measuring past
purchase and usage behavior, attitudes and opinions, intentions, awareness and knowledge,
ownership, or a variety of respondent characteristics.

Measurement error is a serious problem in questionnaire construction; when a preference
question is asked without posing realistic alternatives or trade-offs, results can be misleading.

Questionnaire Components pg 266

A questionnaire typically comprises five sections:


1. identification data (such as the respondent’s name, address, or phone number)
2. request for cooperation
3. instructions
4. information sought
5. classification data (characteristics of the respondent, primarily “geodemographic” data)
The information sought forms the major, and invariably the longest, portion of the questionnaire.

© 2013 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly
accessible website, in whole or in part.
Questionnaire Design pg 267

Best practices in questionnaire design:


• review preliminary considerations
• decide question content
• decide response format
• decide question wording
• decide question sequence
• decide physical characteristics
• pre-test and revise
• make final draft

Preliminary Considerations: Getting Started


• decisions already made = basis for questionnaire decisions
o detailed listing of information needs
o type of research design
o sources of data
o clear definition of the target population (characteristics of respondents)
o detailed sampling plan
o measurement scales and communication media specified
o nature of the research findings visualized
• link between information needs and data to be collected
o each question should relate to a specific information need

Decide on Question Content: What, Exactly, Do We Need to Know?


Ways respondent might fail to answer a question accurately or at all:
• respondent unable to provide the data
• respondent uninformed
• respondent forgetful
o unaided recall
o aided recall
o recognition
• respondent misremembers
• willingness to respond accurately
o counterbiasing statement
o indirect statement—refer to “other people”
o labeled response categories
o randomized response technique
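The randomized response technique listed above can be illustrated with a short simulation. This sketch uses the forced-response variant; the function names, the coin probability, and the example population are illustrative assumptions, not details from the text.

```python
import random

def collect_randomized_responses(true_answers, p_truth=0.5, seed=42):
    """Forced-response variant: each respondent privately flips a coin.
    With probability p_truth they answer the sensitive question
    truthfully; otherwise they must answer "yes" regardless of the
    truth, so no single answer reveals an individual's true status."""
    rng = random.Random(seed)
    return [truth if rng.random() < p_truth else True
            for truth in true_answers]

def estimate_true_proportion(observed, p_truth=0.5):
    """P(observed "yes") = p_truth * pi + (1 - p_truth), solved for pi."""
    p_yes = sum(observed) / len(observed)
    return (p_yes - (1 - p_truth)) / p_truth

# A population in which 20% truly hold the sensitive attribute
population = [i % 5 == 0 for i in range(10_000)]
observed = collect_randomized_responses(population)
print(round(estimate_true_proportion(observed), 2))  # close to 0.20
```

Because the interviewer never learns the coin's outcome, respondents can answer honestly about a sensitive topic while the researcher recovers only the aggregate proportion.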

Decide on Response Format


1. open-ended questions—free-response, usually in respondent’s own words

Advantages:
• allow general attitudes to be expressed, which can aid in interpreting later, more
structured questions

• establish rapport and gain the respondent’s cooperation
• exert minimal influence on subsequent responses, because respondents are not biased by
a predetermined set of response alternatives and can freely express views divergent from
the researcher’s expectations
• can provide the researcher with insights, side comments, and explanations that are useful
in developing a “feel” for the research findings
• quotations from open-ended questions (called “verbatims”) add a sense of realism and
life to the more structured research findings
• useful for exploratory research purposes

Disadvantages:
• high potential for interviewer bias
• time and cost associated with coding the responses
• implicit extra weight given to respondents who are more articulate
• higher effort and time commitment required of respondents

2. multiple-response questions—require the respondent to choose an answer from an
explicit, codified list

Advantages:
• reduce interviewer bias (in interpreting verbal responses)
• reduce effort respondents must put into replying
• reduce cost and time associated with data processing
• easy and quick to administer
• can limit responses to the set of interest

Disadvantages:
• design of questions requires considerable time and cost

Issues in multiple-response question design


• number of alternatives—collectively exhaustive, usually mutually exclusive
• the alternatives included
• position bias—bias toward the central position of a number range, toward first idea on a
list
o alternate order of alternatives
o split design, where half of respondents see one scale, and the other half a different
one
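The two mitigation tactics above (alternating order and a split design) can be sketched in a few lines; the brand names and the respondent-ID assignment scheme are hypothetical:

```python
ALTERNATIVES = ["Brand A", "Brand B", "Brand C", "Brand D"]

def split_design_order(respondent_id, alternatives=ALTERNATIVES):
    """Split design: even-numbered respondents see the original order,
    odd-numbered respondents see it reversed. Comparing answers from
    the two halves shows whether position bias is present."""
    if respondent_id % 2 == 0:
        return list(alternatives)
    return list(reversed(alternatives))

def rotated_order(respondent_id, alternatives=ALTERNATIVES):
    """Alternating order: rotate the starting point across respondents
    so every alternative appears in every list position equally often."""
    k = respondent_id % len(alternatives)
    return alternatives[k:] + alternatives[:k]
```

Either scheme spreads each alternative across list positions, so a systematic preference for the first (or central) option can be detected and averaged out.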

3. dichotomous questions—allow the respondent a choice of only two responses: “yes or
no,” “did or did not,” “agree or disagree,” often with a third, neutral alternative added

Advantages:
• all the advantages of multiple-response questions
• quick and easy to administer
• respondents understand

• little chance of interviewer bias
• responses are easy to code, process, analyze and report
• binary responses can be analyzed using powerful statistical methods

Disadvantages:
• may miss many grades of feeling
• can lead to substantial measurement error
• especially susceptible to error resulting from how they are worded—positive or negative
posture of the question
Main issue in dichotomous question design: whether to include a neutral response alternative
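The note above that binary responses admit powerful statistical methods can be made concrete with a normal-approximation confidence interval for a proportion; the counts used here are made up for illustration.

```python
import math

def proportion_ci(yes_count, n, z=1.96):
    """95% normal-approximation confidence interval for the share of
    "yes" answers to a dichotomous question."""
    p = yes_count / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return (p - half_width, p + half_width)

# e.g., 312 of 500 respondents answered "yes"
low, high = proportion_ci(yes_count=312, n=500)
print(f"{low:.3f} to {high:.3f}")  # roughly 0.582 to 0.666
```

Even this simplest analysis shows why coded yes/no data are convenient: the sampling error of the estimate follows directly from the counts.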

Coding Verbal Data from Open-Ended Questions (feature on pg 275)


• Step 1: Two people independently read through all verbal responses
• Step 2: Each reader forms a list of key words useful for categorization
• Step 3: Together with third person who has not taken part thus far, readers merge their
lists
• Step 4: A fourth and fifth person sort the responses into the given categories
• Step 5: Together with a sixth person, sorters agree to final categorization scheme

Decide on Question Wording


Respondents will construct answers to the questions the interviewer chooses to ask, in the
formats allowed by the questions:
• use simple language
• use unambiguous words
o does the word truly convey what the researchers intended?
o can respondents extrapolate any alternative meaning?
o if so, does context help make the intended meaning clear?
o is there any word with similar pronunciation or spelling that could be confused
with it?
o could we use a simpler word or phrase instead?
• avoid leading questions
• avoid biasing questions
• avoid implicit alternatives
• avoid implicit assumptions
• avoid estimates
• avoid double-barreled questions
• consider frame of reference

Decide on Question Sequence


• sequence questions to retain respondent interest without introducing bias
• use an intriguing, readily understood opening question
• ask general questions first
• place uninteresting and difficult questions late in the sequence
• arrange questions in logical order

Decide on Physical Characteristics
• quality of the paper and printing
• professional formatting and graphics
• name of the organization sponsoring the survey and the project name on the first page

Carry Out Pre-testing and Revision, and Make the Final Draft
Before the questionnaire is ready for field operations, it needs to be pre-tested and revised.

Computer-Aided Questionnaire Design pg 290

Special design programs for questionnaires provide predefined question formats for many kinds
of attitude scales, paired comparisons, and demographics. They allow the designer to specify
question switching and skip patterns based on previous answers, to randomize the order of
presentation of brand names or other questions, to reverse positive and negative scale directions,
and to custom-tailor standard question formats.
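The skip and branching logic described above can be modeled as a simple rule table; the question IDs and rules below are hypothetical and not taken from any particular software package:

```python
# (current question, answer) -> next question; None matches any answer.
SKIP_RULES = {
    ("Q1_owns_dishwasher", "no"): "Q9_demographics",  # skip usage block
    ("Q1_owns_dishwasher", "yes"): "Q2_brand",
    ("Q2_brand", None): "Q3_satisfaction",
    ("Q3_satisfaction", None): "Q9_demographics",
}

def next_question(current_id, answer):
    """Prefer an answer-specific rule; fall back to the wildcard rule."""
    specific = SKIP_RULES.get((current_id, answer))
    if specific is not None:
        return specific
    return SKIP_RULES.get((current_id, None))
```

Encoding the routing as data rather than nested conditionals is what lets such programs let designers specify, inspect, and revise skip patterns without rewriting the questionnaire logic.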

Observational Forms pg 290

In designing observational forms, the types of observations to be made and how they are to be
measured must be made explicit. The design should flow logically from the listing of
information needs, which must clearly specify the aspects of behavior that are to be observed:
• who is to be observed?
• what is to be observed?
• when is observation to be made?
• where should observations be made?

International Questionnaire Design pg 290

Complex issues are involved in constructing a questionnaire even within a common language
and culture; the difficulty increases substantially when the questionnaire is to be used across
national boundaries, languages, and cultures.
• translate language or regional variation
• back-translate and test translation
• give special attention to answer categories (e.g., compensate for dislike of saying “no” or
preference for lower categories on a scale)
• if necessary, ask questions different ways
• if necessary, change format to match a different interviewing mode
• more pre- and post-processing time (typically several months)
• higher costs due to translation, coding, training of interviewers, securing facilities, and
back-translation (typically 2 to 5 times the domestic cost)

Self-Reports in Marketing Research: How the Questions Shape the Answers (Special Expert
Feature on pg 291)

This feature by Norbert Schwarz and Andy Peytchev covers problems encountered in question
design, including these issues:
• question comprehension
Questions that seem clear enough in isolation may still be vague in the context of a questionnaire.
Any answer is more likely to be chosen when offered among response alternatives than
volunteered in an open-ended question on the topic, because the response alternatives
clarify what the question is asking. Meaningful answers require inferences about what the
questioner wants to know.
• frequency scales
Inquiries about frequency of specific target behaviors can be problematic, because labels
such as “sometimes” or “frequently” can mean very different frequencies to different
respondents. Absolute or objective scales, however, can introduce problems of inference
(e.g., does every expenditure count as “shopping” or only major purchases of groceries?).
The range of frequency options offered affects the inference made and the consequent
answers. Self-reports from questions with different reference frequencies therefore cannot
be directly compared.
• numerical rating scales
A scale numbered from 0 to 10 is not equivalent to a scale numbered -5 to +5. The
negative numbers of a bipolar (e.g., -5 to +5) scale bring about a different inference from
the respondents than the first half of the numbers on a unipolar (e.g., 0 to 10) scale and
therefore must be treated by researchers as having different meanings. Standard pre-tests
will not catch such interpretive problems, because respondents will not have any
difficulties using the scales, nor will there be discrepancies over time affecting reliability
measures. Researchers need to use “cognitive” pretests, where respondents elaborate on
what they are thinking as they digest the question, to catch unintended contextual
influences on question interpretation.
• behavioral reports
“Autobiographical memory” is rather poor. For example, one study found that 3 percent
did not recall overnight hospitalization within ten weeks, and 42 percent forgot within
one year. These problems are compounded for frequent behaviors and experiences, which
are not always represented as single episodes but can blur into generic memories of daily
or weekly activities.
• estimation strategies
When respondents cannot recall specific instances, they resort to a variety of estimation
strategies that are highly dependent on context. For example, respondents generally
assume that the typical frequency is represented by values in the middle range of the
scale and that the extremes of the scale correspond to the extremes of the distribution.
Based on these assumptions, they use the scale as a frame of reference to estimate their
own behavioral frequency. These effects also vary in intensity across demographic and
cultural groups and by content of the memory. Researchers can’t presume that the
same scale has the same meaning across cultures, even with faultless translation.
• subsequent judgments
Earlier questions can affect judgments during later questions. For example, merely by
viewing a scale chosen for them by researchers, respondents will make inferences not
only about the performance of a service provider, but also about their own experience of
the service and their feelings about it.

• attitude reports
When asked a question about an attitude, respondents often retrieve some information
about the attitude object and form a judgment on the spot. They also rarely retrieve all
information that may be relevant but truncate the search process as soon as “enough”
information has come to mind to form a judgment. Survey answers depend on what
comes to mind at the time the judgment is formed, which may be the information that
was brought to mind by preceding questions. Such question order effects undermine
generalizations from the sample, which was exposed to the preceding questions, to the
population, which was not.

Researchers need to always keep in mind that respondents’ thought processes are critically
shaped by the research instruments they design. They have only recently started to realize that
much of what they assess consists of evaluative judgments, constructed “on the spot,” based on what
happens to spring to mind when respondents are asked. These judgments are highly context
dependent and easily swayed by the wording and ordering of questions and response alternatives
as well as the layout and properties of scales. The very purpose of statistics is to generalize from
samples to populations, yet these context effects can undermine such generalizations and cannot
be corrected by later statistical analyses. There is no substitute for thinking deeply about the
process of answering questions and working closely with respondents in pilot studies to weed out
problems before large-scale data collection takes place.

KEY TERMS pg 295

aided recall
A test of ad effectiveness that relies on “prompting” or cueing (i.e., providing some form of
relevant information) when requesting that respondents recall any ad messages they remember
being exposed to (seeing or hearing) during a stated time period.

binary
A variable having only two possible states, for example, 1 or 0, yes or no, male or female,
domestic or foreign, etc.

collectively exhaustive
When the response choices for a question include all possible options—for example, a listing of
all the days of the week.

counterbiasing statement
A statement made by an interviewer suggesting that the behavior in question is normal or natural,
to offset a respondent’s reluctance to answer honestly about a sensitive topic.

dichotomous question
A question in which respondents are asked to choose one of two possible responses.

double-barreled question
A question that requires the respondent to supply two separate bits of information and that
therefore has the potential to create conflict or confusion.

frame of reference
The viewpoint of a respondent invoked by the orientation of a question.

free-response question
A question design that allows respondents to provide answers freely, in their own words, instead
of choosing among various options defined by the interviewer. Same as open-ended question.

modal answer
The most common of a set of responses.

multiple-response (multiple-choice) question
A question type requiring the respondent to choose from a fixed list of pre-established answers.

mutually exclusive
When a variable is such that a respondent cannot assign two values simultaneously—for
example, age categories (because one cannot be two ages at once).

open-ended question
A question design that allows respondents to provide answers freely, in their own words, instead
of choosing among various options defined by the interviewer. Same as free-response question.

position bias
When a respondent’s answer to a question is affected by the order of the question in the survey
or the order of a set of presented answers.

pre-testing
A smaller experiment run before the main one, so as to quickly and inexpensively
perform critical measurements that may affect how the main test is conducted.

primacy bias
A survey bias that results when respondents have better memory of or preference for the
response options listed earliest.

recency bias
A survey bias that results from a respondent having better memory of, or greater preference for,
the response options listed most recently.

recognition
A method of stimulating a respondent’s recall of an event (often, seeing an advertisement) that
involves direct reference to that event, such as showing the respondent the actual ad, or some
part thereof, and asking if the respondent has seen it before.

sequence bias
When responses to a questionnaire are affected by the order in which questions or choices
appear; often addressed by counterbalancing.

unaided recall
A test of ad effectiveness that does not allow for any form of “prompting” or cueing (i.e., no
relevant information is provided that might help jog the memory) when requesting that
respondents recall any ad messages they remember being exposed to (seeing or hearing) during a
stated time period.

DISCUSSION QUESTIONS

1 What decisions precede the questionnaire design stage? What criteria govern the inclusion of
questions in the questionnaire?

Prior decisions include:


1. Research design
2. Data sources
3. Selection of target population
4. Sampling plan
5. Communication method
6. Mode of data analysis (visualization of research findings)

The criterion governing inclusion on the questionnaire is that each question should relate to a
specific information need. Otherwise, the questionnaire will be lengthier than necessary, making
it less efficient in terms of time and costs.

2 How does the respondent affect the content of the questions? Can respondents influence the
forms of questions as well, or the range and wording of choices? In which ways?

The definition of the target population affects the design of the questions from the start, so that
questions and answer categories cover the range of views within that target population. The
design of structured questions is informed by participant input gathered in exploratory research
and testing, in order to ensure that the explicit categories cover the vast majority of participant choices.
range or wording of choices is found to bias responses, they must be re-scaled or reworded to
eliminate or reduce that bias.

3 How can a researcher overcome the problems associated with collecting data about events
that are unimportant to respondents or that occur infrequently?

The researcher may either choose to select respondents most likely to remember (e.g., recent
purchasers) or may stimulate memory with aided recall techniques.

Instructor’s Further Probe: What problems may be associated with asking recent purchasers a
survey question?

Suggested Probe Response: A potential problem with seeking recent purchasers of a product or
service is that these people have not seen the product or service through its entire life. In
addition, persuasive techniques are often used by merchants to reassure a buyer, especially of
large purchases such as cars. Such biasing may not yield a full spectrum of opinions.

4 What approaches are available for dealing with the bias resulting from a respondent’s
unwillingness to respond accurately? Are there some biases that cannot be realistically
overcome?

To compensate for a respondent’s unwillingness to respond accurately, researchers may use:


1. counterbiasing statements
2. indirect statements
3. labeled response categories
4. randomized response technique
If respondents are unwilling to respond accurately, not all biases can realistically be overcome.

Instructor’s Further Probe: Develop a counterbiasing statement for a question that seeks to find
out how many times a respondent has had severe (lasting more than 4 days) hemorrhoids.

Suggested Probe Response: A typical counterbiasing statement would be something to the effect
of: “Many people suffer from severe cases of hemorrhoids, a well-known ailment, due to various
changes in diets. How many times have you experienced a severe (lasting more than 4 days) case
of hemorrhoids in the past year?” The question acknowledges the respondent is not alone and
that it is not uncommon for people to get a severe case of hemorrhoids. In this manner the
respondent will feel more comfortable answering truthfully.

5 What are the advantages and disadvantages of open-ended questions relative to multiple-
response questions? How might they be used synergistically so that each helps overcome
weaknesses in the other?

Open-ended questions allow general attitudes to be expressed, aiding interpretation of more
structured questions; establish rapport and gain the respondent’s cooperation; are less likely to
bias respondents’ answers, and can provide the researcher with insights, side comments, and
explanations. Their disadvantages are the potential for interviewer bias, the time and cost of
coding the responses, the implicit extra weight given to respondents who are more articulate, and
the higher effort and time commitment required of respondents.

More open-ended questions might be used in the exploratory research or questionnaire design
phase, as part of designing answer categories for more structured questions. A few open-ended
questions might also be included in the final questionnaire as a “catch-all” for responses
that did not fit the structured questions.

6 What does a researcher need to consider when designing multiple-response questions? What
general guidelines should be utilized in determining the wording of a question? the range
and content of responses? the order of questions and of responses?

When designing multiple-response questions, researchers must consider the number of
alternatives to include, which alternatives to include, and position bias. General guidelines call
for responses to be:

• collectively exhaustive
• mutually exclusive
• varied in sequence to reduce position bias

In determining the wording of a question, every effort must be made to eliminate bias implicit in
the question.

Questionnaire designers should try to use simple language and unambiguous words and to avoid
leading questions, biasing questions, implicit alternatives or assumptions, estimates, and
double-barreled questions; they should also consider the frame of reference of the question.

In choosing the range and content of responses, designers should aim to have the explicit
alternatives listed, as well as the range of the scale(s) used in numerical scales, cover the
selections of approximately 95% of respondents. To compensate for primacy and recency bias,
the order of the questions and responses can be rotated, while maintaining a logical order to the
questions and keeping general questions before more specific questions. A split design, where
half of respondents see one scale, and the other half a different one, can indicate whether the
scale itself is biasing respondents.

Instructor’s Further Probe: What do the general guidelines for designing survey questions seek
to accomplish?

Suggested Probe Response: All the guidelines seek to reduce bias from the way the questions are
presented.

7 Under what conditions would dichotomous questions be inappropriate? What sorts of
questions would be indicated instead?

A dichotomous question is inappropriate when a large percentage of the respondents sincerely
hold a neutral view on the question or hold an opinion that does not fit the dichotomous framing.
If there are more than two grades of response present in the respondent group or indecision
predominates, dichotomous questions may yield results which contain substantial measurement
error. In such situations, a question with a 5-point scale or a wider range of answer categories
would be more appropriate.

8 MINICASE: Suppose you are researching attitudes about “smartphones” (those that have
web-browsing and other advanced features like GPS, music and video playback, and an
interactive screen) among your classmates. Specify the information needs for this project,
and then develop a concise questionnaire to measure the target group’s perceived needs,
attitudes, and purchase intentions. Be sure to consider that knowledge, purchase readiness,
and opinions themselves are all likely to be heterogeneous, even in this self-selected group
(in fact, what is considered a smartphone may be heterogeneous as well). How might you use
the resulting data set to determine which peripherals are most likely to be needed, valued,
and purchased?

Student responses should analyze the research objective (to determine which smartphone
peripherals are most likely to be needed, valued and purchased) and from that determine specific
information needs. Questions should be designed to meet those information needs; each question
should correspond to one of the information needs. Attention should be given to avoiding bias in
the structure of the questions and to providing answer categories and scales appropriate to the
question and the information needed.

REVIEW QUESTIONS

1 Which of the following survey questions is most likely to achieve meaningful research
results?
a Do you support using $15 per taxpayer to fund the campaigns of political candidates?
b How long ago did you purchase your refrigerator?
c Have you purchased canned vegetables from the supermarket in the last month?
d When buying ice cream, how do you weigh the caloric content and healthfulness relative
to the experiential quality of the flavor?
Ans: c
Rationale: Option a does not provide trade-off of valued options—for example, that tax revenues
allocated to political campaigns reduce funds usable elsewhere. Option b presumes that the
respondent not only knows the answer, but was in fact the one who did the purchasing. Option d
uses ambiguous words and overly complex language and presumes that the respondent even
makes this sort of trade-off. Option c has an objective, yes/no answer, based on fully specified
behavior, and provides a time referent.

2 Which of the following is an improvement on this question: “What type of cat food do you
prepare for your cat?” (assuming the respondent has a cat)
a What brand of cat food do you prepare for your cat?
b What do you feed your cat?
c What does your cat like to eat?
d none of the above
Ans: b
Rationale: The original question uses the word “prepare,” which may be interpreted differently
by different people (e.g., “cook,” “purchase,” “serve,” “get ready”); option a shares this wording
problem. Option c assumes that the respondent knows what the cat likes. Option b refers to the
behavior of the respondent and so is more appropriate.

3 “Consumer Reports has ranked this dishwasher as the best in its class. How would you rate it
relative to other brands?” The flaw in this question is:
a it is a biasing question.
b it assumes too much knowledge on the part of the respondent.
c it is ambiguous.
d all of the above
Ans: d
Rationale: Referring to the authority of Consumer Reports biases the question. The question
assumes a broad experience and knowledge of dishwashers and available brands. How, or
according to what criteria, to rate the dishwasher has not been specified (e.g., performance,
features, price, quality, durability).

4 Which of the following corrects one of the main problems in the question “Do you favor
warning labels on cosmetic products?”
a Have you ever noticed and been disturbed by warning labels on cosmetic products?
b Do you favor warning labels on cosmetics if it means that animals do not need to be
tested?

c Do you agree with People for the Ethical Treatment of Animals (PETA) that cosmetic
products should have warning labels?
d Are warning labels on cosmetic products a good idea?
Ans: b
Rationale: A problem with both the original and option d is an implicit assumption; they also
share a lack of trade-offs or rationales. (What are we being warned about?) Option b makes this
explicit, conveying what the warning labels will accomplish. Option a is a double-barreled
question (including incendiary, leading language), and option c is a biasing question, as PETA is
an authoritative, highly regarded organization.

5 Which of the following is a well-posed survey question?


a How much do you spend for groceries per year?
b Was the service you experienced at the store friendly and efficient?
c How would you rate the service at this store on friendliness and efficiency (on a scale of
1 to 10)?
d How much did you spend for groceries on your last visit to the grocery store?
Ans: d
Rationale: Option a requires an estimate on the part of the respondent, adding up a large number
of separate purchase occasions over a long period. Options b and c are double-barreled questions.
Option d refers to a specific instance, and has a single, correct answer.

6 Which of the following two questions is better, and why?


“Are the operating hours of your local library adequate to your community’s needs, in your
opinion?”
“Are you, personally, content with your local library’s hours of operation?”
a The first one, because it is more specific to information needed for research.
b The second one, because it gives information on the respondent’s attitude.
c Both are fine, for different purposes.
d Neither is good, because they assume too much knowledge on the part of the respondent.
Ans: c
Rationale: The difference between the two questions is the frame of reference. The first would
likely be more appropriate to gauge perceptions of social or community policy, whereas the
second would gauge an individual's personal views.

7 Put the following questions in the best order for a survey:


w Please fill in: male/female ethnic group __________ yearly income __________
x Would you consider a capital investment in your farm, if it resulted in greater profits?
y Would you consider converting your egg farm to a factory for chicken pies, if it resulted
in greater profits?
z Are you frustrated by lower profits than you are capable of achieving?
a x, y, w, z
b z, x, y, w
c z, w, y, x
d w, z, x, y

Ans: b
Rationale: Question z builds rapport, is designed to interest the respondent, and is almost
rhetorical in nature. Question x is more general than y, so it should come before it. Question w is
standard demographic information, and so should be placed last.

8 In which of the following ways would initial questions be most likely to bias the sample (i.e.,
those who agree to provide information)?
a if questions are too uninteresting to the respondent
b if questions are partisan or highly specific
c if questions are too simplistic or obvious
d if responses are not mutually exclusive
Ans: a
Rationale: Option b could bias the results of later questions, but not necessarily the sample.
Option c would not bias the sample, unless the level of simplicity was so extreme as to be
offensive. Option d would typically have no effect on sample selection at all. Option a is the best
choice; if the initial questions are too uninteresting, respondents may simply refuse to answer,
believing that the survey is boring or fails to pertain to them. This is particularly so for phone
and mail surveys.

9 Which of the following is commonly the largest source of preventable (by the researcher)
error in carrying out marketing research?
a poor sample design
b measurement error
c non-response error
d sample bias
Ans: b
Rationale: While poor sample design, sample bias, and non-response can each lead to error, far
greater errors can come about when a question does not measure the attitude it was designed to
assess. Note as well that non-response errors and some elements of both sample bias and poor
design are all difficult to prevent, since they are all contingent on the sample itself, while
measurement error deals only with the question(s) being asked.

10 What problem may be encountered with this question: “Should clients be given a buy-in to
incentivize their participation and prevent a run?”
a “Buy-in” is not defined.
b Alternatives are not explicit.
c When you take out the jargon, the question is nearly meaningless.
d all of the above
Ans: d
Rationale: “Buy-in” is not common terminology and, even among those who understand it, it can
refer to many different schemes. Similarly, no specific alternatives are presented to the
respondent, so the question is hopelessly general. Finally, the question is filled with jargon and
can be phrased much more clearly for the general population.

TOPICS FOR FURTHER DISCUSSION

A. A group of land developers, who operate in a suburban area of about 2,000 square miles
near a large urban center, created a plan to build a new diesel train route to relieve traffic on the
one freeway leading to the city. The traffic problem had been steadily worsening for decades, to
the point that average commute times were well over two hours per day. The plan served
developer interests in the creation of new housing and business centers near proposed train
stations; this helped counter the growing “no-growth” trend sparked by rising concern over
“suburban sprawl.” The developers were able to recruit environmentalists to their cause by
pointing to the number of cars that would no longer be on the roads. A group of concerned public
transit users has protested the train plan, claiming that it only serves commuters, will harm
existing bus services, and will therefore result in more cars on the road over the course of the day
as non-commute public transit users are forced to use cars.

Tell the students to suppose they are on a research team hired by a group of local governments to
evaluate the claims made by the developers and their environmentalist allies regarding the
expected effects of the proposed train. The main claims the clients want investigated are that the
train route will not harm existing public transit services and that it will result in a net decrease of
single-driver car use on the freeway. The two target population groups are existing public transit
users and regular commuters.
• Have the students translate the two claims under investigation into a specific set of
information needs. Display these information needs for the class as the students call them
out.
• For each information need, ask the students to address the issues of response format,
question wording, and possible response categories to meet that need.
• The remainder of the question-crafting can be organized as successive drafts of the set of
questions, similar to the evolution of a single question shown in Marketing Research
Focus 6.4 Metamorphosis of a Question (pg 284).
• For the final phase, have the class consider issues of question sequencing and rotation
and decide on the overall logical order of questions and, if necessary, what questions will
be used in a rotating order.

B. Have the students give an example of a seemingly clear and simple question which, when
proposed for a general questionnaire that seeks to survey a variety of demographics, becomes too
ambiguous. Allow the class to brainstorm for a while before choosing one for the rest of the
process.
• Display the chosen question for the class and allow the class to propose edits to the
“simple question” that will make the meaning clearer and less ambiguous to a
greater proportion of the general population.
• Allow students to point out further ambiguities present in the edited version of the
question.
• Have students suggest a set of response categories that could be suitable if the question
were to be used in a multiple-response format. If necessary, edit the question again to do
this.
• Ask students to elaborate on the reasons for their choices and discuss.

FURTHER READING

Bradburn, Norman M., Seymour Sudman, and Brian Wansink. Asking Questions: A Practical
Guide to Questionnaire Design. San Francisco, CA: Jossey-Bass, 2004.

Couper, Mick P., Michael W. Traugott, and Mark J. Lamias. “Web Survey Design and
Administration,” Public Opinion Quarterly 65 (2001): 230–53.

Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches,
3rd ed. Thousand Oaks, CA: Sage Publications, 2008.

Dillman, Don A., Jolene D. Smyth, and Leah Melani Christian. Internet, Mail, and Mixed-Mode
Surveys: The Tailored Design Method, 3rd ed. Hoboken, NJ: John Wiley & Sons, 2009.

Fowler, Floyd J. Survey Research Methods, 4th ed. Thousand Oaks, CA: Sage Publications,
2008.

Oppenheim, A. N. Questionnaire Design, Interviewing, and Attitude Measurement. London,
United Kingdom: Pinter Publishers, 2000.

Reynolds, Nina, Adamantios Diamantopoulos, and Bodo Schlegelmilch. “Pre-testing in
Questionnaire Design: A Review of the Literature and Suggestions for Further Research,”
Journal of the Market Research Society 35 (1993): 171–82.
