Business Research Methods TUT


Business Research Methods

Amare Abawa, PhD
E‐mail: amare.abawa@aau.edu.et
CHAPTER ONE
INTRODUCTION TO BUSINESS RESEARCH
Meaning of Research

• Research (Re‐search) is the creation of new knowledge and/or the use of existing knowledge in a new and creative way to generate new concepts, methodologies and understandings.
• The organized, systematic, data‐based scientific inquiry or investigation into a specific problem, undertaken with the purpose of finding answers or solutions to it (Srivastava and Rego, 2011).
Introduction………………

Definition of Business Research
Business research is defined as the systematic and objective process of generating information to aid in making business decisions (Zikmund, 2011).
• Walliman (2011) argues that everyday uses of the
term ‘research’ are not research in the true
meaning of the word.
The term is used wrongly:
• Just collecting facts or information with no clear
purpose;
• Reassembling and reordering facts or information
without interpretation;
• As an activity with no or little relevance to everyday
life;
• As a term to get your product or idea noticed and
respected.
Characteristics of Research
• The purpose, to find out things, is stated clearly.
• The data are collected systematically.
• The data are interpreted systematically.
• Therefore, we can define research as a
process that is undertaken in a systematic
way with a clear purpose, to find things out.

• ‘Systematic way’ suggests that research is based on logical relationships and not just beliefs (Ghauri and Grønhaug, 2010).

• Research information is neither intuitive nor haphazardly gathered.
• Business research must be objective
• Detached and impersonal rather than biased
• It facilitates the managerial decision process for all
aspects of a business.
• The analysis and interpretation of empirical evidence
(facts from observation or experimentation) to confirm
or disprove prior conceptions.

Factors in Conducting research
• Why? to equip yourself with the information you
need to make informed business decisions about
• Start-up,
• Innovation,
• Growth etc.
• Time constraints
• Availability of data
• Nature of the decision
• Benefits versus costs
Determining Factors

Before committing to research, answer four questions in sequence:
1. Time constraints – Is sufficient time available before a managerial decision must be made?
2. Availability of data – Is the information already on hand inadequate for making the decision?
3. Nature of the decision – Is the decision of considerable strategic or tactical importance?
4. Benefits vs. costs – Does the value of the research information exceed the cost of conducting research?

If the answer to all four questions is "Yes," conduct business research; if any answer is "No," do not conduct business research.


Potential Value of a Business Research Effort Should Exceed Its Estimated Costs

Value:
• Decreased uncertainty
• Increased likelihood of a correct decision
• Improved business performance and resulting higher profits

Costs:
• Research expenditures
• Delay of business decision and possible disclosure of information to rivals
• Possible erroneous research results
Major Topics for Research in Business
• Research topic identification is the job of the researcher
• If you are not clear about what you are going to research,
it will be difficult to plan how you are going to research it.

Possible Areas
• General Business Conditions and Corporate Research
• Financial and Accounting Research
• Management and Organizational Behavior Research
• Sales and Marketing Research
• Information Systems Research
• Corporate Responsibility Research
• Etc. 
CHAPTER TWO
Theory and the Business
Research Process
Theories
• A theory is a proposed relationship between two or more
concepts.
• Theories are formulated to explain, predict, and
understand phenomena and, in many cases, to challenge
and extend existing knowledge within the limits of critical
bounding assumptions.

• A formal, logical explanation of some events that includes


predictions of how things relate to one another
• The theoretical framework is the structure that can hold
or support a theory of a research study.
• The theoretical framework introduces and describes the
theory that explains why the research problem under
study exists.,
• While it may seem that theory is only relevant to
academic or basic business research, theory plays a
role in understanding practical research as well.
• Before setting research objectives, the researcher
must be able to describe the business situation in
some coherent way.
• Without this type of explanation, the researcher
would have little idea of where to start.
• Ultimately, the logical explanation helps the
researcher know what variables need to be included
in the study and how they may relate to one another.
The role of theory in business research
• As said earlier, theory has a number of roles for
basic/applied research types. Some of these roles are;
1. Points to areas that are most likely to be fruitful, that is, areas in which meaningful relationships among variables are likely to be found.
• If the variables are selected such that no relationships between them obtain, the research will be fruitless no matter how meticulous the subsequent observations and inferences.
2. Increases the meaningfulness of the findings of a particular study by helping us to perceive them as special cases of the operation of a set of more general or abstract statements of relationships, rather than as isolated bits of empirical information.
3. Linking the specific empirical findings to a more general concept has another major advantage.
• It affords a more secure ground for prediction than do these empirical findings by themselves.
• The theory, by providing a rationale behind the empirical findings, introduces a ground for prediction that is more secure than mere extrapolation from previously observed trends.
4. Helps to reformulate findings based on theory
• Whereas an empirical finding does not afford a basis for drawing diverse inferences about what will follow, its reformulation or revamping in theoretic terms affords a secure basis for arriving at such inferences.
• Theory mediates between specific empirical
generalization or uniformities and broad
theoretical orientations anchored in the
intellectual tradition
5. Theory attests to the truth of empirical
findings
• A hypothesis is as much confirmed by fitting it
into a theory as by fitting it into facts, because
it then enjoys the support provided by
evidence for all the other hypotheses of the
given theory
6. Theory helps us to identify gaps in our knowledge and seek to bridge them with intuitive, impressionistic or extensional generalizations.
• “It is only when using methodologically classified
sciences that we know what we know and what
we do not know.”
• This way, theory constitutes a crucially important
guide to designing of fruitful research.
• A theory is a set of propositions that provide an
explanation by means of a deductive or inductive
system.
• Assume a person wants to know if organizational
structure influences leadership style.
• The individual wants to gain a better understanding of the environment and be able to predict behavior (to be able to say that if we take a particular course of action, we can expect a specific outcome to occur).
• Thus, the three major functions of theory are
description, explanation and prediction.
Criteria to Evaluate a Theory
• To be endorsed as a “good” theory it must go through
seven tests. Multiple experts will evaluate the theory.

• The seven criteria for theory evaluation are;


1. Scope
2. Logical consistency
3. Parsimony (few concepts)
4. Utility
5. Testability
6. Heurism - the amount of research and new thinking stimulated by the theory
7. Test of time
• Scope‐ refers to the breadth of communication
behaviors covered in the theory.
• what are the boundaries of the theory's
explanations?

• Logical Consistency‐ refers to the internal logic in the theoretical statements
• Do the claims of the theory match its assumptions?
• Do the principles of the theory contradict each
other?
• Parsimony‐ refers to the simplicity of the
explanation provided by the theory.
• Is the theory as simple as it can be to explain the
phenomenon under consideration?
• Utility‐ refers to the theory's usefulness or practical 
value
• Testability ‐ refers to our ability to test the accuracy of 
a theory's claim
• Can the theory be shown to be false?

• Heurism - refers to the amount of research and new thinking stimulated by the theory
• Has the theory been used in research extensively
to stimulate new ways of thinking about
communication?
• Test of time‐ refers to the theory's durability over 
time
• How long has the theory been used in communication 
research?
Elements of a theory
• Theories are sets of interrelated concepts and
ideas that have been scientifically tested and
combined to clarify and expand our
understanding of people, their behaviors, and
their societies.
• Theory is constructed with basic elements
(building blocks):
• Concepts/constructs
• Variables/ measured
• Statements/Propositions
Concepts
Theory development is essentially a process of
describing phenomena at increasingly higher levels of
abstraction.
In other words, as business researchers, we need to be
able to think of things in a very abstract manner, but
eventually link these abstract concepts to observable
reality.
To understand theory and the business research
process, it will be useful to know different terminology
and how these terms relate
Research Concepts and Constructs 
• A concept or construct is a generalized idea about a class of
objects, attributes, occurrences, or processes that has been given
a name.
• If you, as an organizational theorist, were to describe
phenomena such as supervisory behavior or risk aversion, you
would categorize empirical events or real things into concepts.
• Concepts are the building blocks of theory.
• In organizational theory, leadership, productivity, and morale are
concepts.
• In the theory of finance, gross national product, risk aversion, and
inflation are frequently used concepts.
• Accounting concepts include assets, liabilities, and depreciation.
• In marketing, customer satisfaction, market share, and loyalty are
important concepts.
Ladder of Abstraction
1. Abstract level ‐ In theory development, the level
of knowledge expressing a concept that exists
only as an idea or a quality apart from an object
2. Empirical level ‐ Level of knowledge that is
verifiable by experience or observation
3. Latent construct ‐ a concept that is not directly observable or measurable, but can be estimated through proxy measures
Research Propositions and Hypotheses 
• As we mentioned, concepts are the basic units of theory
development.
• However, theories require an understanding of the
relationship among concepts.
• Thus, once the concepts of interest have been identified, a
researcher is interested in the relationship among these
concepts. E.g:
• Ha: Behavioral Finance has a significant negative effect on the
performance of MSMEs
• propositions ‐ Statements explaining the logical linkage
among certain concepts by asserting a universal connection
between concepts. Arguments
• hypothesis ‐ Formal statement of an unproven proposition
that is empirically testable – educated guess
• A theory may be developed at conceptual and abstract 
level with deductive reasoning by going from a general 
statement to a specific assertion 
• Deductive Reasoning is the logical process of deriving a
conclusion about a specific instance based on a known
general premise or something known to be true. E.g: we
know that all business managers are human beings; Mrs.
Atsede is a manager of XYZ Co. Mrs. Atsede is a human
being.
• At the empirical level, a theory may be developed with
inductive reasoning.
• Inductive reasoning is the logical process of establishing a
general proposition on the basis of observation of
particular facts. E.g. All business Managers that have ever
been seen are human beings; therefore, all business
Managers are human beings.
• Over the course of time, theory construction is often
the result of a combination of deductive and inductive
reasoning.
• Our experiences lead us to draw conclusions that we
then try to verify empirically by using the scientific
method.
The Scientific Method 

1. Assess relevant existing knowledge
2. Formulate concepts & propositions
3. Statement of hypotheses
4. Design research
5. Acquire empirical data
6. Analyze & evaluate data
7. Provide explanation - state new problem
‘Six’ Phases of Research
1. Problem definition – the foundation of the research
2. Literature review
3. Selection of research design
4. Data gathering
5. Data processing and analysis
6. Implications, Conclusions, and Recommendations
1. Problem Definition
• Describe broader context (background)

• State the objectives or purposes

• Inform reader about the scope of the study,


including defining any terms, limitations, or
restrictions
• Reduces potential criticisms

• State the hypothesis(es)


2. Literature Review
• Gives theoretical rationale of problem being studied,
what research has been done and how it relates to the
problem

• Helpful to divide the literature into sub‐topics for ease of reading
• Quality of literature should be assessed
• Be sure to include well respected ‘individuals’ in the
research area (if they exist)
• Constitutes theoretical review, empirical review and
conceptual framework
3. Selection of Research Design
• The research design indicates the steps that will need to be taken and the sequence in which they will occur
• Each design can rely on one or more data collection techniques

• Assess reliability and validity

• A critical consideration in determining methodology is the selection of subjects
• The design could be exploratory, descriptive or
explanatory
Data Gathering
• Must pretest

• Design the sampling scheme

• Questionnaires must be coded


Data Processing and Analysis
• Describe demographics of the data

• Choose appropriate statistical technique (based on the design)

• Look for patterns in data (to check irregularities)


Interpreting the Results
• Make sure to consider the audience for use of
words and important variables

• Discuss implications for the population of interest


and future research
Operational Definitions
• Variables are first defined by conceptual definitions that explain the concept the variable is trying to capture
• Variables are then defined by operational definitions, which specify how the variable will be measured
Language of Sampling

• Population: entire collection of people/things

• Parameter: # that results from measuring all units in population

• Sampling frame: specific data from which sample is drawn

• Unit of analysis: type of object of interest

• Sample: a subset of some of the units in the population

• Statistic: # that results from measuring all units in the sample
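The parameter/statistic distinction above can be illustrated with a short sketch (the population values and sizes are hypothetical, invented for illustration):

```python
import random
import statistics

# Hypothetical population: monthly incomes of 1,000 employees
random.seed(42)
population = [random.gauss(5000, 800) for _ in range(1000)]

# Parameter: a number computed from ALL units in the population
population_mean = statistics.mean(population)

# Sample: a subset of the units in the population
sample = random.sample(population, 100)

# Statistic: the same measure computed from the sample only
sample_mean = statistics.mean(sample)

print(f"parameter (population mean): {population_mean:.2f}")
print(f"statistic (sample mean):     {sample_mean:.2f}")
```

The statistic estimates the parameter; the gap between the two is sampling error.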


Independent and Dependent Variables

• Independent variable: what is manipulated; a treatment, program, or cause (‘Factor’)
• Dependent variable: what is affected by the independent variable; the effects or outcomes (‘Measure’)
Research Design and Methodology

• In general, a research design is like a blueprint for the research.
• Research design refers to the overall strategy utilized to
carry out research
• Research Methodology concerns how the design is
implemented, how the research is carried out.
A few designs
• Cross‐Sectional Design
• Longitudinal Design
• Time Series Design
• Panel Design

• Exploratory
• Descriptive
• Explanatory
Cross‐Sectional Design

• A cross‐sectional design is used for research that collects data on relevant variables one time only from a variety of people, subjects, or phenomena.
• Cross‐sectional designs generally use survey techniques to gather data; for example, the Ethiopian Census is supposed to be conducted every 10 years.
• Advantages: data on many variables, data from a large
number of subjects, data from dispersed subjects, data
on attitudes and behaviors, good for exploratory
research, generates hypotheses for future research, data
useful to many different researchers

• Disadvantages: increased chances of error, increased cost with more subjects and each location, cannot measure change, cannot establish cause and effect, no control of independent variable, difficult to rule out rival hypotheses, static (prone to common method bias)
Longitudinal Designs
• A longitudinal design collects data over long periods of
time.
• Measurements are taken on each variable over two or
more distinct time periods.
• This allows the researcher to measure change in
variables over time.
Time Series Design
• A Time Series Design collects data on the same
variable at regular intervals in the form of aggregate
measures of a population.

• Time series designs are useful for:


• Establishing a baseline measure
• Describing changes over time
• Keeping track of trends
• Forecasting future (short term) trends
• Advantages: easy to collect data, easy to present in graphs,
easy to interpret, can forecast short term trends

• Disadvantages: data collection method may change over time, difficult to show more than one variable at a time, needs qualitative research to explain fluctuations, assumes present trends will continue unchanged
Panel Designs
• Panel Designs collect repeated measurements from the same people or
subjects over time.
• Panel data is a subset of longitudinal data where observations are
for the same subjects each time

• Panel studies reveal changes at the individual level.
• Advantages: reveals individual‐level changes, establishes time order of variables, can show how relationships emerge
• Disadvantages: difficult to obtain initial sample of subjects, difficult to keep the same subjects over time, repeated measures may influence subjects' behavior
Business Research Types

Basic research

Applied research
Basic Research
• Attempts to expand the limits of knowledge.
• Not directly involved in the solution to a pragmatic 
problem.
• Needs strong theoretical gap identification
Basic Research Example
• Is executive success correlated with high need for
achievement?
• Are members of highly cohesive work groups more
satisfied than members of less cohesive work groups?
• Do consumers experience cognitive dissonance in low‐
involvement situations?
Applied Research
• Conducted when a decision must be made about a 
specific real‐life problem
Applied Research Examples
• Should a company advertise intensively while it is the only company in the industry?
Ethical Issues
• Societal norms
• Codes of behavior
Rights and Obligations of the Respondent

• The obligation to be truthful
• The right to maintain Privacy
• The obligation not to deceive
• The right to be informed
Rights and Obligations of the Researcher

• The purpose of research is research
• Objectivity
• Not misrepresenting research
• Protect the right to confidentiality of both subjects 
and clients
• No dissemination of faulty conclusions
Major Parts of a Proposal 
Revision
CHAPTER THREE

Defining the Research Problem and Reviewing


the Literature
Research Problem

A research problem is an educational issue or concern that an investigator presents and justifies in a research study.
Why is the Research Problem 
Important?
• Establishes importance of topic
• Creates reader interest
• Focuses reader’s attention on how study will add 
to literature
Where is the Research Problem 
Located?
• Look in the opening paragraphs, and ask
yourself:
– What was the issue or problem that the researcher
wanted to address?
– What is the concern being addressed “behind” this
study?
– Why was the study undertaken in the first place?
– Why is this study important?
How Does It Differ from Other 
Parts of Research?
• A research problem is an educational issue or problem
in the study
• A research topic is the broad subject matter being
addressed in a study.
• A purpose is the major intent or objective of the
study.
• Research questions are questions the researcher
would like answered or addressed in the study.
Differences Among Topic, Problem, Purpose and Questions

• General topic: Distance learning
• Research problem: Lack of students in distance classes
• Purpose statement: To study why students do not attend distance education classes at the college
• Specific research question: Does the use of web site technology in the classroom deter students from enrolling in a distance education class?
Can and Should the Problem Be 
Researched?
• Can you study the problem?
– Do you have access to the research site?
– Do you have the time, resources and skills to 
carry out the research?
• Should you study the problem?
– Does it advance knowledge?
– Does it contribute to practice?
How Does the Research Problem Differ for Quantitative and Qualitative Research?

Use quantitative research if your research problem requires you to:
• Measure variables
• Assess the impact of these variables on an outcome
• Test theories or broad explanations
• Apply results to a large number of people

Use qualitative research if your research problem requires you to:
• Learn about the views of the people you plan to study
• Assess a process over time
• Generate theories based on participant perspectives
• Obtain detailed information about a few people or research sites
Five Elements of a “Problem Statement”

Flow of ideas:
1. Topic – the subject area
2. Educational issue – a concern, a problem, something that needs a solution
3. Evidence for the issue – evidence from the literature; evidence from practical experiences
4. Deficiencies in the evidence – in this body of evidence, what is missing? What do we need to know more about?
5. Remedying the deficiencies – what addressing the deficiencies will do for select audiences: researchers, educators, policy makers, and individuals like those in the study
How Do We Write the “Statement of the 
Problem” Section?

• One paragraph for each of the five elements
• Heavily reference this section to the literature
• Provide statistics to support trends
• Use quotes from participants 
Writing a 
Literature Review
• A literature review:
 surveys scholarly articles, books and other sources (e.g. dissertations, conference proceedings);
 provides a short description and critical evaluation of work critical to the topic;
 offers an overview of significant literature published on a topic.
General Guidelines to
Writing a Literature Review
• Introduce the literature review by pointing
out the major research topic that will be
discussed
• Identify the broad problem area but don’t
be too global
• Discuss the general importance of your
topic for those in your field
• Don’t attempt to cover everything written
on your topic
• You will need to pick out the research most
relevant to the topic you are studying
• You will use the studies in your literature
review as “evidence” that your research
question is an important one
• It is important to cover research relevant to all the
variables being studied.
• Research that explains the relationship between
these variables is a top priority.
• You will need to plan how you will structure your
literature review and write from this plan.
Organizing Your Literature Review
• Topical Order—organize by main topics or issues;
emphasize the relationship of the issues to the main
“problem”
• Chronological Order—organize the literature by the
dates the research was published
• Problem‐Cause‐Solution Order—Organize the review so
that it moves from the problem to the solution
• General‐to‐Specific Order—(Also called the funnel
approach) Examine broad‐based research first and then
focus on specific studies that relate to the topic
• Specific‐to‐General Order—Discuss specific research studies first so that general conclusions can be drawn
• After reviewing the literature, summarize what has
been done, what has not been done, and what needs
to be done
• Remember you are arguing your point of why your
study is important!
• Then pose a formal research question or state a
hypothesis—be sure this is clearly linked to your
literature review
• All sources cited in the literature review should be
listed in the references
• To sum, a literature review should include
introduction, summary and critique of journal
articles, justifications for your research project
and the hypothesis for your research project
Common Errors Made in Lit Reviews
• Review isn’t logically organized
• Review isn’t focused on most important facets of the 
study
• Review doesn’t relate literature to the study
• Too few references or outdated references cited
• Review isn’t written in author’s own words
• Review reads like a series of disjointed summaries
• Review doesn’t argue a point
• Recent references are omitted
What is a reference or citation?
• A way of giving credit for someone's thinking, writing or
research
• You mark the material when you use it (a citation) and
give the full identification at the end (a reference)
• In academic writing you are obliged to attribute every
piece of material you use to its author
Why cite or reference?
• Credit sources of information & ideas

• Reader can locate for further information if


required
• Validate arguments
• Increase and spread knowledge
• Show depth, breadth & quality of your reading!
When to cite?
Plagiarism includes
1. Using another writer’s words without proper citation
2. Using another writer’s ideas without proper citation
3. Citing a source but reproducing the exact word
without quotation marks
4. Borrowing the structure of another author’s
phrases/sentences without giving the source
5. Borrowing all or part of another student’s paper
6. Using paper‐writing service or having
a friend write the paper
CHAPTER FOUR AND FIVE
SAMPLING AND DATA COLLECTION METHODS
SAMPLING
• Sampling is a technique of selecting individual members
or a subset of the population to make statistical
inferences from them and estimate characteristics of the
whole population.
• Different sampling methods are widely used by
researchers so that they do not need to research the
entire population to collect actionable insights
Why sample?
• The population of interest is usually too large to
attempt to survey all of its members.
• A carefully chosen sample can be used to represent
the population.
• The sample reflects the characteristics of the population
from which it is drawn.
• It is also a time‐convenient and a cost‐effective
method
• A census study occurs if the entire population is very small or
it is reasonable to include the entire population (for other
reasons).
Sampling Techniques
Probability versus Nonprobability
• Probability Samples: each member of the population has a known non‐zero probability of being selected
• Methods include simple random sampling, systematic sampling, and stratified sampling.
• Nonprobability Samples: members are selected from the population in some nonrandom manner
• Methods include convenience sampling, judgment sampling, quota sampling, and snowball sampling.
Random Sampling
Random sampling is the purest form of probability sampling.
• Each member of the population has an equal and known chance of being selected.
• When there are very large populations, it is often ‘difficult’ to identify every member of the population, so the pool of available subjects becomes biased.
• You can use software, such as Minitab and Excel, to generate random numbers or to draw directly from the columns.
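As a software alternative to Minitab or Excel, a simple random draw can be sketched with Python's standard library (the sampling frame and sizes are hypothetical):

```python
import random

# Hypothetical sampling frame: an enumerated list of 500 population members
frame = [f"member_{i}" for i in range(1, 501)]

random.seed(7)  # seeded only to make the draw reproducible
sample = random.sample(frame, 50)  # each member has an equal chance of selection

print(len(sample))       # 50
print(len(set(sample)))  # 50 - sampling without replacement, no duplicates
```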
Systematic Sampling
• Systematic sampling is often used instead of random sampling.
• It is also called an Nth name selection technique.
• After the required sample size has been calculated, every Nth record is selected from a list of population members.
• As long as the list does not contain any hidden order, this sampling method is as good as the random sampling method.
• Its only advantage over the random sampling technique is simplicity (and possibly cost effectiveness).
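The every‐Nth selection described above can be sketched as follows (the frame and sample size are hypothetical; a random start within the first interval keeps each member's chance of selection equal):

```python
import random

# Hypothetical sampling frame of 1,000 population members
frame = [f"record_{i}" for i in range(1, 1001)]

sample_size = 100
interval = len(frame) // sample_size      # N = 10: take every 10th record

random.seed(3)
start = random.randrange(interval)        # random starting point in the first interval
sample = frame[start::interval]

print(interval, len(sample))  # 10 100
```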
Stratified Sampling
• Stratified sampling is a commonly used probability method that is superior to random sampling because it reduces sampling error.
• A stratum is a subset of the population that shares at least one common characteristic, such as males and females.
• Identify the relevant strata and their actual representation in the population.
• Random sampling is then used to select a sufficient number of subjects from each stratum.
• Stratified sampling is often used when one or more of the strata in the population have a low incidence relative to the other strata.
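A minimal sketch of proportional stratified sampling; the population, the `gender` stratifying characteristic, and the `stratified_sample` helper are all illustrative inventions:

```python
import random

random.seed(11)
# Hypothetical population of 1,000 units with one stratifying characteristic
population = [{"id": i, "gender": random.choice(["male", "female"])}
              for i in range(1, 1001)]

def stratified_sample(pop, key, n):
    """Allocate n proportionally across strata, then draw randomly within each."""
    strata = {}
    for unit in pop:
        strata.setdefault(unit[key], []).append(unit)
    sample = []
    for members in strata.values():
        share = round(n * len(members) / len(pop))   # proportional allocation
        sample.extend(random.sample(members, share))
    return sample

sample = stratified_sample(population, "gender", 100)
print(len(sample))  # about 100 (rounding happens within each stratum)
```

Each stratum contributes in proportion to its share of the population, which is what reduces sampling error relative to a plain random draw.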
Cluster Sampling
• Cluster Sample: a probability sample in which each sampling 
unit is a collection of elements.

• Effective under the following conditions:
• A good sampling frame is not available or costly, while a frame listing 
clusters is easily obtained
• The cost of obtaining observations increases as the distance separating 
the elements increases

• Examples of clusters:
• City blocks – political or geographical
• Housing units – college students
• Hospitals – illnesses
• Automobile – set of four tires
Convenience Sampling
• Convenience sampling is used in exploratory research where
the researcher is interested in getting an inexpensive
approximation.

• The sample is selected because they are convenient.

• It is a nonprobability method.
• Often used during preliminary research efforts to get an estimate
without incurring the cost or time required to select a random sample
Judgment Sampling
• Judgment sampling is a common nonprobability method.

• The sample is selected based upon judgment.


• an extension of convenience sampling

• When using this method, the researcher must be confident


that the chosen sample is truly representative of the entire
population.
Quota Sampling
• Quota sampling is the nonprobability equivalent of stratified
sampling.

• First identify the stratums and their proportions as they are


represented in the population

• Then convenience or judgment sampling is used to select


the required number of subjects from each stratum.
Snowball Sampling
• Snowball sampling is a special nonprobability method used
when the desired sample characteristic is rare.

• It may be extremely difficult or cost prohibitive to locate


respondents in these situations.

• This technique relies on referrals from initial subjects to


generate additional subjects.

• It lowers search costs; however, it introduces bias because


the technique itself reduces the likelihood that the sample
will represent a good cross section from the population.
Sample Size?
• The more heterogeneous a population is, the larger the 
sample needs to be.

• Depends on topic – how frequently does it occur?

• For probability sampling, the larger the sample size, the 
better.

• With nonprobability samples, not generalizable regardless –
still consider stability of results
Sample Size Formula

n = (z · s / E)²

Where: n = sample size, z = value for the chosen confidence level, s = standard deviation, E = acceptable error

If the population size N is known:

n = N / (1 + N · e²)
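Both formulas can be sketched in code. In the example, z = 1.96 corresponds to a 95% confidence level; the standard deviation, error, and population figures are illustrative, and the function names are my own:

```python
import math

def sample_size_mean(z, s, e):
    """n = (z*s / E)^2 : sample size for estimating a mean."""
    return math.ceil((z * s / e) ** 2)

def sample_size_known_population(n_population, e):
    """n = N / (1 + N*e^2) : simplified formula when the population size N is known."""
    return math.ceil(n_population / (1 + n_population * e ** 2))

# 95% confidence (z = 1.96), estimated s = 29, acceptable error E = 2
print(sample_size_mean(1.96, 29, 2))              # 808

# Known population of 2,000 units with a 5% margin of error
print(sample_size_known_population(2000, 0.05))   # 334
```

Results are rounded up, since a sample size must be a whole number at least as large as the formula's value.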
Response Rates
• About 20 – 30% usually return a questionnaire

• Follow up techniques could bring it up to about 50%

• Still, response rates under 60 – 70% challenge the integrity of 
the random sample

• How the survey is distributed can affect the quality of 
sampling
Errors (Sampling and Non sampling)
• Sampling error is one which occurs due to unrepresentativeness of the sample selected for observation.
• It occurs when the sample selected does not contain the true characteristics, qualities or figures of the whole population.
• Non‐sampling error is an error arising from human error, such as error in problem identification or in the method or procedure used.
• It arises due to a number of reasons, i.e. error in problem definition, questionnaire design, approach, coverage, information provided by respondents, data preparation, collection, tabulation, and analysis.
Errors could be Committed by:
Interviewers
• Interviewers have a direct and dramatic effect on the way 
a person responds to a question.

• Most people tend to side with the view apparently favored by 
the interviewer, especially if they are neutral.

• Friendly interviewers are more successful.

• In general, interviewers of the same gender, racial, and ethnic 
groups as those being interviewed are slightly more successful.
Respondents
• Respondents differ greatly in motivation to answer correctly 
and in ability to do so.

• Obtaining an honest response to sensitive questions is 
difficult.

• Basic errors
• Recall bias: simply does not remember
• Prestige bias: exaggerates to ‘look’ better
• Intentional deception: lying
• Incorrect measurement: does not understand the units or definition
DATA COLLECTION METHODS
Data
• Data is a collection of facts, figures, objects, symbols, and
events gathered from different sources.
• Organizations collect data to make better decisions.
• Without data, it would be difficult for organizations to
make appropriate decisions
• If data (related to the product) is not collected beforehand,
the organization's newly launched product may fail
for many reasons, such as low demand and
inability to meet customer needs.
Primary Data Collection Methods
• Primary data is collected from first‐hand
experience and has not been used in the past.
• The data gathered by primary data collection methods
are specific to the research’s motive and highly
accurate.
• Primary data can be divided into two categories:
quantitative and qualitative
Quantitative Methods:
 Statistical methods are highly reliable as the element
of subjectivity is minimum in these methods
Time Series Analysis ‐ The term time series refers to a
sequential order of values of a variable, known as a
trend, at equal time intervals.
• Using patterns, an organization can predict the demand
for its products and services for the projected time
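As an illustration of projecting demand from such a pattern, a simple moving average is one option; the monthly sales figures below are invented for this sketch:

```python
# A simple moving-average projection of next-period demand from past
# values; the monthly sales figures are hypothetical.
monthly_sales = [100, 110, 120, 115, 125, 130]  # units sold, oldest first

window = 3
forecast = sum(monthly_sales[-window:]) / window  # mean of the last 3 months
print(round(forecast, 2))  # 123.33
```

Real time-series forecasting would also account for trend and seasonality; this shows only the basic idea of using past values at equal intervals.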
Qualitative Methods
• Useful in situations when historical data is not available.
• Or there is no need of numbers or mathematical
calculations.
• Qualitative research is closely associated with words,
sounds, feeling, emotions, colors, and other elements
that are non‐quantifiable.
• Quantitative methods do not provide the motive behind
participants’ responses, often don’t reach
underrepresented populations, and span long periods to
collect the data.
• Hence, it is best to combine quantitative methods with
qualitative methods.
Surveys
• Surveys are used to collect data from the target audience
and gather insights into their preferences, opinions,
choices, and feedback related to their products and
services.
• You can also use a ready‐made survey template to save
on time and effort.
• Survey can be distributed through several distribution
channels such as email, website, social media, etc.
• Depending on the type and source of your audience, you
can select the channel.
Polls
• A poll is a way of knowing people’s choices and
understanding what works for them
• Polls comprise a single‐ or multiple‐choice
question.
• When a quick pulse of the audience's sentiments is
required, you can go for polls.
• Similar to surveys, polls can be embedded into various
platforms.
Interviews
• In this method, the interviewer asks questions either
face‐to‐face or through telephone to the
respondents.
• In face‐to‐face interviews, the interviewer asks a
series of questions to the interviewee in person and
notes down responses.
• This form of data collection is suitable when there are
only a few respondents.
• It is too time‐consuming and tedious to repeat the
same process if there are many participants.
Delphi Technique
• In this method, experts are provided with the estimates
and assumptions of forecasts made by other experts in
the industry.
• Experts may reconsider and revise their estimates and
assumptions based on the information provided by
other experts.
• The consensus of all experts constitutes the final forecast.
Focus Groups
• In a focus group, a small group of people, around 8‐10
members, discuss the common areas of the problem.
• Each individual provides his/her insights on the issue
concerned.
• A moderator regulates the discussion among the
group members.
• At the end of the discussion, the group reaches a
consensus.
Questionnaire
• A questionnaire is a printed set of questions, either
open‐ended or closed‐ended.
• The respondents are required to answer based on their
knowledge and experience with the issue concerned.
• A questionnaire is often part of a survey, whereas a
questionnaire's end‐goal may or may not be a survey.
Secondary Data Collection Methods
• Secondary data is the data that has been used in the
past.
• The researcher can obtain data from the sources,
both internal and external, to the organization
• The secondary data collection methods, too, can
involve both quantitative and qualitative techniques.
• Secondary data is easily available and hence, less
time‐consuming and less expensive as compared to
the primary data.
• However, with the secondary data collection
methods, the authenticity of the data gathered
cannot be verified.
REVIEW
CHAPTER SIX
Measurement Concept
Measurement and Scaling Concepts
Measurement is the process of observing and recording
the observations that are collected as part of a research
effort. There are two major issues that will be considered
here;
1. Understanding the fundamental ideas involved in
measuring
2. Understanding the different types of measures that
you might use in social research
What Do I Measure?
• The decision statement, corresponding research
questions, and research hypotheses can be used to
decide what concepts need to be measured in a given
project
• Measurement is the process of describing some
property of a phenomenon of interest, usually by
assigning numbers in a reliable and valid way.
• The numbers convey information about the property
being measured.
• When numbers are used, the researcher must have a
rule for assigning a number to an observation in a
way that provides an accurate description.
• However, errors can be committed in a measurement system.
For example, consider the following students' grades:

Consider two students who have percentage scores of 79.4 and 70.0, respectively. The most likely outcome when
these scores are translated into “letter grades” is that each receives a C (the common 10-point spread would yield a
70–80 percent range for a C). Consider a third student who finishes with a 69.0 percent average and a fourth
student who finishes with a 79.9 percent average.
Which students are happiest with this arrangement? The first two students receive the same grade, even though
their scores are 9.4 percent apart. The third student gets a grade lower (D) performance than the second student,
even though their scores are only 1 percent different. The fourth student, who has a score only 0.5 percent higher
than the first student, would receive a B. Thus, the measuring system (final grade) suggests that the fourth student
outperformed the first (assuming that 79.9 is rounded up to 80) student (B versus C), but the first student did not
outperform the second (each gets a C), even though the first and second students have the greatest difference in
percentage scores.
• Thus, a strong case can be made that error exists in
this measurement system.
• All measurement, particularly in the social sciences,
contains error.
• Researchers, if you are to represent concepts
truthfully, make sure that the measures used, if not
perfect, are accurate enough to yield correct
conclusions.
• Ultimately, research and measurement are tied closely
together
Concept
• A concept can be thought of as a generalized idea that
represents something of meaning.
• Concepts such as age, sex, education, and number of
children are relatively concrete properties.
• Other concepts are more abstract.
• Concepts such as loyalty, personality, channel power, trust,
corporate culture, customer satisfaction, value, and so on are
more difficult to both define and measure.
• E.g. Loyalty could be measured as a combination of customer
share (relative proportion of purchase) and commitment
(acceptable sacrifice to do business with you)
• A researcher has to know what to measure before
knowing how to measure something
Operational Definitions 
• Researchers measure concepts through a process
known as operationalization
• Operationalization is the process of identifying scales
that correspond to variance in a concept that will be
involved in a research process
• A scale is a device providing a range of values that
correspond to different values in a concept being
measured
• For example, the scale you may use to check your weight provides
a range of values that correspond to different values in
the concept being measured.
Variables 
• A variable is an element, feature, or factor that is liable to
vary or change.
• For example, consider the following hypothesis;
• H1: Experience is positively related to job performance
• The hypothesis implies a relationship between two
variables, experience and job performance and they
capture variance in the experience and performance
concepts
• The scale used to measure experience is quite
straightforward in this case and would involve simply
providing the number of years an employee has been with
the company
• Job performance, on the other hand, can be quite complex
that could be measured by multiple variables
Constructs 
• A construct is a term used for concepts that are
measured with multiple variables.
• For instance, if you wish to measure the customer
orientation of a salesperson, you could use several
variables like:
• I offer the product that is best suited to a customer’s problem
• A good employee has to have the customer’s best interests in
mind
• I try to find out what kind of products will be most helpful to
a customer
• Each of the variables is captured on a scale of 1 to 5
• Thus, Constructs can be very helpful in operationalizing
a concept
Levels of Scale Measurement 
• Though there are different scales, all scales may not
have the same richness in a measure and all concepts
may not require a rich measure as well
• The four levels or types of scale measurement are
nominal, ordinal, interval, and ratio level scales.
• Each type offers the researcher progressively more
power in analyzing and testing the validity of a scale
Nominal Scale
• Represent the most elementary level of measurement.
• A nominal scale assigns a value to an object for
identification or classification purposes only
• A nominal scale is truly a qualitative scale.
• Nominal scales are extremely useful, and are sometimes
the only appropriate measure, even though they can be
considered elementary
• Nominal scaling is arbitrary. For example, you can assign
1 to designate male and 0 to designate female. You could
equally use any other numbers to designate the two gender
categories.
Ordinal Scale
• Ordinal scales allow things to be arranged in order
based on how much of some concept they possess.
• In other words, an ordinal scale is a ranking scale
• we may use the term rank order to describe an ordinal
scale
• Ordinal scales are somewhat arbitrary, but not nearly
as arbitrary as a nominal scale because they tell us the
order/rank‐which come first etc.
Interval Scale
• Interval scales have both nominal and ordinal
properties, but they also capture information about
differences in quantities of a concept.
• For example, if a professor assigns grades to mid exam
using a numbering system ranging from 1.0–20.0, not
only does the scale represent the fact that a student
with a 16.0 outperformed a student with 12.0, but the
scale would show by how much (4.0).
Ratio Scale
• Ratio scales represent the highest form of
measurement in that they have all the properties of
interval scales with the additional attribute of
representing absolute quantities
• Interval scales possess only relative meaning,
whereas ratio scales represent absolute meaning.
In other words, ratio scales provide iconic
measurement
Mathematical and Statistical Analysis of Scales

A ratio scale has all the properties of nominal, ordinal, and interval scales. However, the same
cannot be said in reverse
• While it is true that mathematical operations can be
performed with numbers from nominal scales, the
result doesn't have a great deal of meaning. For
example, a professor can't judge the quality of students
by their average ID number.
• Thus, although you can put numbers into formulas and
perform calculations with almost any numbers, the
researcher has to know the meaning behind the
numbers before meaningful conclusions can be drawn
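The point can be demonstrated directly; the codes and ages below are hypothetical:

```python
# Mechanically averaging nominal codes (1 = male, 0 = female, as in the
# earlier example) works, but the result carries no real meaning, unlike
# the mean of a ratio-scaled variable such as age.
gender_codes = [1, 0, 1, 1, 0]  # nominal: identification only
ages = [22, 30, 25, 28, 35]     # ratio: absolute quantities

mean_gender = sum(gender_codes) / len(gender_codes)  # 0.6, not interpretable
mean_age = sum(ages) / len(ages)                     # 28.0, a meaningful mean

print(mean_gender, mean_age)
```

The arithmetic is identical in both cases; only the level of measurement determines whether the result means anything.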
Discrete Measures
• Discrete measures are those that take on only one of a finite
number of values.
• A discrete scale is most often used to represent a classification
variable.
• Therefore, discrete scales do not represent intensity of
measures, only membership.
• Common discrete scales include any yes‐or‐no response,
matching, color choices, or practically any scale that involves
selecting from among a small number of categories.
• Thus, when someone is asked to choose from the following
responses disagree, neutral, agree, the result is a discrete
value that can be coded 1, 2, or 3, respectively. This is also an
ordinal scale to the extent that it represents an ordered
arrangement of agreement.
• Nominal and ordinal scales are discrete measures
Continuous Measures
• Continuous measures are those assigning values anywhere along
some scale range in a place that corresponds to the intensity of
some concept.
• Strictly speaking, interval scales are not necessarily continuous.
Consider a common type of survey question: a five‐point
agree/disagree item.
• This is a discrete scale because only the values 1, 2, 3, 4, or 5 can
be assigned. Furthermore, it is an ordinal scale because it only
orders based on agreement. We really have no way of knowing
that the difference in agreement of somebody marking a 5 instead
of a 4 is the same as the difference in agreement of somebody
marking a 2 instead of a 1. Therefore, the mean is not an
appropriate way of stating central tendency and, technically, we
really shouldn't use many common statistics on these responses.
• However, as a scaled response of this type takes on
more values, the error introduced by assuming that the
differences between the discrete points are equal
becomes smaller.
• This may be seen by imagining a Likert scale (the
traditional business research agreement scale) with a
thousand levels of agreement rather than three.
• The differences between the different levels become so
small with a thousand levels that only tiny errors could
be introduced by assuming each interval is the same.
• Therefore, business researchers generally treat
interval scales containing five or more categories of
response as interval.
• When fewer than five categories are used, this
assumption is inappropriate.
Index Measures
• An index assigns a value based on how much of the concept
being measured is associated with an observation. Indexes often
are formed by putting several variables together.
• Likewise, a consumer’s attitude toward some product is usually a
function of multiple attributes.
• An attribute is therefore, a single characteristic or fundamental
feature of an object, person, situation, or issue.
• Multi‐item instruments for measuring a construct are called
index measures, or composite measures.
• An index measure assigns a value based on how much of the
concept being measured is associated with an observation.
Indexes often are formed by putting several variables together
• For example, a social class index might be based on three weighted
variables: occupation, education, and area of residence
• Composite measures also assign a value based on a
mathematical derivation of multiple variables.
• For example, salesperson satisfaction may be measured by
combining questions such as,
• How satisfied are you with your job?
• How satisfied are you with your territory?
• How satisfied are you with the opportunity your job offers?”
• For most practical applications, composite measures and
indexes are computed in the same way.
Computing Scale Values
• Summed scale is a scale created by simply summing
(adding together) the response to each item making up
the composite measure.
• Sometimes, a response may need to be reverse‐coded
before computing a summated or averaged scale value.
Reverse coding means that the value assigned for a
response is treated oppositely from the other items.
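A minimal sketch of a summed scale with one reverse-coded item, assuming a 5-point response format; the item names and responses are hypothetical:

```python
# Summed scale with one reverse-coded item on a 5-point response format.
SCALE_MAX = 5
responses = {"item1": 4, "item2": 5, "item3": 2}  # item3 is negatively worded

# Reverse coding on a 1-5 scale: 1<->5, 2<->4, 3 stays 3, i.e. 6 - value
item3_recoded = (SCALE_MAX + 1) - responses["item3"]  # 6 - 2 = 4

summed_score = responses["item1"] + responses["item2"] + item3_recoded
print(summed_score)  # 13
```

An averaged scale value would simply divide the summed score by the number of items.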
Three Criteria for Good Measurement 
• The three major criteria for evaluating measurements are
reliability, validity, and sensitivity
• Reliability is an indicator of a measure’s internal consistency.
• Consistency is the key to understanding reliability.
• A measure is reliable when different attempts at measuring
something converge on the same result.
• For example, think of a scale to measure weight. You would expect
this scale to be consistent from one time to the next.
• If you stepped on the scale and it read 70 kg., then got off and back on, you
would expect it to again read 70 kg.
• If it read 60 kg. the second time, the scale would not be reliable
• Coefficient alpha is the most commonly applied estimate
of a multiple‐item scale’s reliability
• Generally speaking, scales with a coefficient α between
0.80 and 0.95 are considered to have very good reliability.
• Scales with a coefficient α between 0.70 and 0.80 are
considered to have good reliability, and an α value
between 0.60 and 0.70 indicates fair reliability.
• When the coefficient α is below 0.6, the scale has poor
reliability.
• Most statistical software packages, such as SPSS, will
easily compute coefficient α.
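For illustration, coefficient alpha can also be computed directly from its standard definition; the three items and five respondents below are made-up data, not from the text:

```python
from statistics import variance

# Coefficient (Cronbach's) alpha:
#   alpha = (k / (k-1)) * (1 - sum(item variances) / variance(totals))
def cronbach_alpha(items):
    """items: one inner list of responses per scale item."""
    k = len(items)
    item_variances = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_variances / variance(totals))

items = [
    [4, 5, 3, 4, 5],  # item 1 responses (5 hypothetical respondents)
    [4, 4, 3, 5, 5],  # item 2 responses
    [5, 5, 2, 4, 4],  # item 3 responses
]
print(round(cronbach_alpha(items), 2))  # 0.81 -- very good reliability
```

Either sample or population variance may be used, as long as the choice is consistent; the factor cancels in the ratio.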
• You can also use test‐retest reliability method
• The test‐retest method of determining reliability involves administering the same scale or
measure to the same respondents at two separate times to test for stability. If the
measure is stable over time, the test, administered under the same conditions each time,
should obtain similar results. Test‐retest reliability represents a measure’s
repeatability
Validity
• Good measures should be both consistent and accurate.
• Reliability represents how consistent a measure is, in that the
different attempts at measuring the same thing converge on the
same point.
• Accuracy deals more with how a measure assesses the intended
concept.
• Validity is the accuracy of a measure or the extent to which a
score truthfully represents a concept.
• In other words, are we accurately measuring what we think we
are measuring?
• The four basic approaches to establishing validity are face validity,
content validity, criterion validity, and construct validity.
Face Validity
• Refers to the subjective agreement among professionals
that a scale logically reflects the concept being
measured.
• When an inspection of the test items convinces experts
that the items match the definition, the scale is said to
have face validity.
• For example, a researcher may create a questionnaire
that aims to measure depression levels in individuals. A
colleague then may look over the questions and deem
the questionnaire to be valid purely on face value.
Content Validity
• It is the degree that a measure covers the breadth of the domain of
interest
• The term content validity refers to how well a survey or test
measures the construct that it sets out to measure
• Content validity refers to the degree that a measure covers the
domain of interest.
• Do the items capture the entire scope, but not go beyond, the
concept we are measuring?
For example, suppose a professor wants to test the overall knowledge of his students
in the subject of elementary statistics. His test would have content validity if:
•The test covers every topic of elementary statistics that he taught in the class.
•The test does not cover unrelated topics such as history, economics, biology, etc.
A test lacks content validity if it doesn’t cover all aspects of a construct it sets out to
measure or if it covers topics that are unrelated to the construct in any way.
Criterion validity 
• Is the ability of a measure to correlate with other
standard measures of similar constructs or
established criteria.
• Criterion validity addresses the question, “How
well does my measure work in practice?”
• Because of this, criterion validity is sometimes
referred to as pragmatic validity.
Construct Validity
• Exists when a measure reliably measures and truthfully
represents a unique concept;
• Consists of several components including face validity,
content validity, criterion validity, convergent validity,
and discriminant validity.
• Convergent validity‐ Concepts that should be related
to one another are in fact related; highly reliable scales
contain convergent validity.
• discriminant validity‐ represents the uniqueness or
distinctiveness of a measure; a scale should not
correlate too highly with a measure of a different
construct.
Sensitivity
• The sensitivity of a scale is an important measurement
concept, particularly when changes in attitudes or other
hypothetical constructs are under investigation.
• Sensitivity refers to an instrument’s ability to accurately
measure variability in a concept.
• A dichotomous response category, such as “agree or
disagree,” does not allow the recording of subtle attitude
changes.
• Thus, adding a more sensitive measure with numerous
categories (e.g., mildly disagree) on the scale may be
needed
• The sensitivity of a scale based on a single question or single
item can also be increased by adding questions or items
Attitude Measurement 
• Attitude is a group of opinions, values and dispositions to
act associated with a particular object or concept.
• Attitude is an enduring disposition to respond consistently
to specific aspects of the world, including actions, people,
or objects.
• One way to understand an attitude is to break it down into
its components.
• Attitude has 3 components – affective, cognitive and
behavioral
• Affective – refers to an individual’s general feelings or emotions
toward an object.
• Cognitive ‐ represents an individual’s knowledge about attributes
and their consequences
• Behavioral ‐ a predisposition to action by reflecting an
individual’s intentions.
Importance of Measuring Attitudes 
• Most managers believe that changing consumers’ or
employees’ attitudes toward their company or their
company’s products or services is a major goal.
• Marketers are interested in measuring consumers’
attitudes toward their products.
• Because modifying attitudes plays a pervasive role in
developing strategies to address these goals, the
measurement of attitudes is an important task.
• There is a wide variety of methods available for
measuring consumers’ attitudes.
• However, only a limited number are discussed here
Techniques for Measuring Attitudes 
• Due to lack of consensus about the exact definition of
the concept, there are different techniques to
measure attitude.
• In addition, the cognitive, affective, and behavioral
components of an attitude may be measured by
different means
• For example, sympathetic nervous system responses
may be recorded using physiological measures to
quantify affective component, but they are not good
measures of behavioral intentions
Techniques…
• One of the simplest ways of measuring attitudes is to
ask questions directly.
• For example, an attitude researcher for a calculator
manufacturer may ask respondents what they think about
the firm’s new digital solar calculator’s styling and design.
• The better option for a marketer is to use scaling
techniques.
• An attitude scale involves a series of phrases,
adjectives, or sentences about the attitude object.
• Respondents may be asked to state the degree to
which they agree or disagree with some statements
Attitude Rating Scales 
• The most common practice in business research
There are many attitude rating scales, such as:
1. Simple Attitude Scales (agree/disagree; yes/no; +ve/-ve, etc.)
• In its most basic form, attitude scaling requires that an
individual agree or disagree with a statement or respond to a
single question.
• For example, respondents may be asked to respond the
statement “the leader of the labor union should run for re-
election”
• This type of self-rating scale merely classifies respondents
into one of two categories, thus having only the properties of
a nominal scale, and the types of mathematical analysis that
may be used with this basic scale are limited
2. Category Scales 
• Category scale is rating scale that consists of several response
categories, often providing respondents with alternatives to
indicate positions on a continuum.
• It is a more sensitive measure than a scale that has only two
response categories
• The benefits of additional points in the measurement scale should
be obvious.
• However, if the researcher tries to represent something that is
truly bipolar or dichotomous (yes/no, female/male,
member/nonmember, and so on) with more than two categories,
error may be introduced.
• Question wording is an extremely important factor in the
usefulness of these scales
3. Method of Summated Ratings: The Likert Scale
• A likert scale is a measure of attitudes designed to allow
respondents to rate how strongly they agree or disagree with
carefully constructed statements, ranging from very positive to
very negative attitudes toward some object.
• Reverse Coding
A method of making sure all the items forming a composite scale are
scored in the same direction. Negative items can be recoded into the
equivalent responses for a non‐reverse‐coded item
e.g. for the statements “I really enjoy my business research class”
and “My business research class is my least favorite class”
If students are happy with the course, they could rate 5 for the first
and consistently, they will reply 1 for the 2nd statement in a 5 point
likert scale (1 and 5; 2 and 4; 3 as it is 3)
Semantic Scale
A series of bipolar rating scales such as "good" and "bad"
anchors both ends (or poles) of the scale.
Composite Scales
• A composite scale is a way of representing a latent
construct by summing or averaging respondents'
reactions to multiple items, each assumed to indicate
the latent construct
Example: consider a multi-item question measuring attitudes
toward patients' interaction with a physician's service staff.
Item 3 is negatively worded and must be reverse-coded
prior to being used to create the composite scale.
Problems in measuring attitude
• Problems with attitude measurement are of three
types.
• First, researchers are not clearly defining their attitude
variables.
• In other words, they are not operationalizing the constructs
that they are setting out to measure.
• Second, attitudes are not measured well.
• Finally, attitude measurement has tended to be of only
peripheral importance to researchers.
QUESTIONNAIRE DESIGN
• The research questionnaire development stage is critically
important as the information provided is only as good as the
questions asked – the GIGO (garbage in, garbage out) concept.
• However, the importance of question wording is easily, and far
too often, overlooked.
• Businesspeople who are inexperienced at research frequently
believe that constructing a questionnaire is a simple task.
• Good questionnaire design requires far more than correct
grammar.
• People don’t understand questions just because they are
grammatically correct.
• Respondents simply may not know what is being asked.
• The question may not mean the same thing to everyone
interviewed.
Questionnaire Quality and Design: Basic Considerations
Questions must meet the basic criteria of relevance and
accuracy; thus, the following should be considered:
• What should be asked? (relevance, etc.)
• How should questions be phrased? (wording, etc.)
• Sequence of questions
• Questionnaire layout?
• Pretested?
• Does the questionnaire need to be revised?
Wording Questions
• There are many ways to phrase questions, and many
standard question formats Could be used
• The first decision in questionnaire design is based on
the amount of freedom respondents have in
answering.
• Should the question be open‐ended, allowing the
participants freedom to choose their manner of
response, or closed, where the participants choose
their response from an already determined fixed set
of choices?
Open‐ended response questions
• Open‐ended response questions pose some problem or topic
and ask respondents to answer in their own words.
• If the question is asked in a personal interview, the interviewer
may probe for more information
• Open‐ended response questions are free‐answer questions
• Open‐ended response questions are most beneficial when the
researcher is conducting exploratory research, especially when
the range of responses is not yet known.
• Respondents are free to answer with whatever is foremost in
their minds.
• Such questions can be used to learn which words and phrases
people spontaneously give to the free‐response question
• However, the job of editing, coding, and analyzing the data is
quite extensive
Fixed‐ alternative questions
• Fixed‐ alternative questions—sometimes called
closed‐ended questions which give respondents
specific limited‐alternative responses and ask them
to choose the one closest to their own viewpoints
• However, when a researcher is unaware of the
potential responses to a question, fixed‐alternative
questions obviously cannot be used.
Types of Fixed‐Alternative Questions
• Simple‐dichotomy (dichotomous) question
• A fixed‐alternative question that requires the respondent to choose one of
two alternatives (e.g. yes/no questions)
• Determinant‐choice question
• A fixed‐alternative question that requires the respondent to choose one
response from among multiple alternatives (e.g. What was your class during
your last flight? Business, economy, etc.)
• Frequency‐determination question
• A fixed‐alternative question that asks for an answer about general frequency
of occurrence (e.g. How frequently do you watch BBC? Every day, 2 times a
week, etc.)
• Checklist question
• A fixed‐alternative question that allows the respondent to provide multiple
answers to a single question by checking off items (e.g. From where did you
learn that AAU has an IB program at the Masters level? Friends, website, TV
ads, etc.)
• Totally exhaustive
• All the response options are covered and that
every respondent has an alternative to check.
• The alternatives should also be mutually
exclusive‐ there should be no overlap among
categories and only one dimension of an issue
should be related to each alternative.
• For example, if you want to distribute a
questionnaire to employees, the minimum
age category could be based on the
employment regulations of a country
• Age: 18‐22; 23‐27; 28‐32; etc.
• Make sure there is no overlap
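A small sketch of checking that a set of age categories is mutually exclusive and exhaustive; the collapsed 33+ top category and the upper bound of 120 are assumptions made for this example:

```python
# Check that age categories are mutually exclusive (no overlap) and
# exhaustive (no gaps) over the eligible range starting at 18.
categories = [(18, 22), (23, 27), (28, 32), (33, 120)]  # inclusive bounds

def category_for(age):
    matches = [c for c in categories if c[0] <= age <= c[1]]
    # exactly one match means no overlap and no gap at this age
    assert len(matches) == 1, f"age {age} matches {len(matches)} categories"
    return matches[0]

for age in range(18, 121):  # every eligible age falls in exactly one category
    category_for(age)
print(category_for(25))  # (23, 27)
```

A range like 18–22; 22–26 would fail this check at age 22, flagging the overlap before the questionnaire goes out.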
Guidelines for Constructing Questions
• Developing good business research questionnaires is a
combination of art and science.
• Few hard‐and‐fast rules exist in guiding the
development of a questionnaire.
• Fortunately, research experience has yielded some
guidelines that help prevent the most common
mistakes
• Thus, designing a best questionnaire constitutes both
your experience and the guidelines
1. Avoid Complexity: Use Simple, Conversational
Language
• Words used in questionnaires should be readily
understandable to all respondents
• Remember, not all people have the vocabulary of a
college graduate.
• The technical jargon of top corporate executives
should be avoided when surveying retailers or
industrial users.
• “Brand image,” “positioning,” “marginal analysis,” and
other corporate language may not have the same
meaning for, or even be understood by, a store owner‐
operator in a retail survey
2. Avoid Leading and Loaded Questions
• Leading and loaded questions are a major source of bias
in question wording.
• A leading question suggests or implies certain answers
• Loaded question: a question that suggests a socially desirable answer or is emotionally charged. For example, consider the following:
• What influences you most in your vote?
• My own informed opinion
• Media endorsement
• Family or friends
• Most probably, respondents will choose the first option
• A question statement may be leading because it is phrased to reflect either the negative or the positive aspects of an issue.
• To avoid this, you can use the split‐ballot technique (reversing the phrasing for 50 percent of the sample).
• For example, in a study on small‐car buying behavior,
one‐half of a sample of imported‐car purchasers received
a questionnaire in which they were asked to agree or
disagree with the statement,
• Small locally assembled cars are cheaper to maintain than
small imported cars
• The other half of the imported‐car owners received a questionnaire in
which the statement read,
• Small imported cars are cheaper to maintain than small
locally assembled cars
• Split‐ballot technique: using two alternative phrasings of the same question for respective halves of a sample to elicit a more accurate total response than would a single phrasing.
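As a hedged illustration (the helper name and respondent IDs are invented, not from the text), the random half‐and‐half assignment behind the split‐ballot technique can be sketched in Python:

```python
import random

# Hypothetical sketch: randomly assign respondent IDs to one of two
# question phrasings so that roughly half receive each version.
def split_ballot(respondent_ids, seed=42):
    rng = random.Random(seed)
    ids = list(respondent_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    version_a = set(ids[:half])   # e.g. "locally assembled cars cheaper" phrasing
    version_b = set(ids[half:])   # e.g. "imported cars cheaper" phrasing
    return version_a, version_b

a, b = split_ballot(range(100))
print(len(a), len(b))  # 50 50
```

Comparing the response distributions of the two halves then reveals how much the phrasing itself biases the answers.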
3. Avoid Ambiguity: Be as Specific as Possible
• Items on questionnaires often are ambiguous because
they are too general
• For example, consider the following questions: Please
check which, if any, of the following sources of
information about investments you regularly use.
• What exactly does regularly mean? It can vary from respondent to respondent.
• Where is the cutoff? It is much better to use specific time periods whenever possible, for example annually, every six months, etc.
4. Avoid Double‐Barreled Items
• A question that may induce bias because it covers
two issues at once.
• For example: Did your plant use any commercial feed or
supplement for livestock or poultry in 2010?
• Yes No
5. Avoid Making Assumptions
• E.g. Should BoA continue to pay its outstanding yearly dividends?
• Yes   No
• The built‐in assumption in the above question is that people believe the dividends paid by BoA are outstanding
6. Avoid Burdensome Questions That May Tax
the Respondent’s Memory
• A simple fact of human life is that people forget.
• Researchers writing questions about past behavior or
events should recognize that certain questions may
make serious demands on the respondent’s memory
• Did you have any overnight travel for work‐related activities
last month?
• YES NO
What Is the Best Question Sequence?
• The order of questions, or the question sequence,
may serve several functions for the researcher.
• If the opening questions are interesting, simple to
comprehend, and easy to answer, respondents’
cooperation and involvement can be maintained
throughout the questionnaire.
• Asking easy‐to‐answer questions teaches
respondents their role and builds their confidence.
• To obtain unbiased responses, asking general questions before specific questions is recommended; this is called the funnel technique.
Filter and pivot questions 
• Filter question: a question that screens out respondents who are not qualified to answer a second question.
• Pivot question: a filter question used to determine which version of a second question will be asked.
Review
PART II
Analysis and Interpretation of
Data
Chapter Seven
Basic Data Analysis for Qualitative Research
Qualitative Data
• Data that are not easily reduced to numbers
• Data that relate to the concepts, opinions, values and behaviors of people
• Data that can be broken down through the process of classifying or coding; the pieces of data are then categorized
What is Qualitative Data Analysis?
•Qualitative Data Analysis (QDA) is the
range of processes and procedures
whereby we move from the qualitative
data that have been collected into some
form of explanation, understanding or
interpretation of the people and
situations we are investigating.
What is Qualitative Data Analysis?
•Qualitative Data analysis is a process
of breaking down data into smaller
units, determining their importance,
and putting pertinent units together in
a more general form.
Qualitative Data Collection
• Observation (field notes, checklist….)
• Interviews
• Documents (reports, meeting minutes)
• Focus Groups
• Tape Recorder
• Audio/Video Recording
• Questionnaires (open‐ended)
Coding
•Coding is a process of reducing the data
into smaller groups so they are more
manageable.
•The process also helps you to begin to
see relationships between categories
and patterns of interaction.
Coding…
• Sections of text transcripts may be marked by the researcher in various ways (underlined in a colored pen, given a numerical reference, or bracketed).
Categories/Themes
• A major step in analysing qualitative data 
is coding speech/ words/text into 
meaningful categories/themes. 
• As you read and reread through the data, 
you can compile the data into categories
or themes
Categories/Themes
• A theme/category is generated when similar issues and ideas are expressed by participants.
• The theme or category may be labeled by a
word or expression taken directly from the
data or by one created by the researcher
because it seems to best characterize the
essence of what is being said.
Organize Data
Once the field work is over
• Researcher starts with a large set of data and seeks to
narrow into small groups of key data
• Attempt to make sense of the data as a whole
• Organizing the materials by type:
• all observations,
• all interviews,
• all field notes, etc.
Exploring Data
• The first step in data analysis is to explore the data and check that it is complete and legible
• Obtain a general sense of the data
• Read and write memos about all field notes,
observer comments to get an initial sense
of the data
Steps in Coding the Data
• Get a sense of the whole
• Pick one document (e.g. one interview, one 
field note….). 
• Go through it, asking the question “what is 
this person talking about?”
• Identifying text segments, placing a bracket 
around them and assigning a code word or 
phrase that describes the meaning of the 
text
Steps in Coding the Data…
• After coding an entire text, make a list of all the code words. Group similar codes and look for redundant codes
• Take the list and go back to the data.
• Circle specific quotes from participants that
support the codes
• Reduce the list of codes to get five to seven
themes/categories
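The code‐listing and reduction step above can be sketched in Python. The code words below are hypothetical examples, not taken from an actual transcript:

```python
from collections import Counter

# Hypothetical sketch of the code-reduction step: tally how often each
# code word was applied across transcript segments, then keep the most
# frequent codes as candidate themes/categories.
coded_segments = [
    "reading_challenge", "time_constraint", "reading_challenge",
    "teacher_assistance", "reading_challenge", "time_constraint",
]

code_counts = Counter(coded_segments)
# The most common codes become candidate themes/categories
themes = [code for code, _ in code_counts.most_common(2)]
print(themes)  # ['reading_challenge', 'time_constraint']
```

In practice the grouping of similar codes is a judgment call by the researcher; the tally only helps surface which codes recur.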
Identifying Themes
• Are there patterns that emerge?
‐ Events that keep repeating themselves
‐ Key phrases that participants use to describe their feelings
Themes
Like codes, themes have labels. Types:
• Ordinary themes – themes that a researcher might
expect to find
• Unexpected themes – themes that are surprises
• Hard‐to‐classify themes – themes that contain ideas
that do not easily fit into one theme or that overlap
• Major and minor themes – themes that represent the
major ideas and the minor secondary ideas
Summarizing your Data
• After you have coded a set of data, write a
summary of what you are learning.
• Similarly, summarize the key themes that
emerge.
• With your data coded and summarized you
are ready to look across the various
summaries and synthesize your findings
across multiple data sources.
RQ: Why do students have problems with critical thinking?
Major and Minor Themes from Teachers’ Interview

Question | Major Theme | Minor Theme
What are some of the challenges that your students face in developing their critical thinking skills? | Reading challenges | Time constraints
How do you help to enhance the critical thinking skills of your students? | Need authentic learning experience | Greater immersion in reading
What supplementary materials do you encourage your students to read within the subject area? | Need to read newspapers | Reading of journals
Collating Data into a Table of Coded Responses
Interview (teachers) | Observation (students) | Questionnaire (students)
Reading challenges | Unwilling to work in group | Lack of group cohesion
Need authentic learning experience | Requests teacher’s assistance too frequently | Laziness and fatigue
Need to read newspapers | Tardiness | Limited independent thinking
Explanation of Themes
• Write up and explain the themes in narrative format
under the specific research question
• Use a few actual quotes from the participants’ responses to validate your narrative (3‐5 are enough)
• Do this for each major theme that emerged from the
data
Example of Narrative Format
• RQ: Why do students have problems with critical thinking?
• Reading challenges. When asked what are the challenges
that students face in developing critical thinking skills the
teachers interviewed felt that students had reading
challenges. Many students were reading below their
grade levels with limited vocabulary. This made it difficult
for students to decipher the meaning of written work.
• The following are some direct quotations from the participants:
• [1] “Students do not read on their own. Hence they
cannot think critically when given the opportunity”.
• [2] “Their reading level”.
• [3] “They are unable to decipher the meaning of some key
terms used in the question”.
Chapter Eight
Basic Data Analysis for Quantitative Research 
• Quantitative data is data expressing a certain quantity,
amount or range.
• Usually, there are measurement units associated with
the data, e.g. metres, in the case of the height of a
person.
• Since quantitative data analysis is all about analyzing numbers, it’s no surprise that it involves statistics.
• The analysis can vary from pretty basic calculations
(for example, averages and medians) to more
sophisticated analyses (for example, correlations and
regressions).
Steps in the Process of Quantitative
Data Analysis
• Preparing the data for analysis
• Conducting the data analysis
• Reporting the results
• Interpreting the results
Preparing the Data for Analysis: Scoring
the Data
• Score data by assigning numeric codes to responses
• Continuous scale example: score “Strongly agree” as a “5”
and “Strongly disagree” as a “1.”
• Categorical scale example: Score “Female” as a “1” and
“Male” as a “2”
• Create a codebook using information from
instruments, when possible
• Enter the coded data into the software of your choice
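A minimal sketch of the scoring step in Python (the course works in SPSS; the codebook entries below are illustrative, not from an actual instrument):

```python
# Hypothetical codebook mapping verbal responses to numeric codes,
# mirroring the continuous and categorical scale examples above.
codebook = {
    "agreement": {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
                  "Agree": 4, "Strongly agree": 5},
    "gender": {"Female": 1, "Male": 2},
}

raw_responses = [
    {"agreement": "Agree", "gender": "Female"},
    {"agreement": "Strongly disagree", "gender": "Male"},
]

# Score each response by looking up its numeric code in the codebook
scored = [
    {var: codebook[var][answer] for var, answer in row.items()}
    for row in raw_responses
]
print(scored)  # [{'agreement': 4, 'gender': 1}, {'agreement': 1, 'gender': 2}]
```

Keeping the codebook in one place makes the coding scheme reproducible and easy to document alongside the instrument.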
Selecting a Statistical Program
• Statistical Package for Social Sciences (SPSS)
most popular
• Other programs
• Stata
• EViews
Clean and Account for Missing 
Data
• Identify scores outside of the accepted range
(Errors)
• Participants provide scores outside the range
• Input mistakes
• Assess the database for missing data and
determine how to handle
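A hedged sketch of this cleaning pass in Python, assuming an accepted score range of 1‐5 and `None` for missing entries (the data values are invented):

```python
# Flag scores outside the accepted range (errors) and locate missing
# values (None) so each can be handled before analysis.
data = [4, 5, None, 7, 2, None, 1, 0]   # illustrative entries
valid_range = range(1, 6)               # accepted codes: 1..5

errors = [i for i, v in enumerate(data) if v is not None and v not in valid_range]
missing = [i for i, v in enumerate(data) if v is None]

print(errors)   # [3, 7] -> out-of-range entries (the 7 and the 0)
print(missing)  # [2, 5]
```

Out‐of‐range values usually point to input mistakes and should be traced back to the original questionnaire; missing values need an explicit handling decision (e.g. exclusion or imputation).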
Conducting Descriptive Analysis
• Measures of central tendency (a value or score that represents the entire distribution)
• Mean: Typically called the “average”
• Median: The value or score that divides the
top half of a distribution from the bottom half
• Mode: The value or score that occurs most
often
Conducting Descriptive Analysis (cont’d)
• Measures of variability (describe the “spread” of the scores)
• Range: The difference between the
highest and lowest scores
• Standard deviation: The standard
distance the scores are away from the
mean
Conducting Descriptive Analysis (cont’d)
• Measures of relative standing
• Percentile rank: The percentage of
participants in the distribution with scores
at or below a particular score
• Calculated score: Enables a researcher to
compare scores from different scales
• Z-Score: A popular form of the standard
score, has a mean of 0 and a standard
deviation of 1
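These descriptive measures can be computed with Python’s standard `statistics` module (the scores below are invented for illustration):

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]   # illustrative sample

mean = statistics.mean(scores)       # the "average" = 5
median = statistics.median(scores)   # middle of the distribution = 4.5
mode = statistics.mode(scores)       # most frequent score = 4
spread = max(scores) - min(scores)   # range = 7
sd = statistics.pstdev(scores)       # population standard deviation = 2.0

# Z-score: how many standard deviations a score lies from the mean
z = (scores[0] - mean) / sd          # (2 - 5) / 2 = -1.5
print(mean, median, mode, spread, sd, z)
```

A z‐score of −1.5 says the score 2 sits one and a half standard deviations below the mean, which is what makes scores from different scales comparable.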
Descriptive Statistics

• Central Tendency: Mean, Median, Mode
• Variability: Variance, Standard Deviation, Range
• Relative Standing: Z‐Score, Percentile Ranks
Inferential Statistics

• T‐test, Analysis of Variance, Chi‐Square, Pearson Correlation, Multiple Regression
Conducting Inferential Analysis
• Hypothesis testing: A procedure for making
decisions about results by comparing an observed
value of a sample with a population value to
determine if no difference or relationship exists
between the values
• Confidence interval: The range of upper and
lower statistical values that is consistent with
observed data and is likely to contain the actual
population mean
Conducting Hypothesis Tests
• Identify a null and alternative hypothesis
• Set the level of significance (alpha level) for rejecting
the null hypothesis
• Collect the data
• Compute the sample statistic
• Make a decision about rejecting or failing to reject
the hypothesis
• NB: Usually the research hypothesis (alternative hypothesis) is preferred in research, as it is formulated after a review of the literature
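The five steps above can be sketched with a one‐sample t‐test in Python, assuming SciPy is available; the sample data and hypothesized population mean are invented for illustration:

```python
from scipy import stats

# Step 1: H0: population mean = 4.0; H1: population mean != 4.0
# Step 2: set the level of significance
sample = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.4, 4.7]  # Step 3: collected data
mu_0 = 4.0
alpha = 0.05

# Step 4: compute the sample statistic
t_stat, p_value = stats.ttest_1samp(sample, mu_0)

# Step 5: make a decision about the null hypothesis
if p_value < alpha:
    decision = "reject H0"
else:
    decision = "fail to reject H0"
print(decision)
```

Here the sample mean (about 5.05) lies far from the hypothesized 4.0 relative to the sampling error, so the test rejects the null hypothesis.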
Selecting an Appropriate Statistic
• Determine the type of quantitative research
question or hypothesis you want to analyze (e.g.,
compare or relate)
• Identify the number of independent variables
• Identify the number of dependent variables
• Identify whether covariates and the number of
covariates are used in the research question or
hypothesis
Selecting an Appropriate Statistic
• Consider the scale of measurement for your independent variable(s) in the research question or hypothesis
• Identify the scale of measurement for the
dependent variables (e.g., continuous or
categorical)
• Determine if the distribution of the scores is
normal or skewed
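One common way to check that last point, sketched here with SciPy’s Shapiro‐Wilk test (an assumption of this sketch, not a method prescribed by the course; the scores are invented):

```python
from scipy import stats

# Shapiro-Wilk test: the null hypothesis is that the scores come from
# a normally distributed population.
scores = [12, 15, 14, 13, 16, 15, 14, 13, 15, 14, 16, 12]

stat, p = stats.shapiro(scores)
# p > .05 -> no evidence against normality; a parametric statistic is reasonable
# p < .05 -> the distribution appears non-normal/skewed; consider a
#            non-parametric alternative
print(f"W = {stat:.3f}, p = {p:.3f}")
```

With small samples this test has low power, so it is best read alongside a histogram or skewness statistic rather than on its own.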
Normal Curve

[Figure: the normal curve, showing the percentage of scores in each standard deviation band around the mean: 34% between the mean and ±1 SD, 13.5% between ±1 and ±2 SD, and about 2.5% beyond ±2 SD on each side.]
The Normal Curve of Mean Differences of All Possible Outcomes If the Null Hypothesis Is True (no difference between statistic and parameter)

[Figure: a two‐tailed test on the normal curve. Values near the center have a high probability if the null hypothesis is true; extremely low‐probability values in each tail (alpha = .025 on each side) form the critical regions where the null hypothesis is rejected.]
Reporting the Results
• Tables summarize statistical information
• Title each table
• Present one table for each statistical test
• Organize data into rows and columns with simple and
clear headings
• Report notes that qualify, explain, or provide additional
information in the tables.
• Notes include information about the sample size, the
probability values used in hypothesis testing, and the
actual significance levels of the statistical test
Reporting the Results (cont’d)
• Figures (charts, pictures, drawings) portray variables and
their relationships
• Labeled with a clear title that includes the number of the
figure
• Augment rather than duplicate the text
• Convey only essential facts
• Omit visually distracting detail
• Easy to read and understand
• Consistent with and are prepared in the same style as
similar figures in the same article
• Carefully planned and prepared
Reporting the Results (cont’d)
• Present results in detail
• Report whether the hypothesis test was significant or
not
• Provide important information about the statistical test,
given the statistics
• Include language typically used in reporting statistical
results
Discussing the Results
• Summarize major results
• Review major conclusions to each question or
hypothesis
• Explain the implications of the results for the
audiences
• Triangulate with existing research
• Explain why they occurred
• Advance limitations
• Suggest future research
• End on positive note
Structural Equation Modelling and other
Multivariate techniques
• Structural equation modeling (SEM) is a family of statistical methods for modeling complex relationships between one or more independent variables and one or more dependent variables.
• In general, SEM allows one to perform a type of multilevel regression/ANOVA on factors.
• You should therefore be quite familiar with
univariate and multivariate regression/ANOVA as
well as the basics of factor analysis to implement
SEM for your data.
AKA
• SEM – Structural Equation Modeling
• CSA – Covariance Structure Analysis
• Causal Models
• Simultaneous Equations
• Path Analysis
• Confirmatory Factor Analysis
SEM in a nutshell
• Combination of factor analysis and regression
• Continuous and discrete predictors and outcomes
• Relationships among measured or latent variables
• Direct link between Path Diagrams and equations 
and fit statistics
• Models contain both measurement and path 
models
Jargon
• Measured variable
• Observed variables, indicators or  manifest variables in 
an SEM design
• Predictors and outcomes in path analysis
• Squares in the diagram
• Latent Variable
• Unobservable variable in the model; also called a factor or construct
• Construct driving measured variables in the 
measurement model
• Circles in the diagram
Jargon
• Error or E
• Variance left over after prediction of a measured 
variable
• Disturbance or D
• Variance left over after prediction of a factor
• Exogenous Variable
• Variable that predicts other variables
• Endogenous Variables
• A variable that is predicted by another variable
• A predicted variable is endogenous even if it in turn 
predicts another variable
Jargon
• Measurement Model
• The part of the model that relates indicators to latent 
factors
• The measurement model is the factor analytic part of 
SEM
• Path model
• This is the part of the model that relates variable or 
factors to one another (prediction)
• If no factors are in the model then only path model 
exists between indicators
Jargon
• Direct Effect
• Regression coefficients of direct prediction
• Indirect Effect
• Mediating effect of x1 on y through x2
• Confirmatory Factor Analysis
• Covariance Structure
• Relationships based on variance and covariance
• Mean Structure
• Includes means (intercepts) into the model
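As a hedged numeric illustration of an indirect effect, the product‐of‐coefficients approach (a × b) can be sketched with two ordinary regressions in Python; the data are simulated so the true paths are known (a ≈ 2, b ≈ 3):

```python
import numpy as np

# Simulate a mediation structure: x -> m (path a), m -> y (path b),
# plus a small direct effect of x on y. Values are invented.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
m = 2.0 * x + rng.normal(scale=0.1, size=n)            # path a = 2
y = 3.0 * m + 0.5 * x + rng.normal(scale=0.1, size=n)  # path b = 3, direct = 0.5

# Path a: regress m on x (coefficient on x)
a = np.linalg.lstsq(np.column_stack([np.ones(n), x]), m, rcond=None)[0][1]
# Path b: regress y on m, controlling for x (coefficient on m)
b = np.linalg.lstsq(np.column_stack([np.ones(n), x, m]), y, rcond=None)[0][2]

indirect = a * b   # indirect (mediated) effect of x on y through m
print(round(indirect, 1))  # ≈ 6.0 (i.e. 2 * 3)
```

Full SEM software estimates both paths simultaneously and supplies standard errors for the indirect effect; this two‐regression sketch only illustrates the idea.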
Diagram elements
• Single‐headed arrow (→)
• This is prediction
• Regression coefficient or factor loading
• Double‐headed arrow (↔)
• This is correlation
• Missing paths
• Hypothesized absence of a relationship
• Can also set a path to zero
Path Diagram
[Figure: an example path diagram. Measured indicators (squares) BDI, CES‐D, and ZDRS load on the latent factor Negative Parental Influence; Dep parent, Depression, and Neglect load on the latent factor Insecure Attachment; Gender enters as a measured predictor. Each indicator carries an error term (E) and the predicted factor a disturbance (D).]
SEM questions
• Does the model produce an estimated population 
covariance matrix that “fits” the sample data?
• SEM calculates many indices of fit; close fit, absolute fit, 
etc.
• Which model best fits the data?
• What is the percent of variance in the variables 
explained by the factors?
• What is the reliability of the indicators?
• What are the parameter estimates from the 
model?
SEM questions
• Are there any indirect or mediating effects in the 
model?
• Are there group differences?
• Multigroup models
• Can change in the variance (or mean) be tracked 
over time?
• Growth Curve or Latent Growth Curve Analysis
SEM questions
• Can a model be estimated with individual and 
group level components?
• Multilevel Models
• Can latent categorical variables be estimated?
• Mixture models
• Can a latent group membership be estimated from 
continuous and discrete variables?
• Latent Class Analysis
SEM questions
• Can we predict the rate at which people will drop 
out of a study or end treatment?
• Discrete‐time survival mixture analysis
• Can these techniques be combined into a huge mess?
• Multiple‐group multilevel growth curve latent class analysis?
SEM limitations
• SEM is a confirmatory approach
• You need to have established theory about the 
relationships
• Cannot be used to explore possible relationships when 
you have more than a handful of variables
• Exploratory methods (e.g. model modification) can be 
used on top of the original theory
• SEM is not causal; experimental design = cause
SEM limitations
• SEM is often thought of as strictly correlational but 
can be used (like regression) with experimental 
data if you know how to use it.
• Mediation and manipulation can be tested 
• SEM is a sophisticated technique, but it does not make up for a poorly designed study, and the results can only be generalized to the population at hand
SEM limitations
• Biggest limitation is sample size
• It needs to be large to get stable estimates of the 
covariances/correlations
• 200 subjects for small to medium sized model
• A minimum of 10 subjects per estimated parameter
• Also affected by effect size and required power
SEM limitations
• Missing data
• Can be dealt with in the typical ways (e.g. regression, 
EM algorithm, etc.) through SPSS and data screening
• Most SEM programs will estimate missing data and run 
the model simultaneously
• Multivariate Normality and no outliers
• Screen for univariate and multivariate outliers
• SEM programs have tests for multi‐normality
• SEM programs have corrected estimators when there’s 
a violation
SEM limitations
• Linearity
• No multicollinearity/singularity
• Residuals Covariances (R minus reproduced R)
• Should be small
• Centered around zero
• Symmetric distribution of errors
• If asymmetric, then some covariances are being estimated better than others
Practical Data Analysis, Report Writing and the Publication Process
Dataset: Entre Competency Data GROUP.sav
• In this section, we will use the data to run descriptive analysis and inferential analysis such as correlation and SEM (regression, EFA, CFA, path analysis, mediating effects, moderating effects, etc.), along with the model‐fit measures and all the assumptions of regression