
Chapter Four

Research Design

• A research design is the arrangement of conditions for the
collection and analysis of data in a manner that aims to
address the research problem.
• The research design should be in line with:
– What is the study about? (problem definition)
– Why is the study being made? (justification)
– Where will the study be carried out? (location)
– What type of data is required? (quantitative, qualitative, primary, secondary)
– Where can the required data be found? (target population)
– What will be the sample design? (technique chosen)
– What techniques of data collection will be used?
(observation, interview, questionnaire, or document analysis)
– How will the data be analyzed? (data analysis techniques
and tools to be employed)
• We may split the overall research design into
three:
– The sampling design, which deals with the method of
selecting the items to be observed.
– The statistical design, which concerns how many items
are to be observed and how the information and data
gathered are to be analyzed.
– The operational design, which deals with the
techniques by which the procedures specified in the
sampling, statistical, and observational designs can be
carried out.

Important concepts in research design
• Variable: a concept which can take on different
values
– Continuous variable:
• A quantitative variable for which all values within
some range are possible.
• A variable which can assume any numerical value
within a specific range (e.g., age, weight, depth).
– Discrete variable:
• A quantitative variable which does not take on all
values in a continuum; often such variables can
assume integer values only, i.e., the individual values
fall on the scale with distinct gaps (e.g., number of children).
Important concepts …
– Dependent variable: the variable that is measured, and that is
presumed to be affected by other variables.
– Independent variable: the variable that is manipulated or
selected as an antecedent, presumed to cause changes in the
dependent variable.
– Extraneous variables: variables other than the independent
variable that may affect the dependent variable.
– Control variable: a variable used to minimize the
effects of extraneous variables.
• Experimental and control groups (in
experimental hypothesis-testing research)
– Control group: a group exposed to the usual conditions.
– Experimental group: a group exposed to some special
conditions.
• Treatments: the different conditions under which the
experimental and control groups are put.
Important concepts …
Example:
• An investigation of users’ perception of Internet
Banking in Ethiopia.

– Control group (A): exposed to the usual conditions
(e.g., the usual study programme).
– Experimental group (B): exposed to some special
conditions (e.g., a special study programme).
Sampling Design
• Population: the entire group under study as
defined by research objectives.
– Sometimes called the “universe” or the
reference population.

• Sample: a subset of the population that
should represent the entire group.
• Census: a complete count of the entire
population.

Two Types of Sampling Methods:
– Probability sampling: every member of the population
has a known chance of being selected.
 Simple random sampling
 Systematic sampling
 Cluster sampling
 Stratified sampling
 Multi-stage sampling

– Non-probability sampling: the chances of selecting
members from the population are unknown.
 Snowball sampling
 Purposive sampling
 Convenience sampling
 Quota sampling (proportional and non-proportional)
 Judgmental sampling

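The main probability-sampling techniques above can be sketched in a few lines of Python. This is a minimal illustration against a hypothetical sampling frame of 100 people tagged with a gender stratum; the frame, sample sizes, and strata are invented for the example.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame: 100 people, each tagged with a gender stratum
population = [{"id": i, "gender": "F" if i % 2 == 0 else "M"}
              for i in range(100)]

# Simple random sampling: every member has an equal, known chance of selection
simple = random.sample(population, k=10)

# Systematic sampling: random start, then every k-th member of the frame
step = len(population) // 10
start = random.randrange(step)
systematic = population[start::step]

# Stratified sampling: draw separately within each stratum so both
# groups are guaranteed representation
strata = {"F": [p for p in population if p["gender"] == "F"],
          "M": [p for p in population if p["gender"] == "M"]}
stratified = [p for group in strata.values()
              for p in random.sample(group, k=5)]

print(len(simple), len(systematic), len(stratified))  # 10 10 10
```

Cluster and multi-stage sampling follow the same pattern, except that whole groups (e.g., schools or districts) are sampled first, and members are then drawn from within the selected groups.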
Steps in Sampling Process
– Defining the population
– Specifying the sampling unit
– Specifying the sampling frame (the means of
representing the elements of the population)
– Specifying the sampling method
– Determining the sample size
– Specifying the sampling plan
– Selecting the sample
Measurement & Measurement Scales
• Measurement is the process through which researchers
describe, explain, and predict the phenomena and
constructs of our daily experiences.
• The concept of measurement is important in research in
two key areas:
– It enables researchers to quantify abstract constructs and
variables.
– It permits sophisticated statistical analysis of the data.
Non-metric Data vs. Metric Data
– Non-metric data (also referred to as qualitative data)
cannot be quantified and are predominantly used
to describe and categorize.
– Metric data (also referred to as quantitative data) are
used to examine amounts and magnitudes.
Four main scales of measurement
1. Nominal scales
2. Ordinal scales
3. Interval scales, and
4. Ratio scales.
• Nominal and ordinal scales are non-metric
• Interval and ratio scales  metric
1. Nominal scales: the least sophisticated type of
measurement, used to qualitatively classify or categorize.
– Measure identity only.
– They have no absolute zero point, cannot be
ordered in a quantitative sequence, and there is no
equal unit of measurement between categories.
– Example: gender, religion, ethnicity, marital status, etc.
2. Ordinal scale: measurement characterized by
the ability to measure a variable in terms of both
identity and magnitude.
– Measures relative magnitude in relation to other variables.
– Ordinal scales represent the rank or ordering of variables.
– Example: finishing position of runners in a race,
educational qualification, academic rank, etc.

3. Interval scale: builds on ordinal measurement by
providing information about both the order of and the
distance between values of variables.
 Measures variables in terms of identity, magnitude, and distance.
 There is an equal unit of measurement between values,
but the zero point is arbitrary.
 Example: temperature in degrees Celsius or Fahrenheit.
4. Ratio scale: the properties of the ratio scale are
identical to those of the interval scale, except that the
ratio scale has an absolute zero point, which means
that all mathematical operations are possible.
– The highest level of measurement.
– Allows for the use of sophisticated statistical techniques.
• Example: money. It is possible to have no (or zero) money
— a zero balance in a checking account.

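The practical difference between interval and ratio scales is that ratio statements (“twice as much”) are meaningful only when the scale has a true zero. A small sketch with invented numbers:

```python
# Interval scale (temperature in Celsius): the zero point is arbitrary,
# so ratios between values are meaningless.
c_cool, c_warm = 10.0, 20.0
naive_ratio = c_warm / c_cool    # 2.0, but 20 °C is not "twice as hot" as 10 °C

# Converting to Kelvin (a ratio scale with an absolute zero) shows why:
k_cool, k_warm = c_cool + 273.15, c_warm + 273.15
true_ratio = k_warm / k_cool     # ~1.035, not 2.0

# Ratio scale (weight in kg): zero means "no weight at all",
# so ratios are valid.
w_light, w_heavy = 10.0, 20.0
weight_ratio = w_heavy / w_light  # 20 kg really is twice 10 kg

print(naive_ratio, round(true_ratio, 3), weight_ratio)
```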
Instrument Design
• Based on the hypotheses identified in the earlier stages of
the research process.
• The most important questions are the ones the researcher
asks before beginning to write the instrument.
• Survey-type instruments can yield three types of
information:
A. Reports of fact: self-disclosure of some objective
information (e.g., age, gender, education, behavior).
B. Ratings of opinion or preference: evaluative
response to a statement (e.g., satisfaction, agreement,
like/dislike).
C. Reports of intended behavior: self-disclosure of
motivation or intention (e.g., likeliness, willingness).
How will administration be accomplished?
A. Self-administered surveys: the subject responds to
printed questions (e.g., group or mail surveys).
Advantages
– Can ask questions with long, complex, or visual response
categories.
– Can ask sequences of similar questions.
– The respondent does not have to share answers with
another person.
Disadvantages
– Careful questionnaire design is required.
– Open-response questions are not useful.
– Good reading and writing skills are needed from
respondents.
– Very little quality control over administration.
B. Other-administered surveys
– The subject responds to questions posed directly by the
researcher (e.g., interview, phone survey).
Advantages
– Most effective in gaining cooperation (both initial
agreement and length of participation).
– Opportunity to answer respondent questions and
ensure the quality of the data (e.g., probe for adequate
answers, ensure all questions are answered).
– Rapport and confidence building are possible.
Disadvantages
– Cost and time requirements.
– Adequate training of staff is needed.
– Accessibility of the sample.
A. Open-ended questions: permit the subject freedom
to answer the question in his or her own words (without
pre-specified alternatives).
B. Close-ended questions: the subject selects from a
list of pre-determined, acceptable responses.

Types of closed-ended questions
1. Checklists: the respondent selects a certain number of
pre-specified categories (nominal data).
2. Two-way (forced choice): the respondent must
select between two alternatives (crude ordinal/nominal),
e.g., Yes or No.
3. Ranked: the respondent must place items in order of
importance or value (ordinal).
4. Multiple-choice (Likert scale): the respondent selects
from a range of alternatives along a pre-specified
continuum (ordinal/interval?).
Strongly Agree = 1, Agree = 2, Neutral = 3, Disagree = 4, Strongly Disagree = 5
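Coding Likert responses numerically, as in the row above, is what makes them analyzable statistically. A minimal sketch with hypothetical responses, using the slide’s coding scheme:

```python
# The slide's coding scheme (Strongly Agree = 1 ... Strongly Disagree = 5)
SCALE = {"Strongly Agree": 1, "Agree": 2, "Neutral": 3,
         "Disagree": 4, "Strongly Disagree": 5}

# Hypothetical responses from five subjects
responses = ["Agree", "Strongly Agree", "Neutral", "Agree", "Disagree"]
scores = [SCALE[r] for r in responses]   # [2, 1, 3, 2, 4]

# Treating the codes as interval data permits a mean; treating them
# strictly as ordinal restricts analysis to order-based summaries such
# as the median -- the "ordinal/interval?" question noted on the slide.
mean_score = sum(scores) / len(scores)
median_score = sorted(scores)[len(scores) // 2]

print(mean_score, median_score)   # 2.4 2
```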

Writing good survey questions
• Good question wording:
– Use simple sentences.
– Avoid double negatives.
– Eliminate vagueness (poorly defined terms).
– Eliminate objectionable or irrelevant questions.

• Pilot Testing
– A preliminary empirical investigation to assess the
reliability and validity of the instrument, and
– to modify the items of the instrument accordingly.
• Reliability of the Instrument
– Deals with the extent to which the instrument yields the
same results on repeated trials.
– Reliability refers to the consistency, stability, or
equivalence of a number of measurements taken using
the same measurement method on the same subject.
– If repeated measurements are highly consistent (even
identical), then the measurement method has a high degree
of reliability.
– If the variations among the repeated measurements are
large, then the reliability of the instrument is low.
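The test-retest notion of reliability described above can be quantified with a correlation between the two sets of measurements. A minimal sketch with invented scores; Pearson’s r is one common choice of reliability coefficient, not the only one.

```python
import math

# Hypothetical test-retest data: the same instrument administered
# to the same five subjects on two occasions.
trial1 = [12, 15, 11, 18, 14]
trial2 = [13, 15, 10, 19, 14]

def pearson(x, y):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(trial1, trial2)
print(round(r, 3))   # close to 1.0 -> highly consistent, i.e., reliable
```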
• Validity of the Instrument
– Validity of an instrument refers to the degree to which a
study accurately reflects or assesses the specific concept
that the researcher is attempting to measure, by
ensuring the data collection instrument’s ability to collect
the intended data fully and appropriately.
– In short, validity is concerned with the study’s success at
measuring what the researchers set out to measure, and
indicates the extent to which the data collected reflect the
phenomena under investigation.

