
Business Research Method

Unit 3

By
Dr. Anand Vyas
Scaling & measurement techniques: Concept
of Measurement:
• Measurement can be defined as the process of assigning numbers to
observations obtained in a research study. The variables associated
with a study are classified into two basic categories:

• Quantitative/ Numeric
• Qualitative / Categorical
Need for Measurement:
• The goal of measurement is to get reliable data with which to
answer research questions and assess theories of change. Inaccurate
measurement can lead to unreliable data, from which it is difficult to
draw valid conclusions.
Problems in measurement in management
research
• Respondent: At times the respondent may be reluctant to express strong negative feelings or it is just
possible that he may have very little knowledge but may not admit his ignorance. All this reluctance is
likely to result in an interview of ‘guesses.’ Transient factors like fatigue, boredom, anxiety, etc. may
limit the ability of the respondent to respond accurately and fully.
• Situation: Situational factors may also come in the way of correct measurement. Any condition which
places a strain on interview can have serious effects on the interviewer-respondent rapport. For
instance, if someone else is present, he can distort responses by joining in or merely by being present.
If the respondent feels that anonymity is not assured, he may be reluctant to express certain feelings.
• Measurer: The interviewer can distort responses by rewording or reordering questions. His behaviour,
style and looks may encourage or discourage certain replies from respondents. Careless mechanical
processing may distort the findings. Errors may also creep in because of incorrect coding, faulty
tabulation and/or statistical calculations, particularly in the data-analysis stage.
• Instrument: Error may arise because of the defective measuring instrument. The use of complex words,
beyond the comprehension of the respondent, ambiguous meanings, poor printing, inadequate space
for replies, response choice omissions, etc. are a few things that make the measuring instrument
defective and may result in measurement errors. Another type of instrument deficiency is the poor
sampling of the universe of items of concern.
• The researcher must recognize that correct measurement depends on successfully addressing all of the
problems listed above. He must, to the extent possible, try to eliminate, neutralize or otherwise deal with
all possible sources of error so that the final results are not contaminated.
• RELIABILITY
• A test must also be reliable. Reliability is the “self-correlation of the test.” It shows
the extent to which the results obtained are consistent when the test is
administered once or more than once on the same sample with a reasonable gap.
Consistency in results obtained in a single administration is the index of
internal consistency of the test, and consistency in results obtained upon testing
and retesting is the index of temporal consistency. Reliability thus includes both
internal consistency and temporal consistency. A test, to be called sound,
must be reliable, because reliability indicates the extent to which the scores
obtained in the test are free from internal defects of standardization that
are likely to produce errors of measurement.
• Types of Reliability:
• (i) Internal reliability
• (ii) External reliability
• Internal Reliability: Internal reliability assesses the consistency of results across
items within a test.
• External Reliability: External reliability refers to the consistency of a measure
from one occasion of use to another (for example, test–retest reliability).
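As a rough illustration of internal consistency, the sketch below estimates Cronbach's alpha for a small set of hypothetical item scores; the respondents, the number of items and the 5-point coding are all assumptions made for the example, not data from the text.

```python
# Illustrative sketch: estimating internal consistency with Cronbach's alpha.
# The item scores below are hypothetical 5-point responses from six respondents.
import numpy as np

items = np.array([
    [4, 5, 4, 5],   # each row = one respondent, each column = one test item
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])

k = items.shape[1]                              # number of items
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scores

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")        # values near 1 indicate high internal consistency
```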
• VALIDITY
• Validity is another prerequisite for a test to be sound. Validity
indicates the extent to which the test measures what it intends to
measure, when compared with some outside independent criterion. In
other words, it is the correlation of the test with some outside criterion.
The criterion should be an independent one and should be regarded as
the best index of the trait or ability being measured by the test. Generally,
the validity of a test depends on its reliability, because a test
which yields inconsistent results (poor reliability) cannot ordinarily be
expected to correlate with an outside independent criterion.
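As a rough illustration, the sketch below treats validity as the correlation between hypothetical test scores and an assumed outside criterion (for example, a supervisor rating); every number here is invented for the example.

```python
# Illustrative sketch: criterion validity as the correlation between test scores
# and an independent outside criterion. All values are hypothetical.
import numpy as np

test_scores = np.array([55, 62, 70, 48, 66, 74, 59, 81])   # scores on the new test
criterion   = np.array([50, 60, 72, 45, 63, 70, 61, 85])   # assumed external criterion

r = np.corrcoef(test_scores, criterion)[0, 1]   # Pearson correlation coefficient
print(f"Validity coefficient r = {r:.2f}")      # closer to 1 = stronger criterion validity
```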
Levels of measurement – Nominal, Ordinal, Interval, Ratio.
Attitude Scaling Techniques: Concept of Scale
• Attitudes are individual mental processes which determine both the
actual and potential response of each person in a social world. An
attitude is always directed toward some object and therefore,
attitude is the state of mind of the individual toward a value.
Rating Scales viz. Likert Scales,
• Rating Scale
• A rating scale is defined as a closed-ended survey question used to
represent respondent feedback in a comparative form for specific
features, products or services. It is one of the most
established question types for online and offline surveys, where
survey respondents are expected to rate an attribute or feature.
The rating scale is a variant of the popular multiple-choice question
and is widely used to gather relative information about a specific topic.
Likert Scales
• A Likert Scale is a scale used to measure the attitude wherein the
respondents are asked to indicate the level of agreement or
disagreement with the statements related to the stimulus objects.
• The Likert Scale was named after its developer, Rensis Likert. It is
typically a five-category response scale ranging from “strongly
disagree” to “strongly agree”. The purpose of a Likert scale is to
identify people's attitude towards the given stimulus objects by
asking the extent to which they agree or disagree with statements about them.
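A minimal sketch of how Likert responses are commonly coded and summed follows; the five labels, the 1–5 coding and the reverse-scored item are assumptions made for illustration.

```python
# Illustrative sketch: coding a 5-point Likert item and computing a summated score.
# Labels, coding and the reverse-scored item are hypothetical.
LIKERT_CODES = {
    "strongly disagree": 1, "disagree": 2, "neutral": 3,
    "agree": 4, "strongly agree": 5,
}

def score_respondent(responses, reverse_items=()):
    """Convert labelled Likert responses to codes and sum them."""
    total = 0
    for i, label in enumerate(responses):
        code = LIKERT_CODES[label.lower()]
        if i in reverse_items:          # negatively worded statement
            code = 6 - code             # reverse the 1-5 code
        total += code
    return total

# One respondent answering four attitude statements (item 2 is negatively worded)
answers = ["agree", "strongly agree", "disagree", "neutral"]
print(score_respondent(answers, reverse_items={2}))  # 4 + 5 + (6-2) + 3 = 16
```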
Semantic Differential Scales,
• The Semantic Differential Scale is a survey or questionnaire rating scale that
asks people to rate a product, company, brand or any “entity” on a
multi-point scale whose answer options are anchored by grammatically
opposite adjectives at each end. For example, love / hate,
satisfied / unsatisfied and likely to return / unlikely to return, with
intermediate options in between.
• Surveys or questionnaires using the Semantic Differential Scale are a
reliable way to get information on people’s emotional attitude towards a
topic of interest.
• A Likert scale provides the participants' agreement or
disagreement with the statements asked. A Semantic Differential scale
provides information on where a participant's view lies
on a continuum between two contrasting adjectives.
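The sketch below shows one common way of summarising semantic differential data: hypothetical 7-point ratings on assumed bipolar adjective pairs are averaged to locate the rated object on each continuum.

```python
# Illustrative sketch: a semantic differential profile for one brand, rated on
# hypothetical 7-point bipolar pairs (1 = negative pole, 7 = positive pole).
import statistics

# Ratings from four hypothetical respondents, one list per adjective pair
ratings = {
    ("unreliable", "reliable"):  [6, 5, 7, 6],
    ("old-fashioned", "modern"): [4, 3, 5, 4],
    ("unfriendly", "friendly"):  [6, 6, 5, 7],
}

# The mean rating per pair locates the brand on each continuum
for (negative, positive), values in ratings.items():
    mean = statistics.mean(values)
    print(f"{negative:>14} ... {positive:<10} mean = {mean:.1f}")
```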
Constant Sum Scales,
• A constant sum scale is a type of question used in a market research
survey in which respondents are required to divide a specific number
of points or percentages as part of a total sum. The allocation of points
details the variance and weight of each category.
• Q: Using 100 points, please assign a number of points to each factor
based on how important each one is to you when buying a home. Your
points must total 100, divided among the factors.
• A: Price, Location, School District, Inside Features, etc.
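A small sketch of how constant sum answers might be checked and summarised follows; the factors, the respondents and their point allocations are hypothetical.

```python
# Illustrative sketch: validating a constant sum allocation and averaging it
# across respondents. Factors and point allocations are hypothetical.
factors = ["Price", "Location", "School District", "Inside Features"]

responses = [
    [40, 30, 20, 10],   # each row must total 100 points
    [25, 35, 25, 15],
    [50, 20, 15, 15],
]

for r in responses:
    assert sum(r) == 100, "each allocation must total exactly 100 points"

# Mean points per factor show the relative weight respondents give each one
means = [sum(col) / len(responses) for col in zip(*responses)]
for factor, mean in zip(factors, means):
    print(f"{factor:<16} {mean:5.1f} points")
```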
Graphic Rating Scales –
• The Graphic Rating Scale is a type of performance appraisal method. In this method,
traits or behaviors that are important for effective performance are listed out
and each employee is rated against these traits. The rating helps employers to
quantify the behaviors displayed by their employees.

• How would you rate the individual in terms of quality of work, neatness and
accuracy?
• (i) Non-Existent: Careless Worker. Tends to repeat similar mistakes
• (ii) Average: Work is sometimes unsatisfactory due to untidiness
• (iii) Good: Work is acceptable. Not many errors
• (iv) Very Good: Reliable worker. Good quality of work. Checks work and observes.
• (v) Excellent: Work is of high quality. Errors are rare, if any. Little wasted effort.
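A minimal sketch of scoring such a graphic rating scale follows; the 1–5 mapping of the anchors and the employee ratings are assumptions made for illustration.

```python
# Illustrative sketch: mapping the five anchors above to a 1-5 score and
# averaging across traits. The employee data are hypothetical.
ANCHORS = {
    "non-existent": 1, "average": 2, "good": 3, "very good": 4, "excellent": 5,
}

# Ratings given to one employee on several performance traits
appraisal = {
    "quality of work": "very good",
    "neatness": "good",
    "accuracy": "excellent",
}

scores = {trait: ANCHORS[label] for trait, label in appraisal.items()}
overall = sum(scores.values()) / len(scores)
print(scores)
print(f"Overall rating: {overall:.1f} / 5")
```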
Ranking Scales – A ranking scale is a survey question tool that
measures people’s preferences by asking them to rank their
views on a list of related items. Using these scales can help your
business establish what matters and what doesn’t matter to
either external or internal stakeholders. You could use ranking
scale questions to evaluate customer satisfaction or to assess
ways to motivate your employees, for example. Ranking scales
can be a source of useful information, but they do have some
disadvantages.
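One simple way to summarise ranking-scale answers is by mean rank, as in the sketch below; the items and the rankings are hypothetical.

```python
# Illustrative sketch: aggregating ranking-scale answers by mean rank.
# Items and rankings are hypothetical; rank 1 = most preferred.
items = ["Salary", "Flexibility", "Recognition", "Career growth"]

# Each respondent ranks all four items (lists give rank positions in item order)
rankings = [
    [1, 3, 4, 2],
    [2, 1, 4, 3],
    [1, 2, 3, 4],
]

totals = {item: 0 for item in items}
for ranking in rankings:
    for item, rank in zip(items, ranking):
        totals[item] += rank

# Lower mean rank = higher overall preference
for item in sorted(items, key=lambda i: totals[i]):
    print(f"{item:<14} mean rank = {totals[item] / len(rankings):.2f}")
```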
Paired comparison & Forced Ranking –
Concept and Application.
• Paired comparison involves pairwise comparison, i.e., comparing
entities in pairs to judge which is preferable or has a greater amount of
some property. L. L. Thurstone first established the scientific approach
to using pairwise comparison for measurement.

• Forced ranking, also known as a vitality curve, is a controversial
management tool which measures, ranks and grades employees’
work performance based on their comparison with each other
instead of against fixed standards.
To apply the Paired Comparison Method, it’s wise to use a large sheet of paper or a flip chart. Follow the steps below
one by one for the analysis to work best.
Step 1: Creating table
Make a table with rows and columns and fill out the options that will be compared to one another in the first row
and the first column (the headers of the rows and columns). The empty cells will stay empty for now. If there are 4
options, there are 4 rows and 4 columns and 16 cells; when there are 3 options, you get 3 rows and 3 columns and
9 cells, etcetera.
Step 2: Assigning letters
Every option is now assigned a letter (A, B, C etcetera). The options are mentioned in the headers of the rows and
columns and each now has a letter so the options can be properly compared to each other.
Step 3: Blocking cells
It’s important to block out the cells in the table in which the same options overlap. Cells that contain a comparison
that has been displayed earlier in the table also have to be blocked out. Every comparison should only be made
once.
Step 4: Comparing options
The cells that are left will now compare the options in the rows to the options in the columns. The letter of the
most important option will be noted. For example, when A is compared to C and C is a more important option, a C
will be written down in that cell.
Step 5: Rating options
The difference in importance will now get a rating that will range, for example, from 0 (no difference) to 3
(important difference).
Step 6: Listing results
The results are now consolidated by adding all values for each of the options in question. If necessary, these totals
can be converted to percentages.
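The sketch below walks through Steps 3 to 6 in code for four hypothetical options; the letters, the judgements entered and the 0–3 ratings are assumptions made for illustration.

```python
# Illustrative sketch of the steps above: compare each unique pair once, record
# the preferred option and an importance rating, then total and convert to
# percentages. Options and judgements are hypothetical.
from itertools import combinations

options = {"A": "Price", "B": "Location", "C": "School District", "D": "Inside Features"}

# Steps 3-5: each unique pair is compared only once; for every pair we note the
# winning letter and a rating of the difference (0 = no difference ... 3 = important).
judgements = {
    ("A", "B"): ("A", 2),
    ("A", "C"): ("A", 1),
    ("A", "D"): ("A", 3),
    ("B", "C"): ("C", 1),
    ("B", "D"): ("B", 2),
    ("C", "D"): ("C", 2),
}

# Step 6: consolidate by adding the ratings credited to each winning option
totals = {letter: 0 for letter in options}
for pair in combinations(sorted(options), 2):
    winner, rating = judgements[pair]
    totals[winner] += rating

grand_total = sum(totals.values()) or 1
for letter, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{options[letter]:<16} {score} points ({100 * score / grand_total:.0f}%)")
```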
