Vaibhav Chawla Session 10
Single-Item Scale or Single-Question Scale
Measuring Food Quality of Zaitoon Restaurant

                                        Strongly                             Strongly
                                        Disagree   Disagree  Neutral  Agree  Agree
Food in Zaitoon Restaurant is tasty         1          2        3       4      5
Multi-Item Scale
Measuring Food Quality of Zaitoon Restaurant

                                        Strongly                             Strongly
                                        Disagree   Disagree  Neutral  Agree  Agree
Food in Zaitoon Restaurant is tasty         1          2        3       4      5
Single-Item Scale or Single-Question Scale
Measuring Food Quality of Zaitoon Restaurant

                                                  Strongly                             Strongly
                                                  Disagree   Disagree  Neutral  Agree  Agree
The food in Zaitoon Restaurant has good quality       1          2        3       4      5
THE MULTI-ITEM SCALE WAS BEST.
Reasons:
(1) It covered the full domain of “food quality,” whereas the first
single-item scale measured only the “taste” aspect.
(2) It captured information about every aspect of “food quality,”
whereas the single-item scale that asked about “good quality”
directly measured the construct without information about its
individual aspects.
Difference between Single-Item
and Multi-Item Scales
Single-item scales measure a construct with only one
item/question; multi-item scales combine responses to
several items into one score.
Why Use Multi-Item Scales?
Multi-item scales can be superior to a single,
straightforward question:
- With a single question, people are less likely to
give consistent answers over time.
- Many measured social characteristics are broad in
scope and simply cannot be assessed with a single
question.
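The scoring rule behind a multi-item scale can be sketched numerically. A minimal Python example (responses are hypothetical, invented for illustration) shows the usual approach: average each respondent's item responses into a single composite score.

```python
# Hypothetical 1-5 Likert responses for a multi-item "food quality"
# scale: rows are respondents, columns are items (taste, freshness, ...).
responses = [
    [5, 4, 4, 5],  # respondent 1
    [3, 3, 2, 3],  # respondent 2
    [4, 5, 4, 4],  # respondent 3
]

# A common scoring rule: average the item responses per respondent.
composite = [sum(row) / len(row) for row in responses]
print(composite)  # [4.5, 2.75, 4.25]
```

Averaging (rather than summing) keeps the composite on the original 1-5 response scale, which makes it easier to interpret.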
Why Use Multi-Item Scales?
An example of a scale measuring introversion:
- I blush easily. (Strongly Agree ..................... Strongly Disagree)
Multi-Item Scale: Example
[Figure: developing a multi-item “Mental Fitness” scale — items such as
“How good are you at Maths?” rated 0–100, subjected to statistical
analysis to produce the final scale]
Multi-Item Scales Example:
Attitude Measurement
What is an Attitude?
Attitude
Affective
• Feelings or emotions
toward an object
Cognitive
• Knowledge and beliefs
Behavioral
• Predisposition to action
• Intentions
• Behavioral expectations
When Designing Attitude
Measures, Theory is Important
Example: Laziness
Example: Laziness (as a behavior)
is defined as delaying activities.
1. I get up late in the morning
2. I always reach my office late
3. Most often, I complete my work only long after the deadline
4. Being inactive is what I enjoy
Example: Laziness (as an attitude) is
defined as the evaluations of, and the
feelings toward, delaying activities
(conceptual definition).
Operational definition:
1. Getting up late is acceptable (Cognitive)
2. Missing deadlines is okay (Cognitive)
3. I think being inactive is an individual’s choice (Cognitive)
4. I like doing nothing (Affective)
When Designing Attitude
Measures, Theory is Important
Example: Salesperson’s
Customer Orientation
Example: Salesperson’s
Customer Orientation (as a behavior)
1. I help my customers select the best product
2. I address the queries of my customers in a polite manner
3. I try to understand the needs of my customers
Example: Salesperson’s Customer
Orientation (as an attitude)
1. A salesperson’s job is to help the customer select the best product (Cognitive)
2. Understanding customer needs is exciting (Affective)
3. I like to help my customers (Affective)
Concept
Operational Definition
• Specifies what researchers must do to measure the concept under investigation
Media Skepticism:
Conceptual Definition
Degree to which people are skeptical about the
reality presented by mass media. Media
skepticism varies across people, from
– those who are mildly skeptical and accept
most of what they see and hear in mass
media, to
– those who completely discount and disbelieve
the facts, values, and portrayal of reality in
mass media.
Media Skepticism:
Operational Definition
Please tell me how true each statement is about
the media. Is it very true, not very true, or not at
all true?
– The program was not very accurate in its
portrayal of the problem.
– Most of the story was staged for
entertainment purposes.
– The presentation was slanted and unfair.
Constitutive (Conceptual) vs.
Measurement (Operational) Definition
Developing Sound Attitude
Measures
1. Specify conceptual/constitutive
definition
2. Specify operational/measurement
definition
3. Perform item analysis
4. Perform reliability checks
5. Perform validity checks
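Steps 3–5 are carried out statistically. As one common example of a reliability check (step 4), here is a minimal sketch of Cronbach's alpha in plain Python, using invented data; this is an illustration of the formula, not a substitute for a statistics package.

```python
# Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / variance(totals)),
# where k is the number of items. Hypothetical data:
# rows = respondents, columns = scale items.
def cronbach_alpha(data):
    k = len(data[0])  # number of items

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in data]) for j in range(k)]
    total_var = var([sum(row) for row in data])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
]
alpha = cronbach_alpha(data)
print(round(alpha, 2))  # 0.92
```

By convention, alpha values above roughly 0.7 are taken to indicate acceptable internal consistency.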
Attitude Measuring Process
• Ranking: rank-order preference
• Rating: estimates the magnitude of a characteristic
• Sorting: arrange or classify concepts
• Choice: selection of a preferred alternative
Ranking Tasks
1. I love my bike
2. My bike is one of my favorite
possessions
3. My bike is fun to use
Example: Attitude Scale Using Rating
Attitude towards the Ad (Cognitive)
The ad …..
1. Was believable
2. Was interesting
3. Was informative
4. Was well-designed
5. Was easy-to-follow
6. Was attention-getting
7. Was clear
Sorting Tasks
Choice Tasks
Scale Evaluation (Fig. 9.5)
Scale Evaluation
Measurement Reliability and Validity
Measurement Accuracy
X_O = X_T + X_S + X_R
where
• X_O = the observed score, X_T = the true score,
X_S = systematic error, X_R = random error
• Validity: the ability of a scale to measure what it
was intended to measure
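The decomposition above can be illustrated with a small simulation (all numbers invented): repeated administrations share the same true score and systematic error, while the random error varies each time.

```python
import random

random.seed(0)
x_true = 4.0   # X_T: the respondent's true score
x_sys = 0.5    # X_S: systematic error (e.g., biased item wording)

# Each administration adds fresh random error X_R.
observed = [x_true + x_sys + random.gauss(0, 0.3) for _ in range(1000)]

mean_obs = sum(observed) / len(observed)
# Random error averages out toward zero across many administrations,
# so the mean observed score approaches X_T + X_S = 4.5; systematic
# error does NOT wash out, which is why it threatens validity.
```

This is the intuition behind the reliability/validity distinction that follows: random error (X_R) hurts reliability, while systematic error (X_S) hurts validity.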
Rulers are Reliable and Valid
Potential Sources of Error in
Measurement
Approaches to Reliability
Assessment
• Test-retest
– identical scale items administered at two
different times to same set of respondents
– assess (via correlation) if respondents
give similar answers
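The correlation check described above can be sketched in a few lines of plain Python; the scores here are invented for illustration.

```python
# Test-retest: the same scale administered to the same respondents
# at two points in time; reliability is assessed via the correlation.
time1 = [4.5, 2.8, 4.0, 3.2, 4.8]  # composite scores, first wave
time2 = [4.4, 3.0, 3.9, 3.1, 4.7]  # same respondents, second wave

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(time1, time2)  # close to +1 => answers are stable over time
```

A correlation near +1 indicates that respondents' answers are stable across administrations, i.e., the scale is reliable in the test-retest sense.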
Approaches to Reliability
Assessment
• Alternative forms
– two equivalent forms of the scale are
constructed
– same respondents are measured at two different
times, with a different form being used each time
– assess (via correlation) if respondents give
similar answers
– Note: this approach is hardly ever practical
Approaches to Validity Assessment
• The validity of a scale may be defined as the extent to which
differences in observed scale scores reflect true differences
among objects on the characteristic being measured, rather
than systematic or random error. Perfect validity requires
that there be no measurement error (X_O = X_T, X_R = 0, X_S = 0).
• Content validity is a subjective but systematic evaluation of
how well the content of a scale represents the measurement
task at hand.
• Criterion validity reflects whether a scale performs as
expected in relation to other variables selected (criterion
variables) as meaningful criteria.
Approaches to Validity Assessment
Construct validity is evidenced if we can establish convergent
validity, discriminant validity, and nomological validity.
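A minimal sketch of how convergent and discriminant validity are checked in practice (all scores invented for illustration): the scale should correlate highly with a measure of a theoretically related construct, and near zero with an unrelated variable.

```python
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

customer_orientation = [4.1, 3.2, 4.8, 2.5, 3.9]  # scale being validated
job_satisfaction = [3.9, 3.0, 4.6, 2.8, 4.0]      # related construct
height_cm = [169, 170, 176, 177, 178]             # unrelated variable

r_convergent = pearson(customer_orientation, job_satisfaction)
r_discriminant = pearson(customer_orientation, height_cm)
# Expect r_convergent to be high (convergent validity) and
# r_discriminant to be near zero (discriminant validity).
```

Nomological validity is assessed similarly, by checking that the scale relates to other constructs in the pattern that theory predicts.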
References
• Prof. N. K. Malhotra’s textbook and slides
• Dr. Michael Hyman’s slides