Lecture 05 Measurement - Understanding Construct Validity - I
• Construct Definition
• Construct Validation
Construct Definition
Measurement Foundations: Validity and Validation
Construct Definitions
•Construct Domain
•Nomological Networks
Construct Domain
Nomological Networks
A nomological network specifies how values of the construct should differ
across cases and conditions. For example, should the construct remain about
the same over time, or is it expected to vary? Some constructs, such as
human intelligence, are expected to be relatively stable over time.
Example
People with more education will have higher expectations and hence
lower computer satisfaction than those with less education.
Construct Validity
Challenges
•Random Errors
•Systematic Errors
Random Errors
Systematic Errors
Items from the measure shown on the previous slide also illustrate two
types of systematic errors that reduce construct validity. Items 5 and
6 ask about satisfaction with the purchasing experience; these are not
part of the researcher's conceptual definition of the construct. A
measure is contaminated if it captures characteristics not
included in the definition of the construct. A measure can
also have systematic errors because it is deficient, that is, when it
does not capture the entire construct domain. This measure is
deficient because it has no items capturing satisfaction with
computer durability.
Construct Validation
Construct validity cannot be assessed directly.
However, procedures are available to help researchers develop
construct-valid measures and to evaluate those measures once
developed. Six such procedures are described here.
•Content Validity
•Reliability
•Convergent Validity
•Discriminant Validity
•Criterion-Related Validity
•Investigating Nomological Networks
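Two of these forms of evidence, convergent and discriminant validity, are commonly examined with correlations: two measures of the same construct should correlate highly, while measures of distinct constructs should correlate more weakly. The sketch below uses invented data for a hypothetical computer-satisfaction study; the variable names and values are illustrative, not from the lecture.

```python
# Hypothetical sketch of convergent and discriminant evidence.
# satisfaction_a and satisfaction_b are two alternate measures of the
# same construct; experience is a measure of a different construct.
# All scores below are made up for illustration.
import numpy as np

satisfaction_a = np.array([4, 2, 5, 3, 1, 4, 2, 5])
satisfaction_b = np.array([5, 2, 4, 3, 2, 4, 3, 5])  # alternate measure, same construct
experience     = np.array([2, 4, 3, 5, 3, 2, 5, 1])  # a different construct

# Convergent evidence: the two satisfaction measures should correlate highly.
convergent = np.corrcoef(satisfaction_a, satisfaction_b)[0, 1]

# Discriminant evidence: satisfaction should relate less strongly to
# the distinct construct than to its alternate measure.
discriminant = np.corrcoef(satisfaction_a, experience)[0, 1]

print(convergent > abs(discriminant))  # → True for this sample
```

The comparison, not the raw correlations, carries the argument: convergent correlations should exceed correlations with measures of other constructs.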
Content Validity
A measure is content valid when experts judge that its items adequately
represent the construct as defined conceptually. A measure is face valid
when its items appear to reflect the construct as defined conceptually.
In contrast to content validation, estimates of face validity are usually
obtained from persons similar to those who serve as research participants.
Reliability
…
Types of Reliability
There are three common contexts in which researchers seek to assess
the reliability of measurement.
1. Internal consistency reliability refers to the similarity of item
scores obtained on a measure that has multiple items. It can be
assessed when items are intended to measure a single construct.
2. Interrater reliability indicates the degree to which a group of
observers or raters provide consistent evaluations.
3. Stability reliability refers to the consistency of measurement
results across time.
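Internal consistency reliability (context 1 above) is often estimated with Cronbach's alpha. A minimal sketch, using made-up item scores rather than any data from the lecture:

```python
# Illustrative sketch: Cronbach's alpha for a multi-item measure of a
# single construct. Rows are respondents, columns are items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of a set of items measuring one construct."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents rating four items on a 1-5 scale (hypothetical data).
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # → 0.96 for this sample; values near 1 indicate similar item scores
```

Because the invented respondents answer the four items consistently, alpha comes out high; items tapping unrelated content would pull it down.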
…
Reliability and Construct Validity
Convergent Validity
Discriminant Validity
Criterion-Related Validity
Example