
Doctor of Philosophy (Management Science)

Designing Organizational Research


Lecture 05
Measurement: Understanding Construct Validity - I

Dr. Muhammad Zahid Iqbal


Associate Professor
Department of Management Sciences
COMSATS Institute of Information Technology, Islamabad
Contents

• Construct Definition

• Construct Validity Challenges

• Construct Validation

Construct Definition

Measurement Foundations: Validity and Validation

Construct Definitions

Defining constructs at a conceptual level is an essential first step in the development of construct-valid measures. The most useful conceptual definitions have two elements:

• Construct Domain
• Nomological Networks


Construct Domain

Useful conceptual definitions identify the nature of the construct by specifying its meaning. This element explains what a researcher has in mind for the construct; it contains a dictionary-like statement that describes the construct domain. It speaks to what is included in the construct. If there is potential confusion about what is not included, this too should be addressed in the definition.


Nomological Networks

This second element specifies how values of the construct should differ across cases and conditions. For example, should the construct remain about the same over time, or is it expected to vary? Some constructs, such as human intelligence, are expected to be relatively stable over time.


It also specifies how the construct of interest relates to other constructs in a broader web of relationships called a nomological network. A nomological network is identical to a conceptual model in form; it differs only in purpose. In contrast to conceptual models, nomological networks are used to draw inferences about constructs and construct validity.


Construct Definition Illustration

Example


Suppose a researcher seeks to measure the satisfaction that owners experience with personal laptop or desktop computers. The researcher may be interested in characteristics that influence satisfaction and thus view it as a dependent variable. Or, the researcher may be interested in consequences of satisfaction with computer ownership and thus view satisfaction as an independent variable. In either event, the researcher defines the construct as follows:


Construct Definition - Personal Computer Satisfaction


Personal computer satisfaction is an emotional response resulting from an evaluation of the speed, durability, and initial price, but not the appearance, of a personal computer. This evaluation is expected to depend on variation in the actual characteristics of the computer (e.g., speed) and on the expectations a participant has about those characteristics. When characteristics meet or exceed expectations, the evaluation is expected to be positive (satisfaction). When characteristics do not come up to expectations, the evaluation is expected to be negative (dissatisfaction).


For example, the nomological network for this construct might specify that people with more education will have higher expectations and hence lower computer satisfaction than those with less education.

Construct Validity Challenges


Construct Validity Challenges


Two major challenges confront construct validity. One involves random errors: completely unsystematic variation in scores. A second, more difficult, challenge involves systematic errors: consistent differences between scores obtained from a measure and the meaning of the construct as defined conceptually.

• Random Errors
• Systematic Errors


Random Errors

Random or unsystematic errors are nearly always present in measurement. Fortunately, there are methods for identifying them and procedures to neutralize their adverse consequences. One such procedure is illustrated by the measure shown in the example of a one-dimensional scale on the next slide. The questionnaire has more than one item related to each characteristic of computer ownership: items 1 and 2 relate to satisfaction with price, items 3 and 4 relate to satisfaction with speed, and so forth.

[Figure: example of a one-dimensional personal computer satisfaction scale, with items 1 and 2 on price, items 3 and 4 on speed, and items 5 and 6 on the purchasing experience]
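To make the logic concrete, here is a minimal simulation sketch (not part of the lecture; the names and values are hypothetical) showing why multiple items per characteristic help neutralize random error: averaging items whose random errors are independent yields a composite that tracks the underlying construct more closely than any single item.

import numpy as np

# Hypothetical illustration: each item = true score + independent random error.
rng = np.random.default_rng(0)
n_respondents, n_items = 500, 4

true_satisfaction = rng.normal(0, 1, n_respondents)            # latent construct
random_error = rng.normal(0, 1, (n_respondents, n_items))      # unsystematic noise
items = true_satisfaction[:, None] + random_error              # observed item scores
composite = items.mean(axis=1)                                 # multi-item scale score

r_single = np.corrcoef(items[:, 0], true_satisfaction)[0, 1]
r_composite = np.corrcoef(composite, true_satisfaction)[0, 1]
print(f"single item vs. true score:   r = {r_single:.2f}")
print(f"four-item composite vs. true: r = {r_composite:.2f}")   # noticeably higher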

Systematic Errors

Items from the measure shown in the previous slide also suggest two types of systematic errors that reduce construct validity. Items 5 and 6 ask about satisfaction with the purchasing experience; these are not part of the researcher's conceptual definition of the construct. A measure is contaminated if it captures characteristics not specifically included in the definition of the construct. A measure can also have systematic errors because it is deficient, that is, when it does not capture the entire construct domain. The measure is deficient because there are no items capturing satisfaction with computer durability.
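A hedged sketch of the two systematic errors, using simulated data and invented variable names: a scale built from the price, speed, and purchasing-experience items (but no durability items) correlates with the extraneous purchasing-experience construct (contamination) and is unrelated to durability (deficiency).

import numpy as np

rng = np.random.default_rng(1)
n = 500
price, speed = rng.normal(size=n), rng.normal(size=n)
durability = rng.normal(size=n)        # inside the construct domain, but not measured
purchase_exp = rng.normal(size=n)      # outside the construct domain

def item(signal):
    # one questionnaire item = construct signal + random error
    return signal + rng.normal(scale=0.5, size=n)

scale = np.mean([item(price), item(price),                           # items 1-2: price
                 item(speed), item(speed),                           # items 3-4: speed
                 item(purchase_exp), item(purchase_exp)], axis=0)    # items 5-6: contamination

print("r(scale, purchasing experience):", round(np.corrcoef(scale, purchase_exp)[0, 1], 2))  # clearly > 0: contamination
print("r(scale, durability):           ", round(np.corrcoef(scale, durability)[0, 1], 2))    # near 0: deficiency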
Construct Validation


Construct Validation
Construct validity cannot be assessed directly. However, there are procedures available to help researchers develop construct-valid measures and to help evaluate those measures once developed. Six such procedures are described here:

• Content Validity
• Reliability
• Convergent Validity
• Discriminant Validity
• Criterion-Related Validity
• Investigating Nomological Networks

Content Validity

A measure is content valid when its items are judged to accurately reflect the domain of the construct as defined conceptually. In content validation, experts in the subject matter of interest ordinarily provide assessments of content validity.

A measure is face valid when its items appear to reflect the construct as defined conceptually. In contrast to content validation, estimates of face validity are usually obtained from persons similar to those who serve as research participants.
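One common way to summarize expert content judgments (not named in the lecture; included here as an illustrative sketch) is the item-level content validity index: the proportion of experts who rate an item as relevant to the construct domain. The ratings below are invented.

import numpy as np

# rows = items, columns = expert judges, values = relevance ratings on a 1-4 scale
ratings = np.array([
    [4, 4, 3, 4, 3],   # item 1
    [3, 4, 4, 3, 4],   # item 2
    [2, 3, 2, 2, 3],   # item 3: experts see weak relevance
])
i_cvi = (ratings >= 3).mean(axis=1)    # proportion of experts rating the item 3 or 4
for i, cvi in enumerate(i_cvi, start=1):
    print(f"item {i}: I-CVI = {cvi:.2f}")   # values near 1.0 support content validity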

Reliability

Reliability refers to the consistent variance of a measure; it thus indicates the degree to which measurement scores are free of random errors. Reliability statistics provide estimates of the proportion of the total variability in a set of scores that is true, or systematic.
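A minimal sketch of that idea in classical test theory terms, with hypothetical variances: reliability is the share of observed-score variance that is true (systematic) rather than random.

import numpy as np

rng = np.random.default_rng(2)
true_scores = rng.normal(0, 1.0, 10_000)    # systematic (true-score) variance ~ 1.00
errors = rng.normal(0, 0.5, 10_000)         # random-error variance ~ 0.25
observed = true_scores + errors

reliability = true_scores.var() / observed.var()
print(f"estimated reliability = {reliability:.2f}")   # approx 1.00 / 1.25 = 0.80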



Types of Reliability
There are three common contexts in which researchers seek to assess the reliability of measurement.
1. Internal consistency reliability refers to the similarity of item scores obtained on a measure that has multiple items. It can be assessed when the items are intended to measure a single construct (see the sketch after this list).
2. Interrater reliability indicates the degree to which a group of observers or raters provide consistent evaluations.
3. Stability reliability refers to the consistency of measurement results across time.
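As a sketch of the first context, internal consistency is often estimated with Cronbach's alpha; the implementation below follows the standard formula and uses simulated item scores, so the numbers are illustrative only.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores for one construct."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(3)
true_score = rng.normal(size=(300, 1))
item_scores = true_score + rng.normal(scale=0.8, size=(300, 4))   # four items, one construct
print(f"Cronbach's alpha = {cronbach_alpha(item_scores):.2f}")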


Reliability and Construct Validity

Reliability speaks only to a measure's freedom from random errors. It does not address systematic errors involving contamination or deficiency. Reliability is thus necessary for construct validity but not sufficient. It is necessary because unreliable variance must be construct invalid. It is not sufficient because systematic variance may be contaminated and because reliability simply does not account for deficiency. In short, reliability addresses only whether scores are consistent; it does not address whether scores capture a particular construct as defined conceptually.

Convergent Validity

Convergent validity is present when there is a high correspondence between scores from two or more different measures of the same construct.

Evidence of convergent validity adds to a researcher's confidence in the construct validity of the measures.
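A brief simulated sketch (measure names are hypothetical): two different measures of the same construct, for example a questionnaire and an interview rating of computer satisfaction, should correlate highly if convergent validity holds.

import numpy as np

rng = np.random.default_rng(4)
satisfaction = rng.normal(size=400)                                 # latent construct
questionnaire = satisfaction + rng.normal(scale=0.6, size=400)      # measure 1
interview_rating = satisfaction + rng.normal(scale=0.6, size=400)   # measure 2

r = np.corrcoef(questionnaire, interview_rating)[0, 1]
print(f"convergent correlation: r = {r:.2f}")   # a high r supports convergent validity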


Discriminant Validity

Discriminant validity is inferred when scores from measures of different constructs do not converge. It thus provides information about whether scores from a measure of a construct are unique rather than contaminated by other constructs.

An investigation of discriminant validity is particularly important when an investigator develops a measure of a new construct that may be redundant with other, more thoroughly researched constructs.
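A companion sketch for discriminant validity, again with simulated data and invented construct names: correlations between measures of different constructs should be clearly lower than correlations between measures of the same construct.

import numpy as np

rng = np.random.default_rng(5)
n = 400
satisfaction = rng.normal(size=n)
brand_loyalty = rng.normal(size=n)     # a different construct, unrelated here by design

sat_measure_1 = satisfaction + rng.normal(scale=0.6, size=n)
sat_measure_2 = satisfaction + rng.normal(scale=0.6, size=n)
loyalty_measure = brand_loyalty + rng.normal(scale=0.6, size=n)

r_same = np.corrcoef(sat_measure_1, sat_measure_2)[0, 1]
r_diff = np.corrcoef(sat_measure_1, loyalty_measure)[0, 1]
print(f"same construct:      r = {r_same:.2f}")   # high
print(f"different construct: r = {r_diff:.2f}")   # near zero supports discriminant validity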


Criterion-Related Validity

Criterion-related validity is present when scores on a measure are related to scores on another measure that better reflects the construct of interest.

The term criterion-related validity refers to the fact that certain criteria are needed in order to determine whether the measure's results accurately describe the empirical situation.


Example

An HR manager is interested in developing a measure of a construct that represents the effectiveness of employees performing some complex task. Criterion-related validity can be assessed by comparing "scores obtained from supervisors" with "scores obtained from a panel of job experts" who carefully observe a small sample of employees over a two-week period. The manager reasons that the job experts provide valid assessments of employee performance. A strong relationship between supervisor and job-expert scores (criterion-related validity) provides evidence that supervisor scores can be used for the entire group of employees performing this task.
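A small simulated sketch of the HR example (the data and scales are invented): supervisor ratings are correlated with ratings from the panel of job experts, which the manager treats as the criterion.

import numpy as np

rng = np.random.default_rng(6)
true_performance = rng.normal(size=40)                               # small observed sample
expert_panel = true_performance + rng.normal(scale=0.3, size=40)     # criterion scores
supervisor = true_performance + rng.normal(scale=0.7, size=40)       # measure being validated

r = np.corrcoef(supervisor, expert_panel)[0, 1]
print(f"criterion-related validity estimate: r = {r:.2f}")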

Investigating Nomological Networks

Nomological networks have been described as relationships between a construct under measurement consideration and other constructs. The left-hand side of the figure shown on the next slide depicts a situation in which three constructs are associated with construct A. The right-hand side of the figure is a mirror image showing the corresponding measures and the relationships between the measurement results. If the theoretical relationships on the left-hand side match the empirical relationships on the right-hand side, the measure 'a' has construct validity.
[Figure: a nomological network of constructs (left) mirrored by the corresponding relationships among measurement results (right)]
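A simple sketch of how such a comparison might be run (construct names, predicted signs, and data are all hypothetical): the theory states the expected direction of each relationship involving measure 'a', and the observed correlations are checked against those predictions.

import numpy as np

rng = np.random.default_rng(7)
n = 300
measure_a = rng.normal(size=n)                                        # focal measure 'a'
expectations = -0.4 * measure_a + rng.normal(scale=0.9, size=n)       # related measure 'b'
repurchase_intent = 0.5 * measure_a + rng.normal(scale=0.9, size=n)   # related measure 'c'

predicted_signs = {"expectations": -1, "repurchase_intent": +1}       # from the nomological network
observed_r = {"expectations": np.corrcoef(measure_a, expectations)[0, 1],
              "repurchase_intent": np.corrcoef(measure_a, repurchase_intent)[0, 1]}

for name, sign in predicted_signs.items():
    match = np.sign(observed_r[name]) == sign
    print(f"{name}: predicted sign {sign:+d}, observed r = {observed_r[name]:+.2f}, pattern match = {match}")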
