
SGHE5033 ASSESSMENT IN HIGHER EDUCATION
(Test Validity)

Prof. Dr. Arsaythamby Veloo
School of Education
UUM College of Arts and Sciences
Goodness of Data (overview)

• Reliability (accuracy in measurement)
• Validity (are we measuring the right thing?)
  * Logical (content) validity: face validity
  * Criterion-related validity: predictive, concurrent
  * Congruent (construct) validity: convergent, discriminant

Test Validity

• Construct validity
• Content validity / face validity (Internal)
• Criterion-related validity (External)

Construct validity
• Translation validity
  * Face validity
  * Content validity
• Criterion-related validity
  * Predictive validity
  * Concurrent validity
  * Convergent validity
  * Discriminant validity
Definition
• Validity: does the test measure what it is supposed to measure?

• Validity refers to the accuracy of a measure.

• It is the extent to which a measuring instrument actually measures the underlying concept it is supposed to measure.
Definition (cont)
• It refers to the extent of matching, congruence, or “goodness of fit” between an operational definition and the concept it is purported to measure.

• An instrument is said to be valid if it taps the concept it is supposed to measure. It is designed to answer the question: is it true?
Content validity (Face validity)
• This is the extent to which a measuring instrument reflects a specific domain of content.

• It can also be viewed as the sampling adequacy of the content of the phenomenon being measured.
Content validity (cont)
• This type of validity is often used in the assessment of various educational and psychological tests.

• Content validation, then, is essentially judgmental.
Problems with Content Validity
• It is difficult to specify the full domain of content relevant to a particular measurement situation.

• There is no agreed-upon criterion for determining content validity.
Criterion-Related Validity

• This is at issue when the purpose is to use an instrument to estimate some important form of behavior that is external to the measuring instrument itself, the latter being referred to as the criterion.
Criterion-Related Validity (cont)

• A test used to select students for special programs of study in high school is valid only to the extent that it actually predicts performance in those programs.
Criterion-Related Validity

• Concurrent validity

• Predictive validity
Concurrent Validity
• Refers to the ability of a measure to accurately predict the current situation or status of an individual.

• The instrument being assessed is compared with some already existing criterion, such as the results of another measuring device.
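
In practice, concurrent validity is usually summarized as a correlation between the instrument under assessment and the existing criterion. The sketch below is a minimal, hypothetical illustration in Python; the scores, sample size, and instrument names are invented and not taken from the slides.

```python
# Hypothetical illustration: concurrent validity is typically examined by
# correlating scores from the new instrument with scores from an already
# existing criterion measure collected at roughly the same time.
# All scores below are invented for illustration.
import numpy as np

new_instrument = np.array([55, 62, 70, 48, 81, 66, 59, 73, 90, 44], dtype=float)
established_test = np.array([58, 60, 72, 50, 79, 68, 55, 75, 88, 47], dtype=float)

# Pearson correlation between the two sets of scores
r = np.corrcoef(new_instrument, established_test)[0, 1]
print(f"Concurrent validity coefficient: r = {r:.2f}")
# A high positive r is evidence that the new instrument captures the same
# current status as the established criterion.
```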
Predictive Validity
• This is where an instrument is used to predict some future state of affairs.

• Examples here are the various educational tests used for selection purposes in different occupations and schools: the SAT, the GRE, etc.
Predictive Validity (cont)

• If people who score high on the SAT or GRE do better in college than low scorers, then the SAT or GRE is presumably a valid measure of scholastic aptitude (in the case of the SAT).
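
As a rough sketch of the reasoning above, predictive validity is commonly quantified by correlating the test scores with the later criterion (here, first-year GPA) and, if desired, fitting a simple prediction line. The numbers below are invented for illustration and do not come from the slides.

```python
# Hypothetical illustration of predictive validity: admission-test scores
# collected now are correlated with a criterion observed later (first-year
# GPA).  All scores are invented for illustration.
import numpy as np

sat_scores = np.array([1100, 1250, 1320, 980, 1480, 1210, 1050, 1390], dtype=float)
first_year_gpa = np.array([2.8, 3.1, 3.4, 2.5, 3.8, 3.0, 2.7, 3.6])

# Validity coefficient: correlation between the test and the later criterion
r = np.corrcoef(sat_scores, first_year_gpa)[0, 1]

# A simple least-squares line lets the test scores be used for prediction
slope, intercept = np.polyfit(sat_scores, first_year_gpa, 1)
predicted_gpa = slope * 1300 + intercept

print(f"Predictive validity coefficient: r = {r:.2f}")
print(f"Predicted first-year GPA for an SAT score of 1300: {predicted_gpa:.2f}")
```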
Predictive Validity (cont)

• The prison system uses this approach to assess which offenders are less likely to reoffend, drawing on factors such as age, type of crime, family background, etc.
Problems with Criterion-Related Validity
• From the definition of criterion-related validity, it can be inferred that the degree of criterion-related validity depends on the extent of the correspondence between the test and the criterion.
Problems with Criterion-Related Validity (cont)
• Most measures in the social sciences have no well-delimited, relevant criterion variables against which they can reasonably be evaluated.
Construct Validity

• This is evaluated by examining the degree to which certain explanatory concepts (constructs), derived from theory, account for performance on a measure.
Construct validity (cont)

• A type of validity which depicts how a particular measure relates to other measures, consistent with theoretically derived hypotheses concerning the concepts or constructs being measured.
Construct Validity (cont)
• The process of construct validation is theory-laden.

Types of Construct Validity

• Convergent validity

• Discriminant validity
Convergent Validity

• This is based on the idea that two instruments that are valid measures of the same concept should correlate rather highly with one another, or yield similar results, even though they are different instruments.
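
A minimal simulated sketch of this idea: if two hypothetical instruments both tap the same underlying concept, their scores should correlate highly even though the instruments differ. The simulation below assumes a simple "true score plus error" model, which is an illustrative assumption, not part of the slides.

```python
# Hypothetical illustration of convergent validity: two different instruments
# designed to measure the same concept should yield highly correlated scores.
# The latent "trait" and both instruments are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200

trait = rng.normal(50, 10, n)               # simulated true level of the concept
instrument_a = trait + rng.normal(0, 4, n)  # instrument A = trait + its own error
instrument_b = trait + rng.normal(0, 4, n)  # instrument B = trait + its own error

r = np.corrcoef(instrument_a, instrument_b)[0, 1]
print(f"Correlation between the two instruments: r = {r:.2f}")  # expected to be high
```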
Discriminant Validity

• This is based on the idea that two instruments, although similar to one another, should not correlate highly if they measure different concepts.
Discriminant Validity (cont)

• This approach thus involves the simultaneous assessment of numerous instruments (multimethod) and numerous concepts (multitrait) through the computation of intercorrelations.
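
A hedged sketch of the multitrait-multimethod idea described above: with two simulated traits each measured by two methods, the intercorrelation matrix should show high same-trait correlations (convergent evidence) and low cross-trait correlations (discriminant evidence). The traits, methods, and data below are all hypothetical.

```python
# Hypothetical multitrait-multimethod style check: two distinct traits, each
# measured by two methods.  Same-trait correlations should be high
# (convergent evidence); cross-trait correlations should be low
# (discriminant evidence).  All data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 300
trait_1 = rng.normal(0, 1, n)   # one simulated construct
trait_2 = rng.normal(0, 1, n)   # an unrelated simulated construct

scores = np.column_stack([
    trait_1 + rng.normal(0, 0.5, n),  # trait 1 measured by method A
    trait_1 + rng.normal(0, 0.5, n),  # trait 1 measured by method B
    trait_2 + rng.normal(0, 0.5, n),  # trait 2 measured by method A
    trait_2 + rng.normal(0, 0.5, n),  # trait 2 measured by method B
])

labels = ["T1-A", "T1-B", "T2-A", "T2-B"]
corr = np.corrcoef(scores, rowvar=False)   # 4 x 4 intercorrelation matrix

print("      " + "  ".join(f"{lab:>5}" for lab in labels))
for lab, row in zip(labels, corr):
    print(f"{lab:>5} " + "  ".join(f"{v:5.2f}" for v in row))
```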
Problems with Construct Validity
• The process of validation is theory-laden. It is thus almost impossible to ‘validate’ a measure of a concept unless an established theoretical network surrounds it.
The Reliability-Validity Relationship
• An instrument that is valid is always reliable.
• An instrument that is not valid may or may not be reliable.
• An instrument that is reliable may or may not be valid.
The Reliability-Validity Relationship (cont)
• An instrument that is not reliable is never valid.
• Reliability is a necessary, but not sufficient, condition for good measurement.
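
A small simulated illustration of why reliability does not guarantee validity: an instrument can give highly consistent scores across administrations while barely relating to the construct it is supposed to measure. The variables and noise levels below are assumptions made purely for illustration.

```python
# Hypothetical simulation of "reliable but not valid": the instrument gives
# very consistent scores across two administrations (high test-retest
# reliability) yet hardly relates to the construct it is supposed to measure
# (low validity).  Everything here is simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 500

intended_construct = rng.normal(0, 1, n)   # what the test claims to measure
irrelevant_trait = rng.normal(0, 1, n)     # what it actually taps

administration_1 = irrelevant_trait + rng.normal(0, 0.2, n)
administration_2 = irrelevant_trait + rng.normal(0, 0.2, n)

reliability = np.corrcoef(administration_1, administration_2)[0, 1]
validity = np.corrcoef(administration_1, intended_construct)[0, 1]

print(f"Test-retest reliability: r = {reliability:.2f}   (high)")
print(f"Correlation with the intended construct: r = {validity:.2f}   (near zero)")
```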
THANK YOU
