
RELIABILITY OF A TEST

Reliability refers to the consistency with which a test yields the same rank for individuals who
take it more than once (Kubiszyn and Borich, 2007). That is, it describes how consistent test
results, or other assessment results, are from one measurement to another. A test is reliable
when it yields practically the same scores on two administrations to the same group of
students, with a reliability index of 0.60 or above.

The reliability of a test can be determined by means of the Pearson correlation coefficient,
the Spearman-Brown formula, and the Kuder-Richardson formulas.

Factors Affecting Reliability of a Test

1. Length of the test
2. Moderate item difficulty
3. Objective scoring
4. Heterogeneity of the student group
5. Limited time

Four Methods of Establishing Reliability of a Test

1. Test-retest Method. A type of reliability determined by administering the same test
   twice to the same group of students, with a time interval between the two
   administrations. The two sets of scores are correlated using the Pearson
   product-moment correlation coefficient (r), and this correlation coefficient provides
   a measure of stability. It indicates how stable the test results are over a period
   of time (see the first sketch after this list).
2. Equivalent Form. A type of reliability determined by administering two different
   but equivalent forms of the test (also called parallel or alternate forms) to the same
   group of students in close succession. The equivalent forms are constructed from the
   same set of specifications, so they are similar in content, type of items, and
   difficulty. The two sets of scores are correlated using the Pearson product-moment
   correlation coefficient (r), and this correlation coefficient provides a measure of
   the degree to which generalizations about students' performance from one assessment
   to another are justified. It measures the equivalence of the two forms; the same
   Pearson computation sketched after this list applies here.

3. Split-half Method. Administer the test once and score two equivalent halves of it.
   To split the test into equivalent halves, the usual procedure is to score the
   even-numbered and the odd-numbered test items separately, which provides two
   scores for each student. The two half-scores are correlated, and the correlation is
   then adjusted with the Spearman-Brown formula; the resulting coefficient provides a
   measure of internal consistency. It indicates the degree to which consistent results
   are obtained from the two halves of the test (see the second sketch after this list).

4. Kuder-Richardson Formula. Administer the test once, score the total test, and apply
   the Kuder-Richardson formula. The Kuder-Richardson 20 (KR-20) formula is applicable
   only in situations where students' responses are scored dichotomously, and therefore
   is most useful with traditional test items scored as right or wrong, true or false,
   or yes or no. KR-20 estimates of reliability indicate the degree to which the items
   in the test measure the same characteristic. (It is a statistical procedure for
   estimating coefficient alpha; a correlation coefficient is given.) Another formula
   for estimating the internal consistency of a test is KR-21, which requires only the
   number of items, the mean, and the variance of the total scores, but adds the
   assumption that all items are of equal difficulty (see the third sketch after this
   list).
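To make the computations concrete, here is a minimal sketch in Python of the test-retest
correlation. The pearson_r helper and the score lists are hypothetical illustrations (not
from the source text); any statistics package could be used instead.

    def pearson_r(x, y):
        # Pearson product-moment correlation between paired score lists.
        n = len(x)
        mean_x = sum(x) / n
        mean_y = sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        var_x = sum((a - mean_x) ** 2 for a in x)
        var_y = sum((b - mean_y) ** 2 for b in y)
        return cov / (var_x * var_y) ** 0.5

    # Hypothetical scores of six students on the first and second
    # administration of the same test.
    first_admin  = [35, 42, 28, 47, 39, 31]
    second_admin = [37, 40, 30, 45, 41, 29]

    r = pearson_r(first_admin, second_admin)
    print(f"Test-retest reliability (stability): r = {r:.2f}")  # about 0.95

Since the coefficient here exceeds 0.60, the test would be judged reliable. The same
pearson_r call works for equivalent-forms scores: correlate Form A totals with Form B totals.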
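The split-half method can be sketched the same way, assuming a hypothetical matrix of
dichotomously scored item responses (one row per student). The half-test correlation is
stepped up to full-test length with the Spearman-Brown formula,
r_full = 2 * r_half / (1 + r_half); the pearson_r helper from the previous sketch is reused.

    # Hypothetical 0/1 item responses: five students, eight items.
    items = [
        [1, 1, 0, 1, 1, 0, 1, 1],
        [1, 0, 0, 1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1, 1, 1, 0],
        [0, 1, 0, 0, 1, 0, 0, 1],
        [1, 1, 1, 1, 0, 1, 1, 1],
    ]

    # Score the odd-numbered and even-numbered items separately
    # (1-based positions 1, 3, 5, ... and 2, 4, 6, ...).
    odd_half  = [sum(row[0::2]) for row in items]
    even_half = [sum(row[1::2]) for row in items]

    r_half = pearson_r(odd_half, even_half)   # correlation of the two halves
    r_full = (2 * r_half) / (1 + r_half)      # Spearman-Brown step-up
    print(f"Half-test r = {r_half:.2f}, full-test estimate = {r_full:.2f}")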
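Finally, a sketch of KR-20 and KR-21 on the same hypothetical item matrix. KR-20 needs the
proportion p of students answering each item correctly (and q = 1 - p); KR-21 needs only the
number of items, the mean, and the variance of the total scores.

    k = len(items[0])                      # number of items
    n = len(items)                         # number of students
    totals = [sum(row) for row in items]   # each student's total score

    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n   # population variance

    # KR-20: sum p*q over items, where p is the proportion of students
    # answering the item correctly and q = 1 - p.
    pq_sum = 0.0
    for j in range(k):
        p = sum(row[j] for row in items) / n   # item difficulty
        pq_sum += p * (1 - p)
    kr20 = (k / (k - 1)) * (1 - pq_sum / var_t)

    # KR-21: uses only k, the mean, and the variance, under the added
    # assumption that all items are of equal difficulty.
    kr21 = (k / (k - 1)) * (1 - mean_t * (k - mean_t) / (k * var_t))

    print(f"KR-20 = {kr20:.2f}, KR-21 = {kr21:.2f}")

With these made-up data, KR-20 comes out at about 0.60, and KR-21, which penalizes unequal
item difficulties, comes out somewhat lower.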
RELIABILITY COEFFICIENT
The reliability coefficient indicates the consistency of test scores; the lower the
coefficient, the greater the amount of measurement error associated with the scores.

Description of Reliability Coefficient


a. The reliability coefficient ranges from 0 to 1.0.
b. An acceptable value is 0.60 or higher.
c. The higher the value of the reliability coefficient, the more reliable the overall
   test scores.
d. A higher reliability coefficient indicates that the test items measure the same thing.
