
Republic of the Philippines

NUEVA VIZCAYA STATE UNIVERSITY


Bayombong, Nueva Vizcaya
INSTRUCTIONAL MODULE
IM NO.: IM-PROFED6-1STSEM-2021-2022

COLLEGE OF TEACHER EDUCATION


Bayombong Campus

DEGREE PROGRAM: BSEd; BTLE; BPE          COURSE NO.: Professional Education 6
SPECIALIZATION: A-E; B-C                 COURSE TITLE: Assessment in Learning 1
YEAR LEVEL: 3; 2; 3     TIME FRAME: 4 Hrs.     WK NO.: 3-4     IM NO.: 3

I. UNIT 1: INTRODUCTION TO THE COURSE

II. LESSON TITLE: DESIRABLE CHARACTERISTICS OF TESTS

III. LESSON OVERVIEW

Tests must measure what they are intended to measure. They must be aligned with the desired
learning outcomes/objectives as well as with the instructional processes. The levels of
learning and the content of the lessons must match the tests that measure the attainment of the
desired competencies.

Through this lesson, the students are expected to determine which practices are
appropriate in test construction, administration, and scoring. They should be able to
distinguish proper from improper practices and resolve what to do and what not to do when they
become teachers.

IV. DESIRED LEARNING OUTCOMES

Given situations and samples of teacher-made tests, the students must be able to:
1. critique them based on the presented characteristics and principles of a good test; and
2. resolve what to do and what not to do when they become teachers.

V. LESSON CONTENT

DESIRABLE CHARACTERISTICS OF TESTS


(Qualities that every measurement / evaluation device should possess)
1. Validity – the degree to which an evaluation device measures what it is intended to measure
- the degree to which a test is capable of achieving certain aims
- the accuracy of specific predictions made from its scores
- the extent to which inferences, conclusions, and decisions made on the basis of test
scores are appropriate and meaningful (test validity)

Kinds of Validity

a) Content Validity – it shows how well the content of the test samples the classes of situations
or subject matter about which conclusions are to be drawn.
- The test should be a representative sample of both the topics and the cognitive processes
of a given course or unit.
- All topics must be represented in correct proportion on the test (a simple check of these
proportions is sketched after this list).
- It should measure what the teachers are actually teaching.
- It should reflect what is in the curriculum.
- There is no numerical expression for content validity; it is judged through inspection of
the items.
- It is especially important for achievement and proficiency measures.
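
In practice, the proportion check mentioned in the list above is often done with a table of
specifications. The short Python sketch below, using hypothetical topic names and item counts
(none of them from this module), compares the intended share of each topic with its actual
share in a draft test:

# A minimal sketch (not part of the module) of the proportion check referred to
# above. The topic names, intended shares, and the 20-item draft test are all
# hypothetical.

intended_share = {"Topic A": 0.40, "Topic B": 0.35, "Topic C": 0.25}

# Topic tag of each item in the draft test.
item_topics = ["Topic A"] * 10 + ["Topic B"] * 6 + ["Topic C"] * 4

total_items = len(item_topics)
for topic, share in intended_share.items():
    actual = item_topics.count(topic) / total_items
    flag = "" if abs(actual - share) <= 0.05 else "  <-- adjust number of items"
    print(f"{topic:8s} intended {share:.0%}  actual {actual:.0%}{flag}")

A topic flagged by such a check would call for adding or dropping items before the test is
finalized.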

b) Construct Validity
- It is used to measure the degree to which the individual manifests an abstract
psychological trait or ability.
- Psychological constructs are unobservable, e.g., intelligence, anxiety, aptitude,
motivation.
- The measurement of intelligence is a classic example of construct validation.
- An excellent measure of intelligence is the Stanford-Binet Intelligence Test.

c) Criterion-related Validity
- The relationship between test scores and some external measure of the individual (the
criterion); a sketch of the corresponding validity coefficient follows this list.

Predictive Validity
- As to procedure or time, the criterion measure is collected at a later date; the
evidence is shown later.
- As to purpose, it is used to predict some subsequent measure of performance.

Concurrent Validity
- As to procedure or time, criterion data are collected at approximately the same time as the
test data.
- As to purpose, it asks whether a test score can be substituted for some less
efficient way of gathering criterion data.
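
Criterion-related validity is commonly summarized as a validity coefficient, the correlation
between the test scores and the criterion scores. The sketch below (with hypothetical scores,
not data from this module) illustrates the computation for a predictive-validity setup:

# A minimal sketch (not part of the module) of a criterion-related validity
# coefficient: the Pearson correlation between test scores and an external
# criterion. All scores below are hypothetical; in a predictive-validity study
# the criterion (here, later course grades) is gathered after the test.
from statistics import correlation  # Python 3.10+

entrance_test = [82, 75, 90, 68, 88, 79, 95, 71]   # test scores
later_grades  = [80, 74, 88, 72, 90, 79, 93, 70]   # criterion collected later

r = correlation(entrance_test, later_grades)
print(f"Criterion-related (predictive) validity coefficient r = {r:.2f}")

The closer the coefficient is to 1.0, the better the test stands in for the criterion; for
concurrent validity, the same computation is done with criterion data gathered at about the same
time as the test.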

Factors considered to influence the validity of a test in general:


1) Appropriateness of test items – thinking skills cannot be measured by items that measure
only knowledge of facts. A test that is valid for measuring knowledge of facts is invalid for
measuring skill in problem solving.

2) Directions – unclear directions tend to reduce validity. Directions that do not clearly
indicate how the pupils should answer and record their answers affect the validity of the
test items.

3) Reading vocabulary and sentence structures – when the reading vocabulary and sentence
structures are too difficult or complicated, the test becomes a test of reading or
intelligence rather than of what it is intended to measure.

4) Difficulty of items – when the test items are too difficult or too easy, they cannot
discriminate between the bright and the slow pupils, and validity is lowered. When items do
not match the difficulty level specified by the instructional objectives, their validity is
likewise reduced.

5) Construction of test items – when items unintentionally provide clues, the test becomes
a test of detecting clues. Also, when items are ambiguous, the test becomes a test of
interpretation. Ambiguous items confuse pupils and do not yield a true measure.

6) Length of the test – A test should be of sufficient length to measure what it is supposed to
measure. A test that is too short cannot adequately sample the performance we want to
measure.

7) Arrangement of items – test items should be arranged according to difficulty, with the
easiest items first. Difficult items, when encountered early in the test, may cause mental
blocks. They may also take up too much of the pupils’ time, depriving them of the
opportunity to answer the other items. Improper arrangement also affects pupils’
motivation in answering the questions.


8) Pattern of answers – when examinees can detect a pattern in the correct answers, such as
True, True, False, False or A, B, A, B, C, D, C, D, they are liable to guess the answers,
and this lowers validity.

2. Reliability – refers to consistency.


- It refers to the consistency of scores obtained by the same person when retested with the
same test or with an equivalent form of the test.
- It is the degree to which measurements of content knowledge or cognitive ability are
consistent each time a test is given (one common way of estimating this consistency is
sketched below).
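
One common way to estimate reliability from a single administration, related to (but distinct
from) the test-retest and equivalent-forms designs described above, is the split-half method:
correlate pupils' scores on two halves of the test, then adjust that correlation to full test
length with the Spearman-Brown formula. The sketch below uses hypothetical right/wrong responses
on a 10-item test (not data from this module):

# A minimal sketch (not part of the module) of split-half reliability with the
# Spearman-Brown correction. Each inner list holds one pupil's hypothetical
# scores (1 = correct, 0 = wrong) on a 10-item test.
from statistics import correlation  # Python 3.10+

responses = [
    [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
    [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 0, 1, 0],
]

# Score each pupil on the odd-numbered and the even-numbered items.
odd_scores  = [sum(row[0::2]) for row in responses]
even_scores = [sum(row[1::2]) for row in responses]

r_half = correlation(odd_scores, even_scores)   # correlation of the two halves
r_full = (2 * r_half) / (1 + r_half)            # Spearman-Brown correction to full length
print(f"Half-test r = {r_half:.2f}; estimated full-test reliability = {r_full:.2f}")

The Spearman-Brown step also hints at why test length appears among the factors below: other
things being equal, a longer test of comparable items yields a higher reliability estimate.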

Factors Affecting Reliability


(Reasons for inconsistency in an individual’s scores)

1) Nature of the test


a) length of the test
b) quality of test items / difficulty
c) objectivity – the degree to which personal opinion is eliminated from scoring

2) Conditions under which the test is administered


a) physical condition – room temperature, lighting, seating arrangement
b) psychological factors – emotional stress, fatigue, practice, moods, mental set, level
of motivation.
c) distractions and accidents – breaking a pencil, defective test paper, cheating, and
the attitude of the person giving the test.
d) scoring inaccuracy – bias, mood of the scorer.

3) Poor Sampling – inclusion of certain items and exclusion of others.

3. Administrability / Ease of Administration

- A good test can be administered with ease, clarity, and uniformity: there must be ease in
giving and taking the exam and in correcting and scoring it.

- To secure uniformity:
- test procedures must be standardized (provide detailed directions for administering
the test, time limits, and oral instructions);
- testing conditions must be controlled.

- To ensure administrability:
- directions must be simple, clear, and concise;
- each type of test is introduced by sample items;
- as to test format, there must be no difficulty in reading, recording answers, and
moving from one part to the next;
- the size of the page, the length and size of lines, and the illustrations should
facilitate test administration.

4. Scorability / Ease of Scoring


- A good test is easy to score such that the test results are easily available to both the student
and the teacher; thus, proper remedial and follow-up measures and curricular adjustments
can be made.
- Tests are easy to score when:
- directions for scoring are clear;
- the scoring key is simple;
- provisions for answer sheets are made;
- machine scoring or stencil scoring is possible.

5. Interpretability
- Test results can be useful only when they are properly evaluated (and they can be
evaluated only after they are interpreted).
- If they are interpreted correctly and applied, they can support sound educational
decisions.

6. Economy
- A test should be economical in terms of both time and money.
- use answer sheets;
- avoid reusing tests.
- Test validity and reliability should not be sacrificed for economy.

7. Utility
- It must serve a definite need in the situation in which it is used.
- Make use of the results to improve the pupils’ abilities.

VI. LEARNING ACTIVITY

Group Activity: From your readings and the virtual lectures, work on the following:

1. Given a sample of a teacher-made test, critique it using the desirable characteristics of
tests as your guide, particularly validity and administrability (focusing on format). Use the
given template (Activity Nos. 3.1 and 3.2).

VII. ASSIGNMENT

Group Activity: Reflect on your experiences of being tested in class, then work on the
following:

1. In your group, share your positive / negative experiences with test validity, format,
administration, and scoring. Consolidate your experiences using the given template (Activity
No. 3.3).

2. Make a resolution on becoming a responsible and conscientious (matino) teacher in the future
(Activity No. 3.4).

VIII. EVALUATION

Online quiz

IX. REFERENCES

Calmorin, L.P. (2004). Measurement and evaluation (3rd ed.). Mandaluyong City: National Book
Store.
CHED Memorandum Order No. 46, s. 2012 – Policy-Standard to Enhance Quality Assurance (QA)
in Philippine Higher Education through an Outcomes-Based and Typology-Based QA
Corpuz, B.B. (2015). Field study 5: Learning assessment strategies. Quezon City: Lorimar
Publishing.
Teaching Guide in Assessment of Student Learning 1
Other references indicated in the Course Outline

“In accordance with Section 185, Fair Use of Copyrighted Work of Republic Act 8293, the copyrighted works included in this material may be reproduced for
educational purposes only and not for commercial distribution.”
