
ASSESSMENT OF LEARNING

DEFINITION OF TERMS
ASSESSMENT Process of gathering, describing or quantifying information about student performance. May include paper-and-pencil tests, extended responses and performance assessment tasks (the latter are usually referred to as “authentic assessment” tasks).
MEASUREMENT A process of obtaining a numerical description of the degree to which an individual possesses a particular
characteristic. It answers the question “How much?”
EVALUATION Process of examining the performance of students. It also determines whether or not the students have met the lesson’s instructional objectives.
TEST Instrument or systematic procedure designed to measure the quality, ability, skill or knowledge of students by giving
a set of questions in a uniform manner. It answers the question “How does an individual student perform?”
TESTING Method used to measure the level of achievement or performance of the learners. It also refers to the
administration, scoring and interpretation of an instrument designed to elicit information about performance in a
sample of a particular area of behavior.
TYPES OF MEASUREMENT
NORM-REFERENCED
• Designed to measure the performance of a student compared with other students.
• Each individual is compared with other examinees and assigned a score, usually expressed as a percentile, a grade-equivalent score or a stanine (a short scoring sketch follows this comparison). Student achievement is reported for broad skill areas, although some norm-referenced tests do report student achievement for individual skills.
• The purpose is to rank each student with respect to the achievement of others in broad areas of knowledge and to discriminate between high and low achievers.

CRITERION-REFERENCED
• Designed to measure the performance of students with respect to some particular criterion or standard.
• Each individual is compared with a pre-determined set of standards for acceptable achievement; the performance of the other examinees is irrelevant. A student’s score is usually expressed as a percentage, and student achievement is reported for individual skills.
• The purpose is to determine whether each student has achieved specific skills or concepts and to find out how much students know before instruction begins and after it has finished.
• Also known as objective-referenced, domain-referenced, content-referenced and universe-referenced measurement.
DIFFERENCES
• Norm-referenced tests typically cover a large domain of learning tasks, with just a few items measuring each specific task; criterion-referenced tests typically focus on a delimited domain of learning tasks, with a relatively large number of items measuring each specific task.
• Norm-referenced tests emphasize discrimination among individuals in terms of relative level of learning; criterion-referenced tests emphasize what individuals can and cannot perform.
• Norm-referenced tests favor items of average difficulty and typically omit very easy and very hard items; criterion-referenced tests match item difficulty to the learning tasks, without altering item difficulty or omitting easy or hard items.
• Interpretation of norm-referenced results requires a clearly defined group; interpretation of criterion-referenced results requires a clearly defined and delimited achievement domain.
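To make the contrast in score interpretation concrete, here is a minimal Python sketch contrasting a norm-referenced reading of a raw score (a percentile rank, which depends entirely on how the rest of the group performed) with a criterion-referenced reading (percentage correct compared against a fixed mastery standard). The class scores, the 40-item test length and the 75% mastery cut-off are hypothetical values chosen only for illustration, not figures from this reviewer.

# Minimal sketch: two ways of interpreting the same raw score.
# All numbers below (class scores, 40 items, 75% cut-off) are hypothetical.

def percentile_rank(score, group_scores):
    """Norm-referenced view: percent of the group scoring below this student."""
    below = sum(1 for s in group_scores if s < score)
    return 100 * below / len(group_scores)

def mastery(score, total_items, cutoff=0.75):
    """Criterion-referenced view: percentage correct compared with a fixed standard."""
    percent_correct = 100 * score / total_items
    return percent_correct, percent_correct >= cutoff * 100

class_scores = [12, 18, 22, 25, 27, 30, 31, 33, 35, 38]  # raw scores on a 40-item test
student_score = 30

print(f"Percentile rank: {percentile_rank(student_score, class_scores):.0f}")  # depends on the group
percent, passed = mastery(student_score, total_items=40)
print(f"Percentage correct: {percent:.0f}% -> {'mastered' if passed else 'not yet mastered'}")  # ignores the group

The same raw score of 30 could look average in one class and outstanding in another, while its criterion-referenced interpretation stays the same as long as the standard does.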
TYPES OF ASSESSMENT
Placement Assessment
• Concerned with the entry performance of the student.
• Purpose is to determine the prerequisite skills, the degree of mastery of the course objectives and the best mode of learning.
• Purposes:
1. Determine the level of competence of the students
2. Identify students who already have knowledge about the lesson
3. Determine the causes of learning problems to formulate a plan for remedial action

Diagnostic Assessment
• Type of assessment given before instruction.
• Aims to identify the strengths and weaknesses of the students regarding the topics to be discussed.

Formative Assessment
• Used to monitor the learning progress of the students during or after instruction.
• Purposes:
1. Provide immediate feedback to both student and teacher regarding the successes and failures of the learning
2. Identify the learning errors that are in need of correction
3. Provide information to the teacher for modifying instruction to improve learning

Summative Assessment
• Usually given at the end of a course or unit.
• Purposes:
1. Determine the extent to which the instructional objectives have been met
2. Certify student mastery of the intended outcomes; used as a basis for assigning grades
3. Provide information for judging the appropriateness of the instructional objectives
4. Determine the effectiveness of instruction
MODES OF ASSESSMENT
Traditional Assessment
• Assessment in which students typically select an answer or recall information to complete the assessment. Tests may be standardized or teacher-made, and may be multiple choice, fill-in-the-blanks or matching type.
• Indirect measures of assessment, since the test items are designed to represent competence by extracting knowledge and skills from their real-life context.
• Items on standardized instruments tend to test only a domain of knowledge and skill to avoid ambiguity for the test takers.
• One-time measures that rely on a single correct answer to each item; there is limited potential for traditional tests to measure higher-order thinking skills.

Performance Assessment
• Assessment in which students are asked to perform real-world tasks that demonstrate meaningful application of essential knowledge and skills.
• Direct measures of student performance, because tasks are designed to incorporate contexts, problems and solution strategies that students will use in real life.
• Designed as ill-structured challenges, since the goal is to help students prepare for the complex ambiguities of life.
• Focus on processes and rationales. There is no single correct answer; instead, students are led to craft polished, thorough and justifiable responses, performances and products.
• Involve long-range projects, exhibits and performances that are linked to the curriculum.
• The teacher is an important collaborator in creating tasks, as well as in developing guidelines for scoring and interpretation.

Portfolio Assessment
• A collection of a student’s works, specifically selected to tell a particular story about the student.
• A portfolio is not a pile of student work that accumulates over a semester or year; it contains a purposefully selected subset of student work.
• It measures the growth and development of students.
FACTORS TO CONSIDER: GOOD TEST ITEM
VALIDITY Degree to which the test measures what it intends to measure. It is the usefulness of the test for a given purpose. A valid test is always reliable.
RELIABILITY Consistency of scores obtained by the same person when retested using the same instrument or one that is parallel to it (a short consistency sketch follows this section).
ADMINISTRABILITY The test should be administered uniformly to all students so that the scores obtained will not vary due to factors other than differences in the students’ knowledge and skills. There should be clear provision of instructions for the students, the proctors and the scorer.
SCORABILITY The test should be easy to score. Directions for scoring should be clear, and the test developer should provide the answer sheet and the answer key.
APPROPRIATENESS Mandates that the test items the teacher constructs must assess the exact performances called for in the learning objectives. The test items should require the same performance of the student as specified in the learning objectives.
ADEQUACY The test should contain a wide sampling of items covering the educational outcomes or abilities measured, so that the resulting scores are representative of the total performance in the areas measured.
FAIRNESS Mandates that the test should not be biased against any of the examinees. It should not be offensive to any examinee subgroup. A test can only be good if it is also fair to all test takers.
OBJECTIVITY Represents the agreement of two or more raters or test administrators concerning the score of a student. If two raters who assess the same student on the same test cannot agree on the score, the test lacks objectivity and neither judge’s score is valid; thus, lack of objectivity reduces test validity in the same way that lack of reliability influences validity.
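Reliability and objectivity, as defined above, both come down to consistency of scores, and that consistency can be checked numerically. Below is a minimal Python sketch, assuming hypothetical score lists, of two common checks: a test-retest correlation for reliability and the percentage of exact agreement between two raters for objectivity. The numbers are illustrative only; real test analysis would normally use more formal reliability and agreement indices.

# Minimal sketch of two consistency checks; all score lists are hypothetical.
from math import sqrt

def pearson(x, y):
    """Correlation between two score sets; values near 1.0 suggest consistent (reliable) scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sx = sqrt(sum((a - mean_x) ** 2 for a in x))
    sy = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sx * sy)

def exact_agreement(rater1, rater2):
    """Percentage of students to whom both raters gave the same score (objectivity)."""
    same = sum(1 for a, b in zip(rater1, rater2) if a == b)
    return 100 * same / len(rater1)

# Reliability: the same students tested twice with the same (or a parallel) instrument.
first_testing = [35, 28, 40, 22, 31, 27]
second_testing = [33, 30, 39, 24, 30, 28]
print(f"Test-retest correlation: {pearson(first_testing, second_testing):.2f}")

# Objectivity: two independent raters scoring the same five essays.
rater_a = [4, 3, 5, 2, 4]
rater_b = [4, 3, 4, 2, 4]
print(f"Exact rater agreement: {exact_agreement(rater_a, rater_b):.0f}%")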