
Chapter No: 01

Introduction to Educational Assessment


Test:
Test means examination, investigation or check.
• Testing is a process to find out how well something works.
• Testing tells what level of knowledge or skills have been acquired.
• Testing is a set of questions, used to check the abilities, aptitudes, skills, or
performance of an individual or group.
• Testing is an act of giving students an examination to determine what they know.
• In testing a test or quiz is used to examine someone's knowledge to determine what he or
she knows or has learned. Testing measures the level of skill or knowledge that has
been reached.
There are different types of tests, e.g. objective type tests, subjective type tests, achievement
tests, aptitude tests, attitude tests, intelligence tests, written tests, oral tests, etc.
In sum, one should view testing as a bridge-building process between teaching and learning
and classroom tests as mirrors in which teachers and students see their reflections clearly.

The meaning of the word test came from the 14th Century practice of using fire to heat a small
earthen pot called testum (Latin) in order to determine the quality of a precious metal. In this
way, the process of testing uncovers the quality of a student's academic achievement.

Assessment
Assessment is the process of documenting and measuring the performance of students.
• The term assessment refers to the wide variety of methods or tools that educators use
to evaluate, measure, and document the academic readiness, learning progress, skill
acquisition, or educational needs of students.
• Assessment is the systematic process of collecting, analyzing and interpreting
students’ academic data to know to what extent students have achieved educational
objectives.
• Assessment is the systematic collection, analysis, and use of information about
educational programs undertaken for the purpose of improving student learning and
development.
• In an educational perspective, assessment is the process of describing,
collecting, recording, scoring, and interpreting information about learning.
• Assessment is Knowing what Students Know.

Assessment Steps:
1. Develop learning objectives.
2. Check for alignment between the course and the objectives.
3. Develop an assessment plan.
4. Collect assessment data.
5. Use results to improve the program.
6. Regularly examine the assessment process and correct, as needed.
Measurement
To find out the size, length, width, weight of something by comparing it with a standard unit is
called measurement.
In education, measurement is assigning numbers to the performance of students by adopting
specific rules and standards. When a teacher administers a test and, after marking, assigns numbers to the
students’ performance by using a specific key, this is also educational measurement.
Measurement is assigning numbers to someone’s performance; it is the numerical
presentation of students’ achievements, knowledge, skills, attributes, attitudes and activities.
Evaluation:
Evaluation means to judge someone's performance correctly and fairly. The term evaluation is
closely related to measurement, but it is more inclusive than the term measurement. Evaluation is a
very important part of the educational system. Through evaluation, teachers as well as students
come to know the actual position/performance of the instructional process. Evaluation
provides feedback to improve the deficiencies of the learners in the whole learning process.
According to Norman E. and Robert L.:

“Evaluation is a systematic process of collecting, analyzing
and interpreting information to determine the extent
to which the pupils are achieving instructional objectives.”

Evaluation includes both quantitative and qualitative description along with
value judgment. So evaluation adds value judgment to describe the quality of data.
Principles of Evaluation:
As evaluation is a systematic process, it must be carried out using effective techniques and principles,
which are as follows:
• It must be clearly stated what is to be evaluated.
• A variety of evaluation techniques should be used for a comprehensive evaluation.
• An evaluator should know the limitation of different evaluation techniques.
• The techniques of evaluation must be appropriate and valid.
• Evaluation gives the value judgment.
Assessment of and Assessment for Learning.
Assessment of learning
Assessment of learning is the assessment conducted to check the progress of learning; often it is
conducted after the completion of the learning process. Assessment of learning refers to strategies designed to
confirm what students know, demonstrate learning outcomes or the goals of programs. It
certifies ability and makes decisions about students’ future placements. It is designed to
provide evidence of achievement to parents, other educators, the students themselves, and
sometimes to outside groups (e.g., employers, other educational institutions). Assessment of
learning is the assessment that becomes public and results in statements or symbols about
how well students are learning. This assessment is accompanied by a number or letter grade
(summative) e.g. 18 out of 20 or grade A etc.
• It compares one student's achievement with standards
• Its results can be communicated to the student and parents
• It occurs at the end of the learning unit
• The purpose of assessment of learning is to measure, certify, and report the level of
students ‘learning, so that reasonable decisions can be made about students.
• Assessment of learning requires the collection and interpretation of information about
students ‘accomplishments.
Assessment for learning
Assessment for learning is best defined as a process by which assessment information is
used by teachers to modify their teaching strategies, and by students to amend their learning
strategies. Assessment, teaching, and learning are intimately linked, as each informs the
others.
Assessment for learning
• comprises two phases—
1. initial or diagnostic assessment and
2. formative assessment

• plan and modify teaching and learning programmes for individual students, groups of
students, and the class as a whole
• pinpoint students’ strengths so that both teachers and students can build on them
• identify students’ learning needs in a clear and constructive way so they can be addressed
• involve parents, families in their children's learning.
For students
Assessment for learning provides students with information and guidance so they can plan
and manage the next steps in their learning. Assessment for learning uses information to lead
from what has been learned to what needs to be learned next.
Assessment for learning should use a range of approaches. These may include:
• day-to-day activities, such as learning conversations
• a simple mental note taken by the teacher during observation
• student self and peer assessments
• a detailed analysis of a student’s work
• assessment tools, which may be written items, structured interview questions, or items
teachers make up themselves.
• assessment can be based on a variety of information sources (e.g., portfolios, works in
progress, teacher observation, conversation)
• verbal or written feedback to the students.
• no grades or scores are given
• occurs throughout the learning process.
So Assessment for learning (AFL) is a method for teaching and learning that creates feedback
which is then used to improve students’ performance. Students become more involved in the
learning process and from this gain confidence in what they are expected to learn and to what
standard.
AFL involves students becoming more active in their learning and starting to ‘think like a
teacher’. They think more actively about where they are now, where they are going and how
to get there.
Principles of Assessment
1. Assessment should be valid it means assessment tasks effectively assess student
attainment of the intended learning outcomes at the appropriate level.
2. Assessment should be reliable and consistent.
3. Information about assessment should be clear, accessible and transparent. Complete,
accurate assessment procedures should be made available to students, staff and other external
assessors or examiners.
4. Assessment should be comprehensive and reasonable. As far as is possible without
compromising academic standards, wide-ranging and unbiased assessment should
ensure that tasks and procedures do not disadvantage any group or individual.
5. Assessment should be an essential part of programme and should relate directly to
the programme aims and learning outcomes.
6. The amount of assessed work should be manageable. The scheduling of assignments
and the amount of assessed work should be usable without overloading staff or
students.
7. Timely feedback that promotes learning and facilitates improvement should be an
integral part of the assessment.
8. It must be clearly stated what is to be assessed
9. A variety of assessment techniques should be used for a comprehensive assessment: It
is not possible to assess all the aspects of achievement with the help of a single
technique. For better assessment, techniques like objective tests, essay tests,
observational techniques, etc. should be used, so that a complete picture of the pupil's
achievement and development can be obtained.
10. An assessor should know the limitations of different assessment techniques:
Assessment can be done with the help of simple observation or highly developed
standardized tests. But whatever the instrument or technique may be it has its own
limitation.
11. The technique of assessment must be appropriate for the characteristics or performance
to be measured: Every assessment technique is appropriate for some uses and
inappropriate for others. Therefore, while selecting an assessment technique, one must
be well aware of the strengths and limitations of the technique.
Classification of Assessment
Assessment is often classified into the following forms.
Placement assessment
Placement assessment is used to determine students’ performance at the beginning of the
process. Placement assessment is used to place students according to prior achievement
through placement testing, i.e. the tests that colleges and universities use to assess admission
criteria/standards and place students into their initial classes. Placement assessment is also
referred to as pre-assessment or initial assessment, as it is conducted prior to instruction
to establish a baseline (starting point) from which individual student growth can be measured.
It is also concerned with the student's entry performance. This type of assessment also helps to
place the right person at the right place. It is used to know what the student's
skill level is in the subject, and it helps the teacher to explain the material more efficiently.
Formative assessment
Formative assessment is generally carried out during a process/course/project. This type of
assessment is used to monitor the learning progress of the instructional process. Formative
assessment is also referred to as "educative assessment" because it is used to aid/help learning.
In an educational setting, formative assessment might involve a teacher (or peer) or the learner
providing feedback on a student's work, and it would not necessarily be used for grading
purposes. It helps both students and the teacher to identify immediate learning difficulties. It
can only be used to improve the learning process; it can’t be used to assign course grades.
The formative assessments aim to see if the students understand the instruction before doing
a summative assessment.
Summative assessment
Summative assessment is generally carried out at the end of a course/process or project. In an
educational setting, summative assessment is used to assign grades to the students’
performance. Summative assessment is used to summarize what the students have learned.
This type of assessment is typically graded (e.g. pass/fail, 0-100) and can take the form of
tests, exams or projects. Summative assessments are often used to determine whether a
student has passed or failed a class.
Diagnostic assessment
Diagnostic assessment is conducted to diagnose learning difficulties that occur during the
learning process. This type of assessment is used when corrective measures prescribed on the
basis of formative assessment fail to solve the problem. Diagnostic assessment is much more
comprehensive and detailed than the formative assessment.

Purpose of Assessment:
The main purpose of teaching learning process is to enable the pupil to achieve intended
learning outcomes. In this process the learning objectives are fixed then after the instruction
learning progress is assessed by tests and other assessment devices. The purpose of
assessment process can be summarized as following:
1. Assessment helps in preparing instructional objectives:
Learning outcomes expected from classroom instruction can be fixed by using assessment
results. What type of knowledge and understanding should students develop? What skills
should they display? What interests and attitudes should they develop? These questions can be
answered only when we identify the instructional objectives and state them clearly in terms of
intended learning outcomes. Only a good assessment process helps us fix a sound set of
instructional objectives.
2. Assessment process helps in assessing the learner’s needs:
In the teaching learning process it is very much necessary to know the needs of the learners.
The instructor must know the knowledge and skills to be learned by the students. Assessment
helps to know whether the students possess required knowledge and skills to continue the
instruction.
3. Assessment helps in providing feedback to the students:
An assessment process helps the teacher to know the learning difficulties of the students. It
helps to bring about an improvement in different school practices. It also ensures an ap-
propriate follow-up service.
4. Assessment helps in preparing programmed materials:
Programmed instruction is a continuous series of learning sequences. First, the instructional
material is presented in a limited amount; then a test is given on the presented instructional
material; next, feedback is provided on the basis of the correctness of the response made. So
without an effective assessment process, programmed learning is not possible.
5. Assessment helps in curriculum development:
Curriculum development is an important aspect of the instructional process. Assessment data
enable the curriculum developer to determine the effectiveness of new procedures and to
identify areas where revision is needed. Assessment also helps to determine the degree to
which an existing curriculum is effective. Thus assessment data are helpful in
constructing the new curriculum and evaluating the existing curriculum. So this is the
important purpose of assessment.
6. Assessment helps in reporting pupil’s progress to parents:
A systematic assessment procedure provides a comprehensive picture of each pupil’s
progress. This comprehensive nature of the assessment process helps the teacher to report the
progress of the pupil to the parents. This type of information about the pupil provides the foundation for
the most effective co-operation between the parents and teachers.
7. Assessment data are very much useful in guidance and counseling:
Assessment procedures are very much necessary for educational, vocational and personal
guidance. In order to assist the pupils to solve their problems in the educational, vocational
and personal fields, the counsellor must have objective knowledge of the pupils' abilities,
interests, attitudes and other personal characteristics. An effective assessment procedure helps
in getting a comprehensive picture of the pupil, which leads to effective guidance and
counselling. This is an important purpose of assessment.
8. Assessment helps in effective school administration:
Assessment data help the administrators to judge the extent to which the objectives of the
school are being achieved, to find out the strengths and weaknesses of the curriculum, and to
arrange special school programmes. It also helps in decisions concerning admission,
classes, programs and promotion of the students.
9. Assessment data are helpful in school research:
In order to make the school programme more effective, researches are necessary.
Assessment data help in research areas like comparative study of different curricula,
effectiveness of different methods, effectiveness of different organisational plans, etc. So all
above are the purposes of assessment.

Interpretation of Results
Interpretation of results is important to know the position and performance of a student.
Results are interpreted so that information regarding results can be understood easily.
What is a test score?
A test score is a piece of information, usually a number, that conveys the performance of an
examinee on a test.
Test scores are interpreted with a
1. norm-referenced or
2. criterion-referenced interpretation,
A norm-referenced interpretation means that the score conveys meaning about the
examinee with regard to their standing among other examinees. In other words, in norm-
referenced interpretation we compare the performance of one student with reference to other
students in the same group, e.g. 1st position in the class, 3rd position or 10th position in the
class, etc.
A criterion-referenced interpretation means that the score conveys information about the
examinee with regard to a specific criterion, regardless of other examinees' scores. In this
interpretation we compare the performance of students against a predetermined criterion, e.g.
if a student gets 33% or 50% then he/she passes, otherwise fails.
Percentage (%): In percentage, the result is interpreted to show the position out of a hundred.
Average: In average, the result is interpreted to show the central position of the scores.
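To make the two interpretations concrete, here is a minimal sketch (with hypothetical student names and marks, assuming a 50-mark test and a 33% pass criterion) showing how the same scores can be read in the criterion-referenced way, in the norm-referenced way, and as a percentage and average.

```python
# A minimal sketch (hypothetical scores) contrasting the two interpretations
# described above: criterion-referenced (compare against a fixed pass mark)
# and norm-referenced (rank within the group).

scores = {"Ali": 42, "Sara": 35, "Bilal": 27, "Hina": 16}   # marks out of 50
total_marks = 50
pass_criterion = 33  # percent, an assumed cut-off

# Criterion-referenced: each score is judged against the cut-off alone.
for name, score in scores.items():
    percentage = score / total_marks * 100
    status = "pass" if percentage >= pass_criterion else "fail"
    print(f"{name}: {percentage:.0f}% -> {status}")

# Norm-referenced: each score is judged by its standing among the others.
ranked = sorted(scores, key=scores.get, reverse=True)
for position, name in enumerate(ranked, start=1):
    print(f"{name} stands at position {position} in the class")

# Average (mean) gives the central position of the group's scores.
average = sum(scores.values()) / len(scores)
print(f"Class average: {average:.1f} out of {total_marks}")
```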
Uses of Results Interpretation.
• Students and parents need to interpret the scores and any teacher comments they
receive. The results show students' mastery of learning, as well as areas of learning that need
to be developed.
• Teachers need to interpret both student and class wide results to inform instructional
practices (who needs remediation; how well the class is progressing toward mastery
of the standards; how effective instruction has been).
• Professional learning community team members interpret test data to learn how well
the students have progressed in mastering the standards. By comparing class scores on
an item-by-item basis, team members also learn where curriculum and instructional
practices may be weak (if all classes scored poorly on an item). If one class scored
consistently higher than other classes, the teacher of that class can share instructional
techniques that helped the students.
• School wide teacher teams use test results to assess overall growth and areas
needing improvement as they make plans for the next year.

Teacher made VS Standardized test.


Standardized Test
A standardized test is a test that is administered and scored in a consistent/standard manner.
Standardized tests are designed in such a way that the questions, conditions for administering,
scoring procedures, and interpretations are consistent and uniform. Any test in which the same
test is given in the same manner to all test takers, and graded in the same manner for everyone,
is a standardized test.
The following are the important points of a standardized test.
1. Generally prepared by specialists who know very well the principles of test
construction;
2. Prepared very carefully following principles of test construction;
3. Given to a large proportion of the population for which they are intended for
the computation of norms;
4. Has proper validity and reliability.
5. Generally are highly objective;
6. Measure innate and actual capacities and characteristics such as achievement;
7. Accompanied by manuals of instructions on how to administer and score the
tests and how to interpret the results;
8. Generally copyrighted.
Examples of Standardized Tests: TOEFL (Test of English as a Foreign Language), TOEIC
(Test of English for International Communication), IELTS (International English
Language Testing System), GMAT (Graduate Management Admission Test), etc.
Teacher-made test
A teacher-made test is prepared by the teacher to assess the performance of the
students according to the demands of the teacher, students, school and class.
The following are the important points of a teacher-made test.
1. Made by teachers according to the teacher's will.
2. Often prepared hurriedly and haphazardly
3. Usually given only to a class or classes for which the tests are planned;
4. Teacher-made tests are not subjected to any statistical procedures to determine
their validity and reliability;
5. May be objective or essay type, in which case scoring is subjective;
6. Have no norms unless the teacher computes the median, mean, and other
measures for comparison and interpretation;
7. Generally measure subject achievement only;
8. Intended to be used only once or twice to measure achievement of students in
a subject matter studied during a certain period;
9. Do not have manuals of instructions, only the directions for the different types
of tests which may be given orally or in writing.
10. Not copyrighted.
Chapter No: 02
Planning Classroom Assessment
Instructional Aims
Aims: Aim means target, ambition, goal, purpose, and wish. It also means:
• to point at a target
• to direct at someone or something
• to try to achieve something
• a purpose or intention.
Aim gives shape and direction to the plans for the future. Aims are just a starting point and
represent as an idea, an aspiration (ambition) and direction. They act as guide to action and
provide a general framework for the overall educational process. They are relatively few in
number but are broad in scope. Aims are inspirational and visionary in character and
permanently open-ended (having no predetermined limit). For example, the concepts of a
“Good Life” or “Educated Citizens” admit of several interpretations. An aim has to be analysed
and broken into its constituent parts for accomplishment. According to Bloom (1971):

“Aims just provide overall direction and guidance to a school
system; but they are not always helpful to teachers in classroom
instruction. They are basically meant to provide direction to
policy-makers at different levels: local, provincial or national.”
Educational aims refer to the overall purpose of education; they represent the needs and
aspirations of society.
Aims are general statements that provide direction to educational action. Aims are usually
written in broad, non-specific terms using words like learn, know, understand and appreciate, and these
are not directly measurable. Aims may serve as organizing principles of educational direction
for more than one grade. Indeed these organizing principles may encompass the range of
educational direction for entire programs.
Goals are statements of educational purpose which are more specific than aims. Goals too
may cover an entire program, subject area, or multiple grade levels. They may be in either
unstructured language or in more specific behavioral terms.
Goals are broad, generalized statements about what is to be learned. Think of them as a
target to be reached, or "hit."
Instructional objectives describe the skills, knowledge, abilities or capabilities that students
should possess after they complete the instruction. The starting point for designing a course
of study should include these instructional objectives; the objectives determine the intended
outcomes of the instruction. Good instructional objectives describe an observable
performance, so instructional objectives:
• Describe a skill that students are expected to possess after instruction
• Describe a measurable performance
• Objectives are usually specific statements of educational target.
• Objective can be entertained at class level and school level.
• Objectives are usually considered to be specific in nature, written in terms of what
students will know and be able to do at the end of the instruction.
• Objectives are the foundation upon which you can build lessons and assessments that
meet your overall course/lesson goals.
• Think of objectives as tools you use to make sure you reach your goals. They are
the arrows you shoot towards your target (goal).

GENERAL RULES FOR STATING SPECIFIC OBJECTIVES


• Instructional Objectives should be stated in terms of learner’s performance and not
teacher’s performance
• The Objective should specify what the learner will be able to do at the end of the
lesson.
• Use action verbs
• Use verbs that refer to any observable activity displayed by a learner
• State in terms of learning outcome instead of the learning process
• Specify the standards of minimum acceptable performance
• An objective should not consist of more than one learning outcome
Taxonomy of Educational objectives
The idea of creating a taxonomy of educational objectives was given in the 1950s by Benjamin
Bloom, then assistant director of the University of Chicago's Board of Examinations. He
believed that this could be facilitated by developing a framework into which objectives could be
classified. Educational objectives can be divided into the following three main
categories/domains:
1. Cognitive Domain
2. The Affective Domain
3. Psychomotor Domain
Cognitive Domain
Cognitive objectives are designed to increase an individual's knowledge. The taxonomy of
cognitive objectives was originated by Benjamin Bloom and his collaborators in the 1950s.
Bloom describes the following categories of cognitive learning.

Starting with basic factual knowledge, the categories progress through comprehension,
application, analysis, synthesis, and evaluation.
• Knowledge - Remembering or recalling information.
• Comprehension - The ability to obtain meaning from information.
• Application - The ability to use information.
• Analysis - The ability to break information into parts to understand it better.
• Synthesis - The ability to put materials together to create something new.
• Evaluation - The ability to check, judge, and critique materials.
1. Knowledge
basic facts
2. Comprehension
translation, interpretation, extrapolation of information
3. Application
using knowledge
4. Analysis
breaking a whole of knowledge into parts and distinguishing its separate elements,
relationships and organizational elements
5. Synthesis
putting parts together into a new form
6. Evaluation
judgments based on logical consistency

Affective Domain ~ Values


Affective objectives are designed to change an individual's attitude, choices, and
relationships. Krathwohl and Bloom created a taxonomy for the affective domain that lists
levels of commitment (indicating affect) from lowest to highest.

The levels are described as follows:


Affective Domain Hierarchy
• Receiving - Being aware of or attending to something in the environment. Example: The
individual reads a book passage about civil rights.
• Responding - Showing some new behaviors as a result of experience. Example: The
individual answers questions about the book, reads another book by the same author, another
book about civil rights, etc.
• Valuing - Showing some definite involvement or commitment. Example: The individual
demonstrates this by voluntarily attending a lecture on civil rights.
• Organization - Integrating a new value into one's general set of values, giving it some
ranking among one's general priorities. Example: The individual arranges a civil rights rally.
• Characterization by Value - Acting consistently with the new value. Example: The
individual is firmly committed to the value, perhaps becoming a civil rights leader.

Here are key verbs for each level you can use when writing affective objectives:
Key Verbs for the Affective Domain
• Receiving: accept, attend, develop, recognize
• Responding: complete, comply, cooperate, discuss, examine, obey, respond
• Valuing: accept, defend, devote, pursue, seek
• Organization: codify, discriminate, display, order, organize, systematize, weigh
• Characterization: internalize, verify

1. Receiving
awareness, willingness to receive, selected attention
2. Responding
willing responses, feelings of satisfaction
3. Valuing
acceptance, preference, commitment
4. Organization
conceptualization of values, organization of a value system
5. Characterization
reflects a generalized set of values, a philosophy of life
Psychomotor Domain ~ Skills
Psychomotor objectives are designed to build a physical skill. This domain is characterized
by progressive levels of behaviors from observation to mastery of a physical skill. Simpson
(1972) built this taxonomy on the work of Bloom and others:
1. Perception - Sensory cues guide motor activity.
2. Set - Mental, physical, and emotional dispositions that make one respond in a certain
way to a situation.
3. Guided Response - First attempts at a physical skill. Trial and error coupled with
practice lead to better performance.
4. Mechanism - The intermediate stage in learning a physical skill. Responses are
habitual with a medium level of assurance and proficiency.
5. Complex Overt Response - Complex movements are possible with a minimum of
wasted effort and a high level of assurance they will be successful.
6. Adaptation - Movements can be modified for special situations.
7. Origination - New movements can be created for special situations.

• Imitation - Observing and copying someone else.


• Manipulation - Guided via instruction to perform a skill.
• Precision - Accuracy, proportion and exactness exist in the skill performance without
the presence of the original source.
• Articulation - Two or more skills combined, sequenced, and performed consistently.
• Naturalization - Two or more skills combined, sequenced, and performed consistently
and with ease. The performance is automatic with little physical or mental exertion.
Harrow (1972) developed this taxonomy. It is organized according to the degree of
coordination including involuntary responses and learned capabilities:
• Reflex movements - Automatic reactions.
• Basic fundamental movement - Simple movements that can build to more complex
sets of movements.
• Perceptual - Environmental cues that allow one to adjust movements.
• Physical activities - Things requiring endurance, strength, vigor, and agility.
• Skilled movements - Activities where a level of efficiency is achieved.
The following list is a synthesis of the above taxonomies:
Psychomotor Domain Hierarchy
• Observing - Active mental attending of a physical event. Example: The learner watches a
more experienced person. Other mental activity, such as reading, may be a part of the
observation process.
• Imitating - Attempted copying of a physical behavior. Example: The first steps in learning a
skill. The learner is observed and given direction and feedback on performance. Movement is
not automatic or smooth.
• Practicing - Trying a specific physical activity over and over. Example: The skill is repeated
and the entire sequence is performed repeatedly. Movement is moving towards becoming
automatic and smooth.
• Adapting - Fine tuning; making minor adjustments in the physical activity in order to perfect
it. Example: The skill is perfected. A mentor or a coach is often needed to provide an outside
perspective on how to improve or adjust as needed for the situation.
Here are key verbs you can use when writing psychomotor objectives:
Key Verbs for the Psychomotor Domain
bend, calibrate, construct, differentiate (by touch), dismantle, display, fasten, fix, grasp, grind,
handle, heat, manipulate, measure, mend, mix, operate, organize, perform (skillfully), reach,
relax, shorten, sketch, stretch, write

Developing Assessment Framework

Developing an assessment framework covers the whole assessment procedure: the process,
techniques, methods, time period, and types of tests, etc. It also clarifies all relevant aspects
of the assessment procedure. The assessment framework gives a clear picture of the test types,
total numbers, and methods of administration. It covers all relevant steps by describing their
rules, strategies and code of conduct, and it finalizes the whole process of assessment. It
provides the complete structure of assessment and guides the test maker to construct an
appropriate test. It also describes the complete data about the learner who is assessed, such as
class, subject, level, term, time duration, purpose, etc.

An assessment framework may include:
I. Objectives of assessment
II. Procedures
III. Content included
IV. Types of test
V. Test specification
VI. Principles/rules
VII. Scoring procedure
VIII. Grading process

Developing test specification


Developing the test specification is the most important task in planning a classroom test.
It acts as a guide for test construction. A table of test specification is a three-dimensional chart
showing a list of instructional objectives, content areas and types of items.
So, it includes the following steps:
1. Determining the weightage of different instructional objectives
2. Determining the weightage of different content areas.
3. Determining the types of items to be included.
Now we discuss the steps one by one:
1. Determining the weightage of different instructional objectives:
There is a vast variety of instructional objectives, and we cannot include all of them in a single test.
In a written test we can't measure the psychomotor and affective domains; we can
only measure the cognitive domain. It is also true that all subjects do not contain the
different learning objectives, like knowledge, understanding, application and skill, in equal
proportion.
Therefore, it must be planned how much weightage is to be given to different instructional
objectives. While deciding this, we must keep in mind the importance of the particular
objectives for that subject.
For example, if we have to prepare a test in General Science for Class 10, we may give
the weightage to different instructional objectives as follows:

Weightage given to different instructional objectives in a test of 100 marks:

Instructional Objective    Weightage in %    No. of Questions (Marks)
Knowledge                  20%               20
Understanding              40%               40
Application                40%               40
2. Determining the weightage of different content areas.
The second step in preparing the table of specification is to determine the content area.
It indicates the area in which the students are expected to show their performance. It
helps to prevent repetition or omission of any unit. Now the question arises: how much
weightage should be given to each unit? Some experts say that it should be decided by
the concerned teacher, keeping the importance of the chapter in mind. Generally, it is
decided on the basis of the pages of the topic, the total pages in the book and the number of items
to be prepared.
For example, if a test of 100 marks is to be prepared then the weightage to different
topics will be given as following:

Weightage of the Topic = (Total Number of Items/Marks × Number of Pages in the Topic) / (Total Number of Pages in the Book)

If a book contains total pages = 250
Total test items to be constructed = 100
The topic included in the test contains pages = 25
Then the weightage of this topic will be: (100 × 25) / 250 = 10

So, in a test of 100 total items, the weightage of the topic containing 25 pages will be
10 items.
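The same proportional rule can be applied to every topic in the book so that the item counts add up to the planned test length. The short sketch below follows the formula above; the page counts of the topics other than the 25-page one are assumed for illustration.

```python
# A minimal sketch applying the weightage rule described above:
# items for a topic = total items * (pages in topic / total pages in book).
# The 250-page book and 100-item test match the worked example;
# the per-topic page counts are assumed for illustration.

total_items = 100
topic_pages = {"Topic A": 25, "Topic B": 75, "Topic C": 100, "Topic D": 50}
total_pages = sum(topic_pages.values())   # 250 pages in the whole book

for topic, pages in topic_pages.items():
    items = round(total_items * pages / total_pages)
    print(f"{topic}: {pages} pages -> {items} items")

# Topic A: 25 pages -> 10 items, matching the worked example above.
```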
3. Determining the types of items to be included
The third important step in preparing the table of specification is to decide the appropriate types
of items. Items used in test construction can generally be divided into two types:
I. Objective type items
II. Essay type items
Appropriate item types should be selected according to the learning outcomes to be
measured. For example, when the learning outcome is writing or naming, then supply type
items are useful. If the outcome is identifying a correct answer, then selection type
items are useful. So the teacher must decide and select appropriate item types as
per the learning outcomes.
4. Preparing the three-way Chart:
Preparation of the three-way chart is last step in preparing table of specification. This
chart relates the instructional objectives to the content area and types of item. In a table of
specification, the
• instructional objectives are listed across the top of the table,
• content areas are listed down the left side of the table and
• under each instructional objective the types of items are listed content-wise.
On this basis, a model table of specification can be prepared.
Selecting appropriate type of test items:
Selection of types of test item should be based on the types of outcomes you are trying
to assess. Certain item types, such as true/false, supplied response, and matching, work
well for assessing lower-order outcomes (i.e. knowledge or comprehension goals), while
other item types, such as essays, performance assessments, and some multiple-choice
questions, are better for assessing higher-order outcomes (i.e. analysis, synthesis, or
evaluation goals). According to the objectives, it may be useful to create a test blueprint
(table of specification) that specifies your
1. outcomes,
2. the types of items you plan to use to assess those outcomes, and
3. the difficulty level of the items.
On your test blueprint (table of specification), you may wish to assign a lower difficulty
level to items that assess lower-order skills (knowledge, comprehension) and a higher
difficulty level to items that assess higher-order skills (synthesis, evaluation).
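As a rough illustration of what such a blueprint can look like when written down, the sketch below records, for each content area, the item type and number of items aimed at each objective level, and then totals the items per level. The topics, item types and counts are assumed, not taken from the text.

```python
# A minimal sketch of a test blueprint (table of specification): for each
# content area it records how many items of each type target each objective
# level. Topics and counts are assumed for illustration only.

blueprint = {
    "Cells":       {"knowledge": ("multiple-choice", 4),
                    "comprehension": ("true/false", 3),
                    "application": ("short answer", 2)},
    "Electricity": {"knowledge": ("multiple-choice", 5),
                    "application": ("short answer", 3),
                    "evaluation": ("essay", 1)},
}

# Total items per objective level across all content areas.
totals = {}
for topic, cells in blueprint.items():
    for objective, (item_type, count) in cells.items():
        totals[objective] = totals.get(objective, 0) + count

print(totals)   # e.g. {'knowledge': 9, 'comprehension': 3, 'application': 5, 'evaluation': 1}
```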
The following are some important points that should be kept in mind when selecting
appropriate types of test items:
• Test users should select tests that meet the intended purpose and that are
appropriate for the intended test takers.
• Define the purpose for testing, the content and skills to be tested, and the intended
test takers. Select and use the most appropriate test based on a thorough review of
available information.
• Select types of tests that can provide clear, accurate, and complete required
information.
• Select tests prepared by test makers who have appropriate knowledge, skills,
and training.
• Evaluate evidence of the technical quality of the test provided by the test
developer.
• Evaluate representative samples of test questions or practice tests, directions,
answer sheets, manuals, and score reports before selecting a test.
• Evaluate procedures and materials used by test developers, as well as the resulting
test, to ensure that potentially offensive content or language is avoided.
• Select tests with appropriately modified forms or administration procedures for
test takers who need them.
Chapter No: 03
Types of Achievement Tests
Subjective VS Objective:
Subjective information or writing is based on personal opinions, personal interpretations,
personal points of view, personal emotions and judgment. So, it is based on individual opinion or
experience.
Objective: Objective means impartial, neutral, unbiased so it is not influenced by personal feelings
or bias. So objective information or writing is fact-based, measurable and observable.

Constructing Objective Test items.


Objective test items are highly structured test items. They require the pupil to supply a word or
to select the correct answer from the given options. In an objective type test the answer to each
item is fixed and predetermined, and the response can be marked right
or wrong without personal opinion.
Different types of Objective tests.
The most common objective test questions are

Selective Types: Following types are included in selective type test.
1. Multiple-choice,
2. True false
3. Matching items.

Supply types: Following types are included in supply type test.
1. Fill in the Blanks
2. Short Answer
1. Multiple choice questions -
Probably the most commonly used objective questions are multiple-choice questions.
A multiple-choice question consists of two parts:
i. the stem - the statement or question.
ii. the choices - the options that complete the stem statement or question; there are usually 3 to 5
of them, and the incorrect options are known as distractors.
Students/candidates/examinees have to select the correct choice from the given options that
completes the thought expressed in the stem. There is a 20% chance of guessing the
correct choice if there are 5 choices listed. Although multiple choice questions are most often
used to test your memory of details, facts, and relationships, they are also used to test your
comprehension and your ability to solve problems. Reasoning ability is a very important skill
for doing well on multiple choice tests.

• stem - the text of the question


• options - the choices provided after the stem
• the key - the correct answer in the list of options
• distracters - the incorrect answers in the list of options
2. True-False Questions
The true-false question has only two options. Your chances are always 50-50 with this type of
item. These test items measure the ability of the students to identify and match correct or wrong
statements of the facts, definitions, terms, principles and the like.
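The 20% and 50-50 figures mentioned above can be checked with a small simulation of blind guessing; the test length and number of trials below are assumed for illustration.

```python
import random

# A minimal simulation of the guessing odds mentioned above: a blind guesser
# has a 1-in-5 (20%) chance per multiple-choice item with 5 options and a
# 1-in-2 (50%) chance per true-false item. Test length and trials are assumed.

random.seed(1)
num_items, trials = 20, 10_000

def expected_guess_score(num_options):
    """Average score (out of num_items) when every answer is a blind guess."""
    total = 0
    for _ in range(trials):
        total += sum(random.randrange(num_options) == 0 for _ in range(num_items))
    return total / trials

print("5-option MCQ:", expected_guess_score(5))   # close to 20% of 20 = 4
print("True-false  :", expected_guess_score(2))   # close to 50% of 20 = 10
```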
3. Matching Questions
Matching questions give students an opportunity for guessing. Matching items occur in two
columns, along with directions on the basis of which the two columns are to be matched. The
first column, for which matchings are made, is called the 'premises', and the second column,
from which the selections are made, is called the 'responses'. The basis on which the matching
is to be made is described in the 'directions'.
Advantages of Objective Type Items
• Quick and easy to score, by hand or electronically
• Can be written so that they test a wide range of higher-order thinking skills
• Can cover lots of content areas on a single exam and still be answered in a class period
• It possesses very powerful discriminating power.
• Variety of learning objectives can be measured with the help of objective type items.
• These are more valid and reliable.
• It possesses objectivity in scoring.
Disadvantages of Objective Type Items
• Often test literacy skills: “if the student reads the question carefully, the answer is
easy to recognize even if the student knows little about the subject”
• Provide unprepared students the opportunity to guess, and with guesses that are right,
they get credit for things they don’t know
• Take time and skill to construct (especially good questions)
• Encourage guessing, and reward for correct guesses
• Encourage students to memorize terms and details, so that their understanding of the
content remains superficial
Characteristics of Objective type test:

Followings are the characteristics of Objective type test.

1. Objective type tests are pin-pointed and have a single, fixed, definite answer.
2. Objective type test ensure perfect objectivity in scoring.
3. In objective type tests scoring does not vary from examiner to examiner.
4. Objective type test covers a wide range of content.
5. It can be scored objectively and easily.
6. The scoring will not vary from time to time or from examiner to examiner.
7. Objective type test reduces role/habit of cramming of expected questions.
8. Objective type tests have greater reliability and better content validity.
9. This type of question has greater motivational value.
10. It possesses economy of time, for it takes less time to answer than an essay test.
11. In objective type tests, comparatively many test items can be presented to students.
12. It also saves a lot of time of the scorer.
13. It eliminates extraneous (irrelevant) factors such as speed of writing, fluency of
expression, literary style, good handwriting, neatness, etc.
14. It measures the higher mental processes of understanding, application, analysis,
prediction and interpretation.
15. It permits stencil, machine or clerical scoring. Thus scoring is very easy.
Limitations of Objective Type Test:
1. In objective type tests, abilities such as the ability to organize matter and the ability to present
matter logically cannot be evaluated.
2. In Objective type test guessing is possible.
3. No doubt the chances of success may be reduced by the presence of a large number of
items.
4. If a respondent marks all responses as correct, the result may be misleading.
5. Construction of the objective test items is difficult while answering them is quite easy.
6. They demand more of analysis than synthesis.
7. Linguistic/language ability of the testee is not at all tested.
8. The printing cost is considerably greater than that of an essay test.
Guidelines/Rules for Constructing Objective Type Test Items:
1. Item writer should have a thorough understanding of the subject matter;
2. Item writer should have a thorough understanding of the pupils tested;
3. Each item must be clearly expressed i.e. there must be precision/accuracy in
writing the test items.
4. Objective type test must be for important facts and knowledge and not
for unimportant details.
5. Avoid ambiguous statements.
6. Each item should be subjected to one and only one interpretation.
7. Quantitative rather than qualitative words should be used.
8. Words such as few, many, low, high, large, etc. are vague, indefinite, and,
therefore, should be avoided.
9. Use good grammar and sentence structure to improve clarity.
10. Avoid lifting statements exactly from the textbook.
11. The use of textbook language in a test encourages a pupil to memorize rather
than to understand the subject matter.
12. Avoid negative questions whenever possible.
13. An unselective use of the negative should be avoided. It takes more time to
answer.
14. Directions to questions should be specific.
15. Ambiguous wording and double negatives should be avoided in questions.
Scoring Test:
Scoring the test is very important, because on the basis of scoring examiner and
administration have to make decisions about the candidates. So, following points should
be considered while scoring the test.
1. Developing a proper scoring scheme.
2. Score all answers papers of one question at one time before going to the next one.
3. Suggest the best possible answer and give the proper weightage to various points.
4. Let the final grade be the average of the scores given by two scorers to a particular test (see the sketch after this list).
5. Write comments to the examinee's responses, and mention errors on their papers.
6. While scoring apply uniform standards to all the papers.
7. Recheck script to avoid any mistakes in scoring.
8. Score the answer sheet without looking at the students’ name.
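A minimal sketch of point 4 above, combining two independent scorers' marks into a final grade; the paper names and marks are assumed for illustration.

```python
# A minimal sketch of point 4 above: the final mark for each answer paper is
# the average of two independent scorers' marks. Names and marks are assumed.

scorer_a = {"Paper 1": 14, "Paper 2": 9, "Paper 3": 17}
scorer_b = {"Paper 1": 16, "Paper 2": 11, "Paper 3": 15}

for paper in scorer_a:
    final = (scorer_a[paper] + scorer_b[paper]) / 2
    print(f"{paper}: final mark {final} out of 20")
```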
Essays Type Test
In classroom testing, essay type tests are popularly used; they are used especially intensively in
higher education. Learning outcomes concerning the ability to recall, organize and integrate ideas
and the ability to express oneself in writing can be measured with the help of essay type tests.
Essay type tests are those tests in which examinees are asked to discuss, enumerate,
compare, state, evaluate, analyze, or summarize a specific topic in writing. In essay
type tests students have to write their own ideas. Essay questions provide a complex prompt
that requires written responses, which can vary in length from a couple of paragraphs to
many pages.
Essay questions are supply or constructed response type questions and can be the
best way to measure students' higher order thinking skills, such as applying,
organizing, synthesizing, integrating, evaluating, or projecting, while at the same
time providing a measure of writing skills. The student has to formulate and write
a response, which may be detailed and lengthy. The accuracy and quality of the
response are judged by the teacher.

Essay questions provide a complex prompt that requires written responses, which
can vary in length from a couple of paragraphs to many pages. Like short answer
questions, they provide students with an opportunity to explain their understanding
and demonstrate creativity, but make it hard for students to arrive at an acceptable
answer by bluffing. They can be constructed reasonably quickly and easily, but
marking these questions can be time-consuming and grade agreement can be
difficult.

Essay questions differ from short answer questions in that the essay questions are
less structured. This openness allows students to demonstrate that they can integrate
the course material in creative ways. As a result, essays are a favoured approach to
test higher levels of cognition including analysis, synthesis and evaluation.
However, the requirement that the students provide most of the structure increases
the amount of work required to respond effectively. Students often take longer
to compose a five-paragraph essay than they would take to compose a paragraph
answer to a short answer question.

Essay items can vary from very lengthy, open-ended, end-of-semester term papers
or take-home tests that have flexible page limits (e.g. 10-12 pages, no more than 30
pages, etc.) to essays with responses limited or restricted to one page or less. Essay
questions are used both as formative assessments (in classrooms) and summative
assessments (on standardized tests). There are two major categories of essay questions:
short response (also referred to as restricted or brief) and extended response.

1. Restricted Response: more consistent scoring; outlines parameters of responses.
2. Extended Response Essay Items: synthesis and evaluation levels; a lot of freedom in answers.
Restricted Response Essay Items
An essay item that poses a specific problem for which a student must recall proper
information, organize it in a suitable manner, derive a defensible conclusion, and
express it within the limits of posed problem, or within a page or time limit, is called
a restricted response essay type item. The statement of the problem specifies
response limitations that guide the student in responding and provide evaluation
criteria for scoring.

Example 1:
List the major similarities and differences in the lives of people living in Islamabad
and Faisalabad.
Example 2:
Compare advantages and disadvantages of lecture teaching method and
demonstration teaching method.

When Should Restricted Response Essay Items be used?

Restricted response essay items are usually used to:
• analyze relationships
• compare and contrast positions
• state necessary assumptions
• identify appropriate conclusions
• explain cause-effect relationships
• organize data to support a viewpoint
• evaluate the quality and worth of an item or action
• integrate data from several sources

Extended Response Essay Type Items


An essay type item that allows the student to determine the length and complexity
of response is called an extended-response essay item. This type of essay is most
useful at the synthesis or evaluation levels of the cognitive domain. When we are interested
in determining whether students can organize, integrate, express, and evaluate
information, ideas, or pieces of knowledge, the extended response items are used.

Example:
Identify as many different ways to generate electricity in Pakistan as you can. Give
advantages and disadvantages of each. Your response will be graded on its
accuracy, comprehensiveness and practicality. Your response should be 8-10 pages
in length and it will be evaluated according to the RUBRIC (scoring criteria)
already provided.
Overall, essay type items (both restricted response and extended response) are:

Good for:
• application, synthesis and evaluation levels

Types:
• Extended response: synthesis and evaluation levels; a lot of freedom in answers
• Restricted response: more consistent scoring; outlines parameters of responses

Advantages:
• Students are less likely to guess
• Easy to construct
• Stimulates more study
• Allows students to demonstrate the ability to organize knowledge, express
opinions, and show originality

Disadvantages:
• Can limit the amount of material tested, therefore has decreased validity
• Subjective, potentially unreliable scoring
• Time consuming to score

Tips for Writing Good Essay Items:

• Provide reasonable time limits for thinking and writing.
• Avoid giving students a choice of questions (you won't get a good idea of the
breadth of student achievement when they answer only a subset of the questions).
• Give a definite task to the student: compare, analyze, evaluate, etc.
• Use a checklist point system to score with a model answer: write an outline and
determine how many points to assign to each part.
• Score one question at a time on all papers before moving to the next.
Chapter No: 04
Assessment Techniques in the Affective and Psychomotor Domains: Observation, Self-Report,
Questionnaire, Interview, Rating Scale, Anecdotal Record, Checklist, Peer Appraisal
Observation
• Observation is the process of closely observing or monitoring something or someone.
• Observation is the act of noticing something or a judgment.
• Observation is watching what people do.
There are different types of observation
1. Controlled Observations
2. Natural Observations
3. Participant Observations
Controlled Observation
Controlled observation, or structured observation, is observation in which the observer
decides where the observation will take place, at what time, with which participants, and in
what circumstances, and uses a standardized procedure. The examiner systematically
classifies the behavior they observe into different categories. Coding might involve
numbers or letters to describe a characteristic, or the use of a scale to measure behavior
intensity/amount.
Naturalistic Observation
Naturalistic observation (i.e. unstructured observation) involves observing the
spontaneous/natural behavior of participants in natural surroundings. The observer simply
records what he sees. Compared with controlled/structured methods it is like the difference
between studying wild animals in a zoo and studying them in their natural environment.
Participant Observation
Participant observation is a variant of the above (naturalistic observation), but here the
observer joins in and becomes part of the group that is being observed in order to get a deeper
insight into their lives. If it were research on animals, we would now not only be studying
them in their natural habitat but be living alongside them as well!
Recording of Data of observation
With all observations an important decision the observer has to make is how to classify and
record the data. Usually this will involve a method of sampling. The three main sampling
methods are:
1. Event sampling. The observer decides in advance what types of behavior (events)
he is interested in and records all occurrences. All other types of behavior are
ignored.
2. Time sampling. The observer decides in advance that observation will take place
only during specified time periods (e.g. 10 minutes every hour, 1 hour per day) and
records the occurrence of the specified behavior during that period only.
3. Instantaneous (target time) sampling. The observer decides in advance the pre-
selected moments when observation will take place and records what is happening
at that instant. Everything happening before or after is ignored.
Self-Report
A self-report is a type of assessment in which respondents read the questions and select
responses by themselves, reporting on things such as personality traits, moods, thoughts, attitudes, preferences, and
behaviors, without the researcher interfering. A self-report is a technique which involves asking
the participants about their feelings, attitudes, beliefs and so on. Self-reports are often used as
a way of gaining participants' responses for different assessment procedures.
Self-report is a comparatively simple way to collect data from many people quickly and at low
cost. Self-report data can be collected in various ways such as through Questionnaires,
Internet, interview in person or over the telephone. Through self report researchers can collect
data regarding behaviors that cannot be observed directly.
Questionnaire: A questionnaire is a set of questions, usually in a highly structured written
form. Questionnaires can contain both open questions and closed questions, and
participants record their responses on their own.
Closed questions are questions which provide a limited choice especially if the answer must
be taken from a predetermined list. Such questions provide quantitative data, which is easy to
analyze. However these questions do not allow the participant to give in-depth insights.
An open-ended question asks the respondent to formulate his own answer. Open questions
are those questions which invite the respondent to provide answers in their own words and
provide qualitative data. Although these types of questions are more difficult to analyze, they
can produce more in-depth responses and tell the researcher what the participant actually
thinks, rather than being restricted by categories.
There are following types of questionnaires:
Computer questionnaire. Respondents are asked to answer a questionnaire which is sent to
them electronically, e.g. by e-mail. The advantages of computer questionnaires include their low
cost, the time that can be saved, and the fact that respondents do not feel pressured and can
therefore answer when they have time, giving more accurate answers. However, the main
shortcoming of computer questionnaires is that sometimes respondents do not bother answering
them and can simply ignore the questionnaire.
Telephone questionnaire. The researcher may choose to call potential respondents with the aim
of getting them to answer the questionnaire. The advantage of the telephone questionnaire is
that it can be completed in a short amount of time. The main disadvantage of the phone
questionnaire is that it is usually expensive. Moreover, most people do not feel comfortable
answering many questions over the phone, and it is difficult to get a sample group to answer a
questionnaire over the phone.
In-house survey. This type of questionnaire involves the researcher visiting respondents in
their houses or workplaces. The advantage of the in-house survey is that more focus on the
questions can be gained from respondents. However, in-house surveys also have a range of
disadvantages, which include being time-consuming and more expensive, and respondents may
not wish to have the researcher in their houses or workplaces for various reasons.
Mail Questionnaire. This sort of questionnaire involves the researcher sending the
questionnaire to respondents through the post, often with a pre-paid envelope attached. Mail
questionnaires have the advantage of providing more accurate answers, because respondents can
answer the questionnaire in their spare time. The disadvantages associated with mail
questionnaires include being expensive and time-consuming, and sometimes they simply end up
in the bin, discarded by respondents.
Multiple choice question– respondents are offered a set of answers they have to choose from.
Dichotomous Questions. This type of question within a questionnaire gives the respondent two
options, yes or no, to choose from, and is the easiest form of questionnaire for the
respondent to answer.
Scaling Questions. Also referred to as ranking questions, they ask respondents to rank the
available answers to the questions on a scale covering a given range of values
(for example from 1 to 10).
Rating Scale
One of the most common rating scales is the Likert scale. A statement is used and the
participant decides how strongly they agree or disagree with the statements. For example the
participant decides "strongly agree", "agree", "undecided", "disagree", and "strongly
disagree". One strength of Likert scales is that they can give an idea about how strongly a
participant feels about something. This therefore gives more detail than a simple yes no
answer. Another strength is that the data are quantitative, which are easy to analyse
statistically. However, there is a tendency with Likert scales for people to respond towards the
middle of the scale, perhaps to make them look less extreme. As with any questionnaire,
participants may provide the answers that they feel they should. Moreover, because the data
are quantitative, they do not provide in-depth replies.
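Purely as an illustration, the sketch below shows one way Likert-scale responses could be converted to numbers and summarised; the 1-to-5 coding, the item wording and the sample responses are assumptions, not part of the original text.

```python
# Map the five Likert response labels to numeric scores (assumed 1-5 coding).
LIKERT_SCORES = {
    "strongly disagree": 1,
    "disagree": 2,
    "undecided": 3,
    "agree": 4,
    "strongly agree": 5,
}

def summarize_likert(responses):
    """Convert response labels to scores and return the mean rating."""
    scores = [LIKERT_SCORES[r.lower()] for r in responses]
    return sum(scores) / len(scores)

# Hypothetical responses to the statement "I enjoy mathematics lessons."
responses = ["agree", "strongly agree", "undecided", "agree", "disagree"]
print(round(summarize_likert(responses), 2))  # 3.6, i.e. slightly above "undecided"
```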
Interview: An interview is a technique for collecting information orally from others in a
face-to-face situation. It is a two-way technique for interchanging ideas and information. In an
interview the interviewer puts questions to the interviewee and obtains information verbally.
The word "interview" refers to a one-on-one conversation with one person acting in the role
of the interviewer and the other in the role of the interviewee.
There are several types of interview. The more you know about the style of the interview, the
better you can prepare.
The Telephone Interview
Often companies request an initial telephone interview before inviting you in for a face to face
meeting in order to get a better understanding of the type of candidate you are. The one benefit of
this is that you can have your notes out in front of you. You should do just as much preparation as
you would for a face to face interview, and remember that your first impression is vital. Some
people are better meeting in person than on the phone, so make sure that you speak confidently,
with good pace and try to answer all the questions that are asked.
The Face-to-Face Interview
This can be a meeting between you and one member of staff or even two members.
The Panel Interview
These interviews involve a number of people sitting as a panel with one as chairperson. This
type of interview is popular within the public sector.
The Group Interview
Several candidates are present at this type of interview. You will be asked to interact with each
other by usually a group discussion. You might even be given a task to do as a team, so make
sure you speak up and give your opinion.
The Sequential Interview
These are several interviews in turn with a different interviewer each time. Usually, each
interviewer asks questions to test different sets of competencies. However, if you are asked
the same questions, just make sure you answer each one as fully as the previous time.
The Lunch / Dinner Interview
This type of interview gives the employer a chance to assess your communication and
interpersonal skills as well as your table manners! So make sure you order wisely (no spaghetti
Bolognese) and make sure you don’t spill your drink (non-alcoholic of course!).
All these types of interviews can take on different question formats, so once you’ve checked
with your potential employer which type of interview you’ll be attending, get preparing!
Competency Based Interviews
These are structured to reflect the competencies the employer is seeking for the particular job.
These will usually be detailed in the job description.
Formal / Informal Interviews
Some interviews may be very formal, others may be very informal and seem like just a chat
about your interests. However, it is important to remember that you are still being assessed,
and topics should be friendly and clean!
Portfolio Based Interviews
In the design / digital or communications industry it is likely that you will be asked to take your
portfolio along or show it online. Make sure all your work is up to date and that you include
neither too little nor too much. Make sure that your images, if in print, are big enough for the
interviewer to see properly, and always test your online portfolio on all Internet browsers
before turning up.
The Second Interview
You’ve passed the first interview and you’ve had the call to arrange the second. Congratulations!
But what else is there to prepare for? You did as much as you could for the first interview! Now
is the time to look back and review. You may be asked the same questions you were asked before,
so review them and brush up your answers. Review your research about the company; take a look
at the ‘About Us’ section on their website, get to know their client base, search the latest news on
the company and find out what the company is talking about.
General Interview Preparation
Here’s a list of questions that you should consider your answers for when preparing…
• Why do you want this job?
• Why are you the best person for the job?
• What relevant experience do you have?
• Why are you interested in working for this company?
• What can you contribute to this company?
• What do you know about this company?
• What challenges are you looking for in a position?
• Why do you want to work for this company?
• Why should we hire you?
• What are your salary requirements?
Other interview formats you may encounter include the structured interview, the unstructured
interview, and the working interview.
What is an anecdotal record?
An anecdotal record is like a short story that educators use to record a significant incident that
they have observed. Anecdotal records are usually relatively short and may contain
descriptions of behaviors and direct quotes.
Why use anecdotal records?
Anecdotal records are easy to use and quick to write, so they are the most popular form of
record that educators use. Anecdotal records allow educators to record a child’s specific
behavior or the conversation between two children. These details can help educators plan
activities, experiences and interventions. Because anecdotal records can be written after the
fact, an educator can complete them during a break or while supervising other activities.
How do I write an anecdotal record?
Anecdotal records are written after the fact, so use the past tense when writing them. Being
positive and objective, and using descriptive language are also important things to keep in
mind when writing your anecdotal records. Remember that anecdotal records are like short
stories; so be sure to have a beginning, a middle and an end for each anecdote.
Checklist
A checklist is a list of items you need to verify, check or inspect. Checklists are used in every
field, from building inspections to complex medical surgeries to education. Using a checklist
ensures that you do not forget any important steps that you have to check.
A checklist is a list of all the things that you need to do, information that you want to find out,
or things that you need to take somewhere, which you make in order to ensure that you do not
forget anything. In short, it is a list of items, facts, names, etc., to be checked.
Why we use checklist:
It is easy for us to forget things and recovery is usually more complex than getting it right the
first time. A simple tool that helps to prevent these mistakes is the checklist. A checklist is
simply a list of the required things. There are seven benefits to using a checklist:
1. Organization: Checklists can help us stay more organized by assuring we don't skip any
steps in a process. They are easy to use and effective.
2. Motivation: Checklists motivate us to take action and complete tasks. Since checklists can
make us more successful, a virtuous circle develops in which we are motivated to
accomplish more because of the positive results.
3. Productivity: By having a checklist you can complete dull tasks more quickly and
efficiently, and with fewer mistakes. You become more productive and accomplish more each
day.
4. Creativity: Checklists allow you to master the boring tasks and devote more brain power
to creative activities. Since a checklist means fewer mistakes and less stress, you not only
have more time to be creative, you also have the ability to think more clearly.
5. Delegation: By breaking work down into specific tasks, checklists give us more confidence
when delegating or assigning activities. When we are more confident that tasks will be done
correctly, we delegate more and become significantly more productive.
6. Saving lives: Checklists can literally save lives. When the U.S. Army Air Corps introduced the
B-17 bomber, an experienced aviator crashed the plane during a demonstration flight. After this
tragedy the Army required that pilots use a checklist before taking off. The same type of checklist
is what we see pilots use today to help avoid crashes. Checklists also reduce deaths in hospitals:
when checklists were implemented for use by surgical teams, deaths dropped by about 40 percent.
Similar results have been seen when checklists are required for doctors inserting central lines
into their patients.
7. Excellence: Checklists allow us to be more effective at taking care of customers. By helping
to ensure that we provide superior customer service, checklists help us achieve excellence in the
eyes of the customer. Excellence is a differentiator that improves brand equity.
Using checklists ensures that you won’t forget anything. So, if you do something again and
again, and want to do it right every time, use a checklist. Checklists free up mental RAM.
Peer Appraisal
Peer appraisals are assessments of employees/workers conducted by colleagues in their direct
working environment, i.e. the people the employee interacts with regularly. Peer appraisals are a
form of performance appraisal designed to monitor and improve job performance. The peer
appraisal process draws on insight and knowledge – workers have their ‘ear to the ground’ and
are often in the best position to appraise a colleague’s performance.
Peer appraisal is a type of feedback system in the performance appraisal process. The system
is designed to monitor and improve the job performance. It is usually done by colleagues who
are a part of the same team. This type of appraisal system excludes supervisors or managers.
Description: As a part of the appraisal process, an employee is assessed based on the
feedback given by his/her colleagues or people within his/her close working environment.
This feedback is secret. A typical peer appraisal does not take feedback from superiors. It
is meant to monitor and improve job performance.
Why should one do peer appraisal?
Employees can assess the skills of their co-workers much more clearly than management
because they work together. It helps in team-building. People understand that opinions of their
colleagues are important and one must build relationships. Since people trust their co-
workers, they consider the feedback to be constructive. It makes the process of skill
improvement public and accountable.
Peer Appraisal is a variation of 360 degree feedback in the performance appraisal
process. Peers and teammates provide a unique perspective on performance. Peers
provide insight into an individual’s interpersonal interactions and skills. Peer appraisal is
commonly used as part of the performance appraisal process.
Chapter No: 05
Test Appraisals
Qualities of a Good Test:
A good test should have the following qualities:
1- Validity:
It means that it measures what it is supposed to measure. It tests what it ought to test.
• A test is said to be valid if it measures what it intends to measure.
• There are different types of validity:
1. Operational validity
2. Predictive validity
3. Content validity
4. Construct validity
Operational Validity
– A test will have operational validity if the tasks required by the test are satisfactory to
evaluate the definite activities or qualities.
Predictive Validity
– A test has predictive validity if scores on it predict future performance
Content Validity
– If the items in the test constitute a representative sample of the total course content to be
tested, the test can be said to have content validity.
Construct Validity
– Construct validity involves explaining the test scores psychologically. A test is
interpreted in terms of numerous research findings.
2- Reliability:
If the test is taken again by the same students under the same conditions, the scores will be almost
the same, provided that the time between the test and the retest is of reasonable length. If it is
given twice to the same students under the same circumstances and produces almost the same
results, the test is said to provide consistency in measuring the items being evaluated. This is
called the reliability of a test.
• Reliability of a test refers to the consistency with which it measures what it is intended to measure.
• A test with high validity has to be reliable as well.
• A valid test is also a reliable test, but a reliable test may or may not be a valid one.
Different methods for determining reliability (a small computational sketch follows these methods):
Test-retest method
A test is administered to the same group after a short interval. The scores are tabulated
and the correlation between the two administrations is calculated. The higher the correlation,
the greater the reliability.
Split-half method
The scores on the odd and even items are taken and the correlation between the two sets
of scores is determined.
Parallel form method
Reliability is determined using two equivalent forms of the same test content.
– These prepared tests are administered to the same group one after the other.
– The test forms should be identical with respect to the number of items, content,
difficulty level, etc.
– The correlation between the two sets of scores obtained by the group on the two tests is determined.
– The higher the correlation, the greater the reliability.
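For illustration only, here is a minimal Python sketch of the test-retest idea: it computes the Pearson correlation between scores from two administrations of the same test. The score lists are hypothetical; a real analysis would normally use a statistics package.

```python
import statistics

def pearson_correlation(x, y):
    """Pearson correlation coefficient between two score lists of equal length."""
    mean_x, mean_y = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical scores of the same six students on the test and on the retest.
first_administration = [45, 38, 50, 29, 41, 33]
second_administration = [47, 36, 49, 31, 40, 35]

r = pearson_correlation(first_administration, second_administration)
print(f"test-retest reliability estimate: {r:.2f}")  # closer to 1.0 means more reliable
```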
Discriminating Power
• Discriminating power of the test is its power to discriminate between the upper and
lower groups who took the test.
• The test should contain different difficulty level of questions.
3- Practical:
It is easy to be conducted, easy to score without wasting too much time or effort and easy
to interpret.
4- Comprehensive:
It covers all the items that have been taught or studied. It includes items from different areas
of the material assigned for the test so as to check accurately the amount of students’
knowledge.
5- Relevant:
It measures reasonably well the achievement of the desired objectives.
6- Balanced:
It tests linguistic as well as communicative competence and it reflects the real command of the
language. It tests also appropriateness and accuracy.
7- Appropriate in difficulty:
It is neither too hard nor too easy. Questions should be progressive in difficulty to reduce stress
and tension
8- Clear:
Questions and instructions should be clear. Pupils should know what to do exactly.
9- Authentic:
The language of the test should reflect everyday dialogue.
10- Appropriate for time:
A good test should be appropriate in length for the allotted time.
11- Objective:
If it is marked by different teachers, the score will be the same. The marking process should not
be affected by the teacher’s personality. Questions and answers are so clear and definite that the
marker gives each student the score he/she deserves.
12- Economical:
It makes the best use of the teacher’s limited time for preparing and grading and it makes the
best use of the pupil’s assigned time for answering all items.
13-Useful
A good test should be useful. What defines or constitutes a useful test? Well, this would be a
balancing of a number of factors including:
▪ Length – a shorter test is generally preferred
▪ Time – a test that takes less time is generally preferred
▪ Low cost – speaks for itself
▪ Easy to administer
▪ Easy to score
▪ Differentiates between candidates – a test is of little value if all the applicants obtain
the same score
▪ Adequate test manual – provides a test manual offering adequate information
and documentation
▪ Professionalism – is produced by test developers possessing high levels of expertise
14-Objectivity
• A test is said to be objective if it is free from personal biases in interpreting its scope as
well as in scoring the responses.
• The objectivity of a test can be increased by using more objective-type test items and by
scoring the answers according to the model answers provided.
15-Comprehensiveness
• The test should cover the whole syllabus.
• Due importance should be given to all the relevant learning material.
• The test should cover all the expected objectives.
16-Simplicity:
Simplicity means that the test should be written in clear, correct and simple language. It is
important to keep the method of testing as simple as possible while still testing the skill you
intend to test. (Avoid ambiguous questions and ambiguous instructions.)
Test Item Bank
Preparing test items is a very useful task in the testing process. A test item bank is a collection
of many test items. Every type of test item is included in the item bank, and items of different
difficulty levels are included. A test item bank helps us select different test items according to
the needs and demands of a particular test. Because of the item bank we can save time, and
there is no need to construct items urgently, since a large stock of items is already available.
Items are placed in the bank only after proper item analysis and after their validity and
reliability have been ensured.
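Purely as an illustration, here is a tiny Python sketch of an item bank from which items can be selected by difficulty level; the items, fields and difficulty labels are all hypothetical.

```python
# A very small, hypothetical item bank: each item stores its text, type and difficulty.
item_bank = [
    {"id": 1, "text": "2 + 2 = ?",                         "type": "MCQ",          "difficulty": "easy"},
    {"id": 2, "text": "Define reliability.",               "type": "short answer", "difficulty": "medium"},
    {"id": 3, "text": "Compare validity and reliability.", "type": "essay",        "difficulty": "hard"},
    {"id": 4, "text": "5 x 6 = ?",                         "type": "MCQ",          "difficulty": "easy"},
]

def select_items(bank, difficulty, how_many):
    """Pick the first `how_many` items of the requested difficulty level."""
    matching = [item for item in bank if item["difficulty"] == difficulty]
    return matching[:how_many]

# Build a short quiz needing two easy items and one hard item.
quiz = select_items(item_bank, "easy", 2) + select_items(item_bank, "hard", 1)
print([item["id"] for item in quiz])  # [1, 4, 3]
```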
Item analysis:
Item analysis is a procedure that determines the effectiveness of each item in a test. It is a
mathematical approach to assessing an item's utility. It provides statistics of performance for
each and every type of item in a test, and it provides information concerning how well each item
in the test functioned. Item analysis is the backbone of test development; test construction is
fruitful only through a well-thought-out, careful, and systematic process of item analysis.
Item analysis raises and answers at least three questions about each item in a test:
a. How difficult is the item?
b. Does the item discriminate between the good and poor student?
c. How effective is each distracter in the item?
According to Termazi (1984):
"Item analysis procedure provides the difficulty level of item, the discrimination power of the
item and effectiveness of each distracter. Item analysis information tell us if an item was too
easy or too hard, how well it discriminated between high and low scorer on the test, and
whether all the distracters functioned as intended."
Procedure of Item Analysis:
The procedure of item analysis consists of two stages:
1. The first stage consists of four steps.
2. The second stage uses an item analysis working sheet.
First Stage:
R. L. Ebel and D. A. Frisbie (1991) suggested the following steps for the procedure of item analysis.
Step No. 1. After scoring the test, arrange the test papers in order from the highest score to the
lowest score.
Step No. 2. Select 27% (or 25%) of the total papers with the highest scores and call them the
high-scoring group.
Step No. 3. Select the same proportion (27% or 25%) of the total papers with the lowest scores
and call them the low-scoring group.
Note: The middle group of papers is not needed in item analysis. For example, if we had 40
students' papers, we would select 10 papers for the high-scoring group and 10 papers for the
low-scoring group.
Step No. 4. Tabulate the responses of students in both the high- and low-scoring groups on each
test item, as shown in Table 3.1 (item analysis tabulation sheet). A small computational sketch of
the usual difficulty and discrimination indices follows these steps.
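The following Python sketch is illustrative only. It computes the two indices most commonly derived from the high- and low-scoring groups: the difficulty index p = (H + L) / (2n) and the discrimination index D = (H - L) / n, where H and L are the numbers of correct answers in the high and low groups and n is the size of one group. The figures shown are hypothetical.

```python
def item_difficulty(high_correct, low_correct, group_size):
    """Proportion of the combined high and low groups answering the item correctly."""
    return (high_correct + low_correct) / (2 * group_size)

def item_discrimination(high_correct, low_correct, group_size):
    """Difference between the proportions correct in the high and low groups."""
    return (high_correct - low_correct) / group_size

# Hypothetical results for one item: 10 papers in each group (from 40 papers in total).
group_size = 10
high_correct = 8   # correct answers among the high-scoring group
low_correct = 3    # correct answers among the low-scoring group

p = item_difficulty(high_correct, low_correct, group_size)      # 0.55 -> moderate difficulty
d = item_discrimination(high_correct, low_correct, group_size)  # 0.50 -> discriminates well
print(f"difficulty index = {p:.2f}, discrimination index = {d:.2f}")
```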
Grading and Reporting Systems
Topics
1- Functions of Grading and Reporting Systems
2- Assigning Letter Grades
3- Relative Versus Absolute Grading
4- Record Keeping and Grading Software
5- Use of Feedback
Topic-1
FUNCTIONS OF GRADING AND REPORTING SYSTEMS
1- Functions of Grading and Reporting Systems
Grading and reporting systems serve the following functions:
1- Instructional uses
2- Report to parents/guardians
3- Administrative and guidance uses
Instructional Uses
The main focus of grading and reporting systems is the improvement of students’ learning.
This occurs when the report:
A- Clarifies the instructional objectives
B- Indicates the students’ strengths and weaknesses
C- Provides information regarding personal-social development
D- Contributes to the students’ motivation
❖ Day-to-day assessment of learning and the feedback it provides can improve students’ learning.
❖ Periodic progress reports influence students’ motivation.
❖ A well-designed progress report can be helpful in evaluating instructional procedures.
Report to Parents/Guardians
• The primary function of grading and reporting systems is to inform parents about the progress
of their children.
• What the objectives were and how well they were achieved can be learnt from these reports.
By having this information:
• Parents are better able to cooperate with the school.
• They can give emotional support and encouragement to children.
• They can help them in making educational and vocational plans.
To serve these purposes, reports should contain as much information as parents can understand.
Administrative and Guidance Uses
• Grading and reporting systems can also be used for:
I- Promotion
II- Awarding honors
III- Reporting to other schools and prospective employers
• For most administrative purposes a single letter grade is used.
• Counselors use these reports to help students make realistic educational plans and to help
with adjustment problems.
For serving these purposes the reports must be comprehensive and detailed.
Topic-2
ASSIGNING LETTER GRADES
2- Assigning Letter Grades
• Schools use the A, B, C, D, F grading system. While assigning grades, teachers come across
the following questions:
1- What should be included in a letter grade?
2- How should achievement data be combined in assigning a letter grade?
3- What frame of reference should be used in grading?
Determining what to include in a grade
• When letter grades represent only achievement, they are considered most meaningful and useful.
• If we include effort, the amount of work completed, personal conduct, etc., their interpretation
becomes confused and they lose their meaningfulness.
• For example, a letter grade of “B” may represent average achievement with outstanding effort
and excellent conduct, or high achievement with little effort.
• Only by using letter grades for achievement alone, and reporting other aspects separately, can
we improve our description of student learning and development.
Combining data and assigning grades
When we include aspects of achievement such as tests, written reports, or performance ratings
in a letter grade, and decide how much emphasis to give each aspect, the next step is to combine
the various elements so that each element receives its intended weight.
For example, if we decide that the final examination should count 40%, the midterm 30%, and
laboratory performance 20%, we will want our course grade to reflect these emphases.
A typical procedure is to combine the elements into a composite score by assigning the
appropriate weight to each element and then to use these composite scores as the basis for
grading; a small sketch of this calculation follows.
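For illustration only, here is a minimal Python sketch of combining weighted elements into a composite score; the weights are those of the example above (normalised so they sum to 1) and the student's marks are hypothetical.

```python
def composite_score(marks, weights):
    """Weighted composite of percentage marks; weights are normalized so they sum to 1."""
    total_weight = sum(weights.values())
    return sum(weights[part] * marks[part] for part in weights) / total_weight

# Weights from the example in the text (final 40%, midterm 30%, laboratory 20%);
# normalizing keeps the composite on the 0-100 scale even if the weights do not sum to 100.
weights = {"final exam": 40, "midterm": 30, "laboratory": 20}

# Hypothetical marks (as percentages) for one student.
marks = {"final exam": 72, "midterm": 80, "laboratory": 90}

print(round(composite_score(marks, weights), 1))  # 78.7
```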
Selecting the proper frame of reference for grading
One of the following frames of reference is used for assigning letter grades:
1- Performance in relation to other group members (relative grading)
2- Performance in relation to specified standards (absolute grading)
1- Assigning grades on a relative basis involves comparing a student’s performance with that of
a reference group, typically classmates. In this system a student’s grade is determined by his or
her relative ranking in the group rather than by some absolute standard of achievement.
• Both the student’s performance and the performance of the group influence the grade, since
grading is based on relative performance.
2- Assigning grades on an absolute basis involves comparing a student’s performance to a
specified standard set by the teacher or school.
• These standards may be concerned with the degree of mastery to be achieved by the students
and may specify (a) a task to be performed (e.g. type 40 words per minute without error) or
(b) the percentage of correct answers to be obtained on a test designed to measure a clearly
defined set of learning tasks. In this type of grading, if all students demonstrate a high level of
mastery, all will receive high grades.
Topic-3
RELATIVE VERSUS ABSOLUTE GRADING
3- Relative Versus Absolute Grading
Relative Grading
• Assigning a letter grade on the basis of a student’s rank in the group.
• Before assigning grades, the proportion of As, Bs, Cs, Ds, and Fs is determined.
• Grading on the basis of the normal curve results in equal percentages of As and Fs, and of Bs
and Ds.
A suggested distribution of grades is shown below, followed by a small sketch of rank-based grading.
Suggested Distribution of Grading
A = 10% to 20% of the students
B = 20% to 30% of the students
C = 30% to 50% of the students
D = 10% to 20% of the students
F = 0% to 10% of the students
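As an illustration only, the following Python sketch assigns letter grades by rank using one possible split consistent with the suggested distribution above (15% A, 25% B, 40% C, 15% D, 5% F); the student names, scores and exact cut-offs are assumptions.

```python
GRADE_SHARES = [("A", 0.15), ("B", 0.25), ("C", 0.40), ("D", 0.15), ("F", 0.05)]  # assumed split

def relative_grades(scores, shares=GRADE_SHARES):
    """Assign letter grades by rank: the top 15% get A, the next 25% get B, and so on."""
    ranked = sorted(scores, key=scores.get, reverse=True)  # highest score first
    grades, start = {}, 0
    for grade, share in shares:
        count = round(share * len(ranked))
        for name in ranked[start:start + count]:
            grades[name] = grade
        start += count
    for name in ranked[start:]:  # anyone left over by rounding gets the lowest grade
        grades[name] = shares[-1][0]
    return grades

# Hypothetical class scores (out of 100).
scores = {"Ali": 92, "Sara": 85, "Omar": 78, "Hina": 74, "Bilal": 69,
          "Zara": 66, "Usman": 60, "Nida": 55, "Asad": 48, "Mahnoor": 40}
print(relative_grades(scores))  # the top-ranked students receive the higher grades
```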

Absolute Grading
• Letter grades in an absolute system may be defined as the degree to which the objectives have
been achieved.
• A = Outstanding. The student has mastered the major and minor instructional goals.
• B = Very good. The student has mastered all the major instructional goals and most of the
minor ones.
• C = Satisfactory. The student has mastered all the major goals but just a few of the minor ones.
• D = Very weak. The student has mastered just a few of the major and minor instructional goals;
remedial work would be desirable.
• F = Unsatisfactory. The student has not mastered any of the major instructional goals and lacks
the essentials needed for the next highest level of instruction; remedial work is needed.
Scores in terms of percentage of correct answers (a small sketch using these cut-offs follows this list)
A = 95% to 100% correct
B = 85% to 94% correct
C = 75% to 84% correct
D = 65% to 74% correct
F = below 65% correct
• The distribution of grades is not predetermined in an absolute grading system. All students can
receive high grades if they demonstrate a high level of mastery.
• A comprehensive report includes a checklist of objectives to inform students and parents which
objectives have been mastered and which have not.
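For illustration only, a minimal Python function that maps a percentage of correct answers to a letter grade using the cut-offs listed above; the sample percentages are hypothetical.

```python
def absolute_grade(percent_correct):
    """Map a percentage of correct answers to a letter grade using fixed cut-offs."""
    if percent_correct >= 95:
        return "A"
    elif percent_correct >= 85:
        return "B"
    elif percent_correct >= 75:
        return "C"
    elif percent_correct >= 65:
        return "D"
    return "F"

# Hypothetical student percentages; every student who reaches a cut-off gets that grade,
# so the distribution of grades is not fixed in advance.
for pct in [98, 88, 76, 70, 50]:
    print(pct, absolute_grade(pct))
```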
Topic-4
RECORD KEEPING AND GRADING SOFTWARE
4- Record Keeping and Grading Software
• Specialized software is available to facilitate the common tasks of recording and combining grades.
• Most computer gradebook software is based on an underlying spreadsheet design.
• The software may have templates to aid in data entry and simple procedures for specifying rules
for combining grades from several sources, such as homework, tests, and projects.
• The software provides various options for printing, reporting, and summarizing results.
• Sometimes this type of software is linked to software designed to perform other functions, such
as test construction, test administration, or keeping attendance.
• Existing software is constantly being updated, so a search of the Internet is a good way to bring
such a list up to date.
Topic-5
USE OF FEEDBACK
5- Use of Feedback
• Feedback can serve a number of purposes and take a number of forms. It can be provided as a
single entity or as a combination of multiple entities.
Informal Feedback
• Informal feedback can occur at any time, as it is something that emerges spontaneously in the
moment or during action. Therefore informal feedback requires building rapport with students
in order to effectively encourage, coach or guide them in daily management and decision-making
for learning. This might occur in the classroom, over the phone, in an online forum or in a
virtual classroom.
Formal Feedback
• Formal feedback is planned and systematically scheduled into the process. Usually associated
with assessment tasks, formal feedback includes the likes of marking criteria, competencies or
achievement of standards, and is recorded for both the student and the organization as evidence.
Formative Feedback
• The goal of formative assessment is to monitor student learning and to provide feedback that
improves it. Such feedback can also be used by teachers to improve their teaching. It helps
students to improve and prevents them from making the same mistakes again.
Summative Feedback
• The goal of summative assessment is to evaluate student learning at the end of an instructional
unit by comparing it against some standard or benchmark. Therefore summative feedback consists
of detailed comments that relate to specific aspects of the student’s work, clearly explains how the
mark was derived from the criteria provided, and adds constructive comments on how the work
could be improved.
Student Peer Feedback
• With basic instruction and ongoing support, students can learn to give quality feedback that is
highly valued by their peers. Providing students with opportunities to give and receive peer
feedback enriches their learning experiences and develops their professional skills.
Student Self Feedback
• When providing feedback, teachers not only give direction to students but also teach them,
through modeling and instruction, the skills of self-assessment and goal setting, leading them to
become independent (Sackstein, 2017). To help students reach autonomy, teachers can identify,
share, and clarify learning goals.
Constructive Feedback
• This type of feedback is specific, issue-focused and based on observation. It has four types:
1- Negative feedback: corrective comments about past behavior. It focuses on behavior that was
not successful and should not be repeated.
2- Positive feedback: affirming comments about past behavior. It focuses on behavior that was
successful and should be continued.
3- Negative feed-forward: corrective comments about future performance. It focuses on behavior
that should be avoided in the future.
4- Positive feed-forward: affirming comments about future behavior. It focuses on behavior that
will improve performance in the future.
Administration of Classroom Tests and Assessments
Guiding principle
“All students must be given a fair chance to demonstrate their achievement of the learning
outcome being measured.”
Things that create test anxiety
• Warning students to do their best because the test is important
• Telling students to work fast in order to finish on time
• Threatening dire consequences if they fail
Suggestions for administration
1- Avoid unnecessary talk at the beginning of a test
• Students are mentally set for the test, and unnecessary talk
• may influence their recall of information,
• may increase test anxiety, or
• may create hostility toward the teacher.
2- Avoid interruptions
• You may hang a “Do Not Disturb – Testing” sign outside the door.
3- Do not give hints to students
• If an item is ambiguous, clarify it for the whole class.
• Helping favourite students decreases the validity of the results and lowers class morale.
4- Discourage cheating
• Careful supervision and special seating arrangements can discourage cheating.
• Students’ performance must be based on their own efforts.
Steps to prevent cheating
1- Take special precautions to keep the test secure during preparation, storage and administration.
2- Have students clear the tops of their desks.
3- If scratch paper is used, have it turned in with the test.
4- Walk around the room and observe how the students are doing (careful supervision).
5- Use special seating arrangements, if possible.
6- Use two forms of the test and give a different form to each row of students (use the same test
but simply rearrange the order of the items for the second form).
7- Prepare tests that students will view as relevant, fair and useful.
8- Create and maintain a positive attitude concerning the value of the test for improving learning.