
UNIT 4 – ASSESSMENT DURING TEACHING

Assessment Strategies & Tips You Can Use Every Day


1. Open-ended questions: Ask an open-ended question that gets students writing/talking.
2. Ask students to reflect: During the last five minutes of class, ask students to reflect on the
lesson and write down what they’ve learned. Then, ask them to consider how they would
apply this concept or skill in a practical setting.
3. Ask students to summarize: Have students summarize or paraphrase important concepts
and lessons. This can be done orally, visually, or otherwise.
4. Hand signals: Hand signals can be used to rate or indicate students’ understanding of
content. Students can show anywhere from five fingers to signal maximum understanding to
one finger to signal minimal understanding. This strategy requires engagement by all students
and allows the teacher to check for understanding within a large group.
5. Response cards: Index cards, signs, whiteboards, magnetic boards, or other items are
simultaneously held up by all students in class to indicate their response to a question or
problem presented by the teacher. Using response devices, the teacher can easily note the
responses of individual students while teaching the whole group.
6. Four corners: A quick and easy snapshot of student understanding, Four Corners provides
an opportunity for student movement while permitting the teacher to monitor and assess
understanding.
7. Think-pair-share: Students take a few minutes to think about the question or prompt. Next,
they pair with a designated partner to compare thoughts before sharing with the whole class.
8. Choral reading: Students mark text to identify a particular concept and chime in, reading
the marked text aloud in unison with the teacher. This strategy helps students develop
fluency; differentiate between the reading of statements and questions; and practice phrasing,
pacing, and reading dialogue.
9. One-question quiz: Ask a single focused question with a specific goal that can be answered
within a minute or two. You can quickly scan the written responses to assess student
understanding.
10. Socratic seminar: Students ask questions of one another about an essential question, topic,
or selected text. The questions initiate a conversation that continues with a series of responses
and additional questions. Students learn to formulate questions that address issues to facilitate
their own discussion and arrive at a new understanding.
11. 3-2-1: Students consider what they have learned by responding to the following prompt at
the end of the lesson: 3) three things they learned from your lesson; 2) two things they want
to know more about; and 1) one question they still have. The prompt stimulates student
reflection on the lesson and helps to process the learning.
12. Ticket out the door: Students write in response to a specific prompt for a short period of
time. Teachers collect their responses as a “ticket out the door” to check for students’
understanding of a concept taught. This exercise quickly generates multiple ideas that could
be turned into longer pieces of writing at a later time.
13. Formative pencil–paper assessment:
Students respond individually to short, pencil–paper formative assessments of skills and
knowledge taught in the lesson. Teachers may elect to have students self-correct. The teacher
collects assessment results to monitor individual student progress and to inform future
instruction.
14. Peer instruction: Perhaps the most accurate way to check for understanding is to have one
student try to teach another student what she’s learned. If she can do that successfully, it’s
clear she understood your lesson.

DESIGNING A TEST ITEM

The first step of the test cycle is designing your test. Here you formulate your learning
objectives, your purposes of testing, and you make your test plan to check if your assessment
program is in line with your learning objectives and your teaching activities.
FORMULATING GOOD LEARNING OBJECTIVES
Objectives refer to learning outcomes: statements of what a learner is expected to know,
understand and/or be able to do and demonstrate after completion of a process of learning.
Outcomes should be observable and measurable.
DECIDE ON THE PURPOSE OF TESTING
Before we think about how we should assess, what methods we will use or what we have to
do, it is important to consider why we test in the first place. What is our purpose when we
test?

There are many purposes for testing, but here we will make a distinction between testing as a
means of process evaluation and testing as a means of grading/judgement as an
end-evaluation.

The difference between formative testing and summative testing can be explained like this:


Formative assessments are in-process evaluations of student learning that are typically
administered multiple times during a course/module. The general purpose of formative
assessment is to give lecturers real-time feedback about what students are learning or not
learning so that instructional approaches, teaching materials, and academic support can be
modified accordingly. Formative assessments are usually not graded, and they may take a
variety of forms, from more formal quizzes and assignments to informal questioning
techniques and discussions with students.

Summative assessments are used to evaluate student learning at the conclusion of a specific
instructional period—typically at the end of a course/module. Summative assessments are
graded tests, assignments, or projects that are used to determine whether students have
learned what they were expected to learn during the defined instructional period.

To give a concrete example of the difference between formative and summative testing, think
about a cook in a restaurant:
When the cook (or his colleague or assistant) tastes the soup, that’s formative; when the
guests taste the soup, that’s summative.

INTERACTIVE ASSESSMENT TOOLS 

You use interactive assessment tools to get insight into students’ understanding of the
material. For example, during your lecture you can check at what level students have
mastered the material, or see whether there are any misconceptions.

Choosing the right testing method

In order to ensure that your learning design is sound, your learning outcomes or objectives
should be in line with the assessment that you are using to test for the achievement of those
outcomes. In addition, both learning outcomes and assessment should be aligned with the
teaching method. Biggs refers to this as “constructive alignment” (Biggs, 1999). We can
imagine the relationship between these three concepts as forming a triangle; consequently it
is often referred to as the “instructional triangle of learning designs”.
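As a concrete (if simplified) illustration of constructive alignment, the following Python
sketch checks that each learning outcome maps to at least one teaching activity and one
assessment; all the outcomes, activities, and assessments are invented examples, not a
prescribed format:

# Toy representation of constructive alignment: every learning outcome
# should be covered by at least one teaching activity and one assessment.
alignment = {
    "explain the difference between formative and summative assessment": {
        "teaching":   ["lecture", "think-pair-share"],
        "assessment": ["one-question quiz"],
    },
    "design a test plan for a module": {
        "teaching":   ["worked example", "group exercise"],
        "assessment": [],  # misaligned: no assessment covers this outcome
    },
}

for outcome, parts in alignment.items():
    missing = [part for part, items in parts.items() if not items]
    status = "aligned" if not missing else "missing " + ", ".join(missing)
    print(f"{outcome}: {status}")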

The appropriate methods for testing …


• provide students with adequate opportunity to demonstrate that they have achieved the
learning goals
• provide evidence that students have achieved the goals
• assess and grade students in a reliable way

What to take into consideration when choosing an assessment method?


• suitability/your experience
• purpose (summative vs formative; motivating students; monitoring progress)
• practicality and efficiency (workload; is it, for instance, feasible given the costs, the
time available for teachers and the rooms you have available? Is it too risky to do in
real life?)
• program – vision on education; policies on testing; Examination Rules
DIFFERENT TESTING METHODS
There are many different testing methods which you can use, as long as they fit with your
learning objectives and your teaching methods. In the following articles you can find more
information about different testing methods.
• Useful overview of possible assessment methods related to objectives.
• Classroom assessment techniques
TEST PLAN
A test plan is also called an assessment plan, assessment scheme or test scheme; different
words for the same thing.

A test plan helps with making sure your test is valid: it helps you make sure that you test
what you want to test. It provides an overview of all tests involved in your course/module in
relation to the learning outcomes/objectives of the course/module.

A test plan provides you with a blueprint, a way to…

• ensure that proper emphasis is given according to the importance of each of the
objectives.
• show the match between what should be learned and what is tested.
• ensure the test is representative of all that should be learned.
• ensure that we test at the intended level (Bloom).
• help make sure that tests for a specific unit are similar (each year, re-sit).
• help you to construct suitable items.

There are many formats which you can use. This is an example of a test plan for a course
with the most basic information in it: learning outcomes, all the tests with test methods, the
weight per test and (if there are any) special conditions.

This is another example:

Module part            Learning goals part   Relation to overall    Testing method(s)
                                             module goals
1) Project             1.1) …                (1) (4) (6) *          A) Individual reflection
                       1.2) …                                       B) Group product
                                                                    C) Individual presentation
2) History and         2.1) …                (1) (5)                Written test, open
   theories of …       2.2) …                                       questions
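To illustrate how the weight per test in such a plan translates into a final mark, here is a
minimal sketch in Python; the test names echo the example above, but the weights and
grades are invented:

# Each test in the plan contributes grade * weight to the final mark.
test_plan = [
    # (test, weight, grade on a 1-10 scale) -- invented values
    ("Group product",         0.30, 7.5),
    ("Individual reflection", 0.20, 8.0),
    ("Written test",          0.50, 6.4),
]

assert abs(sum(weight for _, weight, _ in test_plan) - 1.0) < 1e-9, "weights must sum to 1"

final = sum(weight * grade for _, weight, grade in test_plan)
print(f"final grade: {final:.1f}")  # 0.30*7.5 + 0.20*8.0 + 0.50*6.4 = 7.05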

TEST SPECIFICATION TABLE (Blueprint)

A test specification table helps to ensure that there is a match between what should be learned
(objectives), what is taught and what is tested. It also ensures transparency (for yourself but
also for colleagues) and repeatability.
A test specification table zooms in on an individual (written) test. For assignments like a
report or presentation, a test specification table isn’t necessary.

Lecturers cannot measure every topic or objective and cannot ask every question they might
wish to ask. A test specification table allows you to construct a test which focuses on the key
areas and weights those different areas based on their importance. A test specification
table provides you with evidence that a test has content and construct validity.

There are many formats possible for a test specification table. For each individual question,
you indicate the learning objective(s) it relates to, the question format (open, closed, essay,
etc.), the score/number of points per question, and the weight of each learning objective
and/or question. You can also add the book/lesson material belonging to the question; that
way you immediately know whether you need to adjust your written exam if you change
anything in your lesson material. However, this is not obligatory.
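One practical way to work with such a table is to keep it as structured data, so that coverage
and weights can be checked automatically. Below is a minimal sketch in Python; the
objectives, question formats, and point values are invented for illustration:

# A test specification table as data: each question is linked to one
# learning objective, a question format, and a number of points.
blueprint = [
    # (question, learning objective, format, points) -- invented values
    (1, "LO1", "closed", 10),
    (2, "LO1", "open",   15),
    (3, "LO2", "open",   25),
    (4, "LO3", "essay",  50),
]
objectives = {"LO1", "LO2", "LO3"}

# Check that every objective is tested by at least one question.
covered = {objective for _, objective, _, _ in blueprint}
assert covered == objectives, f"untested objectives: {objectives - covered}"

# Report each objective's weight as its share of the total points.
total_points = sum(points for _, _, _, points in blueprint)
for objective in sorted(objectives):
    points = sum(p for _, o, _, p in blueprint if o == objective)
    print(f"{objective}: {points / total_points:.0%} of the test")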

CHARACTERISTICS OF A GOOD TEST ITEM

The idea for this primer series germinated from a simple question: “Could you do an article
looking at the validity of tests used in public safety assessment?” As my forgiving readership
already knows, I have trouble containing my thoughts to a single entry. So, as I began to
frame out how I would respond to the question of the validity of public safety assessments,
the amount of material I wanted to cover started to grow exponentially. At some point, I
decided it would be best to start from the beginning with a series of primers on topics related
to validity, building up to an answer to the question of “what is the validity of public safety
assessments.”

So now this blog will be the first in a series looking at this question. Over a series of articles
aimed to inform, but also intended to keep things simple, I will cover:

1. What are the characteristics of a good test?
2. What are some authoritative references human resource and assessment professionals
can rely upon in evaluating the worthiness of tests?
3. What is validity?
4. Are public safety assessments good tests and are they valid?
This first article in the primer series deals with the question of what is a good test. A good
test can be defined as one that is:

• Reliable
• Valid
• Practical
• Socially Sensitive
• Candidate Friendly
Briefly and simply, I will review the meaning of each of these characteristics.
Reliable
Reliability refers to the accuracy of the obtained test score or to how close the obtained scores
for individuals are to what would be their “true” score, if we could ever know their true score.
Thus, reliability is the lack of measurement error: the less measurement error, the better. The
reliability coefficient, similar to a correlation coefficient, is used as the indicator of the
reliability of a test. The reliability coefficient can range from 0 to 1, and the closer to 1 the
better. Generally, experts tend to look for a reliability coefficient in excess of .70. However,
many tests used in public safety screening are what are referred to as multi-dimensional.
Interpreting the meaning of a reliability coefficient for a knowledge test based on a variety of
sources requires a great deal of experience, and even experts are often fooled or offer
incorrect interpretations. There are a number of types of reliability, but the type usually
reported is internal consistency or coefficient alpha. All things being equal, one should look
for an assessment with strong evidence of reliability, where information is offered on the
degree of confidence you can have in the reported test score.
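To illustrate what “coefficient alpha” computes, here is a minimal sketch in Python of
Cronbach’s alpha on an invented score matrix; this is a toy illustration, and a real analysis
would use a vetted psychometrics package:

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for an (n_test_takers, n_items) score matrix."""
    k = scores.shape[1]                          # number of items
    item_variances = scores.var(axis=0, ddof=1)  # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented data: five test-takers answering four items scored 0-3.
scores = np.array([
    [3, 2, 3, 3],
    [2, 2, 2, 1],
    [1, 0, 1, 1],
    [3, 3, 2, 3],
    [0, 1, 0, 1],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # look for values above .70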

Valid
Validity will be the topic of our third primer in the series. In the selection context, the term
“validity” refers to whether there is an expectation that scores on the test have a demonstrable
relationship to job performance, or other important job-related criteria. Validity may also be
used interchangeably with related terms such as “job related” or “business necessity.” For
now, we will state that there are a number of ways of evaluating validity including:

• Content
• Criterion-related
• Construct
• Transfer or transportability
• Validity generalization
A good test will offer extensive documentation of the validity of the test.

Practical
A good test should be practical. What defines or constitutes a practical test? Well, this would
be a balancing of a number of factors including:

• Length – a shorter test is generally preferred
• Time – a test that takes less time is generally preferred
• Low cost – speaks for itself
• Easy to administer
• Easy to score
• Differentiates between candidates – a test is of little value if all the applicants obtain
the same score
• Adequate test manual – comes with a test manual offering adequate information and
documentation
• Professionalism – is produced by test developers possessing high levels of expertise
The issue of the practicality of a test is a subjective judgment, which will be impacted by the
constraints facing the public-sector jurisdiction. A test that may be practical for a large city
with 10,000 applicants and a large budget may not be practical for a small town with 10
applicants and a minuscule testing budget.

Socially Sensitive
A consideration of the social implications and effects of the use of a test is critical in the
public sector, especially for high-stakes jobs such as public safety occupations. The public
safety assessment professional must be considerate of and responsive to multiple groups of
stakeholders. In addition, in evaluating a test, it is critical that attention be given to:

• Avoiding adverse impact – Recent events have highlighted the importance of balance
in the demographics of safety force personnel. Adverse impact refers to differences in
the passing rates on exams between males and females, or minorities and majority
group members. Tests should be designed with an eye toward the minimization of
adverse impact (a small sketch of such a check follows this list). Adverse impact is a
complicated topic; I addressed it in greater depth in previous blog posts here and here.
• Universal Testing – The concept behind universal testing is that your exams should be
able to be taken by the most diverse set of applicants possible, including those with
disabilities and by those who speak other languages. Having a truly universal test is a
difficult, if not impossible, standard to meet. However, organizations should strive to
ensure that testing locations and environments are compatible with the needs of as wide
a variety of individuals as possible. In addition, organizations should have in place
committees and procedures for dealing with requests for accommodations.
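As referenced in the adverse-impact bullet above, here is a minimal sketch in Python of one
common heuristic, the four-fifths (80%) rule; the group names and applicant counts are
invented, and real adverse-impact analysis involves far more than this single check:

# Four-fifths rule of thumb: a group whose selection rate falls below
# 80% of the highest group's rate is flagged for review.
groups = {
    # group: (number passing, number tested) -- invented counts
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {group: passed / tested for group, (passed, tested) in groups.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")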

Candidate Friendly
