Assessment 1


Desired Characteristics of Criteria for Classroom Rubrics

The criteria are:
• Appropriate: Each criterion represents an aspect of a standard, curricular goal, or instructional goal or objective that students are intended to learn.
• Definable: Each criterion has a clear, agreed-upon meaning that both students and teachers understand.
• Observable: Each criterion describes a quality in the performance that can be perceived (seen or heard, usually) by someone other than the person performing.
• Distinct from one another: Each criterion identifies a separate aspect of the learning outcomes the performance is intended to assess.
• Complete: All the criteria together describe the whole of the learning outcomes the performance is intended to assess.
• Able to support descriptions along a continuum of quality: Each criterion can be described over a range of performance levels.
The descriptions of levels of performance are:
• Descriptive: Performance is described in terms of what is observed in the work.
• Clear: Both students and teachers understand what the descriptions mean.
• Cover the whole range of performance: Performance is described from one extreme of the continuum of quality to the other for each criterion.
• Distinguish among levels: Performance descriptions are different enough from level to level that work can be categorized unambiguously. It should be possible to match examples of work to performance descriptions at each level.
• Center the target performance (acceptable, mastery, passing) at the appropriate level: The description of performance at the level expected by the standard, curriculum goal, or lesson objective is placed at the intended level on the rubric.
• Feature parallel descriptions from level to level: Performance descriptions at each level of the continuum for a given standard describe different quality levels for the same aspects of the work.
Criteria and quality levels (highest to lowest):
• Did I get my audience's attention? Creative beginning / Boring beginning / No beginning
• Did I tell what kind of book? Tells exactly what type of book it is / Not sure, not clear / Didn't mention it
• Did I tell something about the main character? Included facts about character / Slid over character / Did not tell anything about main character
• Did I mention the setting? Tells when and where the story takes place / Not sure, not clear / Didn't mention setting
• Did I tell one interesting part? Made it sound interesting; I want to buy it! / Told part and skipped on to something else / Forgot to do it
• Did I tell who might like this book? Did tell / Skipped over it / Forgot to tell
• How did I look? Hair combed, neat, clean clothes, smiled, looked up, happy / Lazy look / Just-got-out-of-bed look, head down
• How did I sound? Clear, strong, cheerful voice / No expression in voice / Difficult to understand: 6-inch voice or screeching
Tips in designing rubrics

Perhaps the most difficult challenge is to use clear, precise, and concise language. Vague terms like "creative" and "innovative" need to be avoided. If a rubric is to teach as well as evaluate, terms like these must be defined for students. Instead of these words, try words that convey ideas and describe qualities that can be readily observed. Patricia Crosby and Pamela Heinz, both seventh grade teachers (from Andrade, 2007), solved the same problem in a rubric for oral presentations by actually listing ways in which students could meet the criterion (fig. 19). This approach provides valuable information to students on how to begin a talk and avoids the need to define elusive terms like "creative".
Criterion: Gains attention of audience.
Quality levels (highest to lowest):
• Gives details of an amusing fact, a series of questions, a short demonstration, a colorful visual, or a personal reason why they picked this topic.
• Does a two-sentence introduction, then starts speech.
• Gives a one-sentence introduction, then starts speech.
• Does not attempt to gain attention of audience, just starts speech.
Specifying the levels of quality can often be very challenging as well. Spending a lot of time with the criteria helps, but in the end, what comes out is often subjective. There is a clever technique often used to define the levels of quality: it graduates the quality levels through the responses "Yes", "Yes, but", "No, but", and "No".
Criterion: Gives enough details.
Quality levels (highest to lowest):
• Yes, I put in enough details to give the reader a sense of time, place, and events.
• Yes, I put in some details, but some key details are missing.
• No, I didn't put in enough details, but I did include a few.
• No, I had almost no details.
Rubrics are scales that differentiate levels of
student performance. They contain the criteria
that must be met by the student and the
judgement process that will be used to rate how
well the student has performed. An exemplar is
an example that delineates the desired
characteristics of quality in ways students can
understand. These are important parts of the
assessment process.
Well-designed rubrics include:

• Performance dimensions that are critical to successful task completion;
• Criteria that reflect all the important outcomes of the performance task;
• A rating scale that provides a usable, easily interpreted score; and
• Criteria that reflect concrete references, in clear language understandable to students, parents, other teachers, and others.
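As a rough illustration, these components can be represented in code. The sketch below is a minimal, hypothetical data structure, not an implementation from the source; the criteria and level descriptions are abridged from the examples in this section, and the numeric scale is an assumption:

```python
# Hypothetical sketch of a rubric as a data structure: criteria,
# a rating scale, and a description at each level of each criterion.
# The numeric scale and abridged descriptions are illustrative assumptions.

RATING_SCALE = [3, 2, 1]  # a usable, easily interpreted score (high to low)

rubric = {
    "Gains attention of audience": {
        3: "Gives details of an amusing fact or a short demonstration.",
        2: "Gives a one- or two-sentence introduction, then starts speech.",
        1: "Does not attempt to gain attention, just starts speech.",
    },
    "Gives enough details": {
        3: "Enough details to give a sense of time, place, and events.",
        2: "Some details, but some key details are missing.",
        1: "Almost no details.",
    },
}


def describe(criterion, score):
    """Look up the performance description for a criterion at a given level."""
    return rubric[criterion][score]


def total_score(ratings):
    """Combine per-criterion ratings into a single, easily interpreted score."""
    return sum(ratings.values())


print(total_score({"Gains attention of audience": 3,
                   "Gives enough details": 2}))  # 5
```

Keeping one description per criterion per level mirrors the "parallel descriptions" characteristic above: each level speaks to the same aspect of the work.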
Automating Performance-Based Tests

Given the complexity of the issues that need to be addressed in designing performance-based tests, it is clear that automating the procedure is not an easy task. The set of tasks that comprise a performance-based test has to be chosen carefully in order to tackle the design issues mentioned. Moreover, automating the procedure imposes further stringent requirements on the design of the test. In this section, we summarize what we need to keep in mind while designing an automated performance-based test.
We have seen that in order to automate a performance-based test, we need to identify a set of tasks which all lead to the solution of a fairly complex problem. For the testing software to be able to determine whether a student has completed any particular task, the end of the task should be accompanied by a definite change in the system. The testing software can track this change in the system to determine whether the student has completed the task. Indeed, a similar condition applies to every aspect of the problem-solving activity that we wish to test: in each case, a set of changes in the system can indicate that the student has the desired competency.
Such tracking is used widely by computer game manufacturers, where evidence of a game player's competency is tracked by the system, and the player is taken to the next 'level' of the game.
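This kind of state tracking can be sketched in a few lines of code. The sketch below is a minimal illustration under stated assumptions: the task names, state keys, and the one-change-per-task rule are all hypothetical, not part of any real testing system:

```python
# Hypothetical sketch: detecting task completion by observing definite
# changes in system state, as an automated testing tool might do.
# Task names and state keys are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class Task:
    """A performance task, considered complete when a specific state key changes."""
    name: str
    expected_change: str


@dataclass
class CompetencyTracker:
    """Watches system-state changes and records which tasks were completed."""
    tasks: list
    state: dict = field(default_factory=dict)
    completed: set = field(default_factory=set)

    def observe(self, key, value):
        """Record a change in the system and check it against each task."""
        self.state[key] = value
        for task in self.tasks:
            if task.expected_change == key:
                self.completed.add(task.name)

    def milestones_reached(self):
        """Milestones matter even if the whole problem is never solved."""
        return sorted(self.completed)


tracker = CompetencyTracker(tasks=[
    Task("open file", "file_opened"),
    Task("apply formula", "cell_b2_value"),
])
tracker.observe("file_opened", True)
print(tracker.milestones_reached())  # ['open file']
```

The design choice here mirrors the text: the tracker never asks the student anything; it only collects evidence from changes the student's actions leave in the system.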
In summary, the following should be kept
in mind as we design a performance-based
test.
• Each performance task/problem that is used in the test should be clearly defined in terms of performance standards, not only for the end result but also for the strategies used in the various stages of the process.
• A test taker need not always end up accomplishing the task; hence it is important to identify the important milestones the test taker reaches while solving the problem.
• Having defined the possible strategies, the process, and the milestones, the selection of tasks that comprise a test should allow the design of good rubrics for scoring.
• Every aspect of the problem-solving activity that we wish to test has to lead to a set of changes in the system, so that the testing software can collect evidence of the student's competency.
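The "Yes / Yes, but / No, but / No" technique for defining quality levels can also be applied to milestone tracking. The sketch below is a hypothetical illustration: the cut-off fractions and milestone names are assumptions, not part of the source:

```python
# Hypothetical sketch: mapping the fraction of milestones a test taker
# reached onto graduated quality levels. Cut-offs are illustrative assumptions.

def rubric_level(milestones_reached, total_milestones):
    """Map the fraction of milestones reached to a graduated quality level."""
    if total_milestones == 0:
        raise ValueError("a task must define at least one milestone")
    fraction = len(milestones_reached) / total_milestones
    if fraction == 1.0:
        return "Yes"        # all milestones reached
    if fraction >= 0.5:
        return "Yes, but"   # most milestones reached, some missing
    if fraction > 0.0:
        return "No, but"    # a few milestones reached, most missing
    return "No"             # no evidence of the competency


print(rubric_level({"open file", "apply formula"}, 2))  # Yes
print(rubric_level({"open file"}, 2))                   # Yes, but
```

Scoring by milestones rather than by final success alone reflects the second point above: a test taker who does not finish the task can still show partial competency.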