
I was recently working with a Year 8 student revising the idea of highest common factors. When asked to list the factors of a number such as 64, he did so but in a random and unorganised way. His answers were 8, 1, 2 and 64. I told him that this was wrong as he had missed some factors (correct) and I informed him the correct answer was 1, 64, 2, 32, 4, 16, and 8 (direct). I then pointed out the process he used (random) and a better process (an organised list). By organising my list of factors in pairs (e.g. 1, 64) and starting from 1 before checking each number after that, I made sure I didn’t miss any factors. I then showed him how to use the organised list strategy to find the factors of 48 before letting him continue with practice questions of his own. Not only will using the organised list strategy help him to list all the factors of other numbers, it will also help him with a wide range of other maths problems.
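
A minimal sketch of that organised-list strategy, written out in Python for readers who want the procedure made explicit (the function name factors_in_pairs and the printed examples are illustrative additions, not part of the anecdote): try each candidate divisor from 1 upwards, record it together with its partner n // d, and stop once d * d exceeds n, so nothing is missed and nothing is repeated.

def factors_in_pairs(n):
    """Return the factors of n as an ordered list of (small, large) pairs."""
    pairs = []
    d = 1
    while d * d <= n:          # checking up to the square root covers every pair
        if n % d == 0:
            pairs.append((d, n // d))
        d += 1
    return pairs

print(factors_in_pairs(64))  # [(1, 64), (2, 32), (4, 16), (8, 8)]
print(factors_in_pairs(48))  # [(1, 48), (2, 24), (3, 16), (4, 12), (6, 8)]

Reading the pairs left to right gives exactly the organised list described above: 1, 64, 2, 32, 4, 16 and 8 for the number 64.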

“The simplest prescription for improving education must be dollops of feedback.”
— John Hattie

“Feedback functions formatively only if the information fed back to the learner is used by the learner to improve performance.”
— Dylan Wiliam, 'Embedded Formative Assessment'

Also, before you ask for a resubmission, why not throw in a written-response question at the end, asking them how they incorporated your feedback? That encourages them not only to look at your feedback again but also to compare their work against the success criteria and the steps the feedback outlined for reaching them.

What kind of feedback is best?


In a nutshell: achievable next steps that learners can put into action right away and that cause thinking (as opposed to an emotional reaction). This requires, of course, that you know exactly:

1. what the learner already knows or can do (elicited knowledge)
2. what she doesn't know yet (the stated learning goals)
3. how to break down the steps to get her there (an action plan)
4. how to deliver this information in an accessible way

Don't just point out what's wrong; give short, helpful tips and specific examples for improvement, or, as Wiliam calls it, a 'recipe for future action': a series of activities that get learners from the current state to the goal state.

“Ask a student to tell you what they think you are trying to say to them”
— Professor John Hattie

It is not an evaluation of good and bad but an exploration of what helps and hinders learning and why.
In all, feedback gives everyone the chance to slow down, to breathe, to make sense of where they’ve
been, how they got there, where they should go next, and the best ways to get there together – a
decision made with students, rather than for them. (Rodgers, 2006, p. 219)

Descriptive feedback should be specific and constructive, focused on individual improvement and progress; it should recognize students’ effort, point out opportunities for improvement, and encourage students to view mistakes as a part of learning. When students receive this type of feedback, they will know why they have made the mistakes they have made and will be able to improve their learning (Tunstall & Gibbs, 1996).

Feedback in the form of words can be very motivational. After a score of 7 out of 10 has been put on a
small assignment, there is not much more that can be said. If however, teachers indicate one or two
strengths and one or two weaknesses, they have the basis for discussions with individual students to
help them improve their work. The basic principle at work here is that words open up communication,
whereas numbers close it down – prematurely at that. (O’Conner, 2002, p. 116)
For Student 1, Robert wrote, "This is correct, but explain why you divided—what are you looking to find?
Your explanations are improving—continue to include every piece of data in the explanation." He noticed
and named one strategy (including data in the explanation) that the student had been working on and did
successfully, and gave one suggestion for improvement (provide a rationale for using division). Both of
these would help the student make his reasoning more transparent to a reader, and would also help with
the state test expectations for explaining reasoning.
For Student 2, this teacher wrote next to d = rt, "Good use of the formula!" Next to the explanation, he
wrote, "62 __ ? Please refer to the question to display the units! Good explanation!" He noticed and
named one specific strength (use of the formula) and made one general comment (good explanation) and
one specific suggestion for improvement (specify the units).

According to Marzano, Pickering, and Pollock, feedback needs to be “corrective” in nature, timely, specific to a criterion, and involve the student.

A mismatch between what should be tested and what is tested


Many of the objections raised to existing written tests during the early stages of
RME – described in detail in Chapter 1 – are now again being heard.
The widespread complaint heard nowadays concerns the mismatch between
what the existing tests measure and the altered goals of and approach to mathematics
education (Romberg, Zarinnia, and Collis, 1990; Resnick and Resnick, 1992; Romberg
and Wilson, 1992). According to Joffe (1992), the weakest point often lies in
the test content. Most written tests concentrate solely on simple skills while ignoring
higher-order thinking (Grouws and Meier, 1992), and such tests cannot provide
complete information about students’ structures of knowledge (Webb, 1992). According
to Resnick and Resnick (1992) the existing tests are characterized by an assessment
of isolated and context-independent skills. In their opinion, this is due to
two assumptions: that of ‘decomposability’ and that of ‘decontextualization’; both
of these assumptions, however, are more suited to a ‘routinized curriculum’ than to
the ‘thinking curriculum’ currently advocated. Resnick and Resnick thus emphatically
object to tests that consist of a large number of short problems demanding
quick, unreflective answers. In other words, the general criticism of the existing tests
is that they do not measure the depth of students’ thinking (Stenmark, 1989) or – as
stated in the ‘Assessment Standards’ (NCTM/ASWG, 1994) – that the traditional
written work does not provide the students any opportunity to show their underlying
thought processes and the way in which they make links between mathematical concepts
and skills. This is especially the case with the younger children. In these same
‘Assessment Standards’, it is stated that observing young students while they work
can reveal qualities of thinking not tapped by written or oral activities. According to
Cross and Hynes (1994), disabled students make up another group that is particularly
penalized by traditional paper-and-pencil tests.
More specific criticism of multiple-choice tests – which is the most restricted
form of assessment (Joffe, 1990) – is that the students are not asked to construct a
solution and that such tests only reflect recognition (Feinberg, 1990, cited by Doig
and Masters, 1992; Mehrens, 1992; Schwarz, 1992). Consequently, these tests do
not measure the same cognitive skills as in a free-response form (Frederiksen, 1984,
cited by Romberg, Zarinnia, and Collis, 1990). Moreover, according to Schwarz
(1992), multiple-choice tests transmit the message that all problems can be reduced
to a selection among four alternatives, that mathematics problems necessarily have
answers, that the answers can be stated briefly, and that correct answers are unique.
4.1.1b No information on strategies
In both multiple-choice and short-answer tests, the process aspect remains out of
sight (MSEB, 1993a). Consequently, the tests cannot provide much information on
the various strategies that students employ when solving problems (Ginsburg et al.,
1992). The tests do not reveal, for instance, how an incorrect answer came about.
This lacuna, of course, causes certain repercussions. It is no wonder that the ‘Assessment
Standards’ (NCTM/ASWG, 1994) caution that evidence acquired exclusively
from short-answer and multiple-choice problems may lead to inappropriate inferences.
For this reason, Joffe (1990, p. 158) wonders:
“...what kind of teaching would be guided by the results of tests which assess only the
things accessible by timed multiple-choice tests.”
The strongest criticism of written assessment is voiced by the socio-constructivists.
Yackel, Cobb, and Wood (1992) stress that children’s progress cannot be sensibly
judged by looking at their answers to a page of mathematics problems, but requires,
instead, repeated face to face interactions. Mousley, Clements, and Ellerton (1992;
see also Clements, 1980) also find that the diagnosis of individual understanding necessitates
informal and frequent dialogue between teachers and children, whereby
the teacher can offer feedback and assistance. In this respect, these authors agree
with Joffe (1990), who views this kind of ‘all-or-nothing’ situation as one of the
shortcomings of traditional assessment: the students either can or cannot answer a
question. By contrast, in the interactive mode – such as in an interview situation –
students can be questioned about their methods and solutions.
