The Role of Self-, Peer and Teacher Assessment in Promoting Iranian EFL Learners' Writing Performance
To cite this article: Parviz Birjandi & Nasrin Hadidi Tamjid (2012) The role of self-, peer and
teacher assessment in promoting Iranian EFL learners’ writing performance, Assessment &
Evaluation in Higher Education, 37:5, 513-533, DOI: 10.1080/02602938.2010.549204
English Department, Islamic Azad University – Science and Research Branch, Tehran, Iran
In the last decade, with the increased attention to learner-centred curricula, the
topic of self-assessment and peer assessment has become of particular interest in
testing and evaluation. The present study explores the role of self-assessment and
peer assessment in promoting the writing performance of language learners. To do
this, 157 intermediate TEFL (Teaching English as Foreign Language) students
were assigned to five different treatments in five groups: four experimental groups
and one control group. The first experimental group did journal writing as a self-
assessment technique, the second group self-assessed their own writings, the third
group employed peer assessment, and the fourth group had both self- and peer
assessment. Moreover, there was teacher assessment in all experimental groups,
except the fourth group, i.e., the self- and peer assessment group. In the control
group, there was only teacher assessment. Also, at the beginning and end of the
semester, all participants took a writing test. The design of the study was quasi-
experimental, non-randomised control group, pre-test–post-test design. The results
revealed that the second and third groups, in which the students employed self-
assessment and peer assessment together with teacher assessment, showed the
greatest improvement in writing.
Keywords: self-assessment; peer assessment; teacher assessment; journal writing;
writing performance
Introduction
The recent literature on language learning from a constructivist perspective has indi-
cated that knowledge is not attained but constructed (von Glasersfeld 1989, cited in
Kim 2005) by learners, which implies that different learners construct their own
meanings. Graves (1994, cited in Wagner and Lilly 1999) notes that the central role
of the teachers in evaluation includes helping children become part of the process.
Much of the testing teachers do in schools provides a narrow picture of what students
actually know and can do. Yet, in modern views towards teaching and learning,
learner participation in the assessment is increasingly considered to be an important
feature (Oscarson 2009), which can be considered as a justification for self- and peer
assessment.
In most educational systems today, one of the basic pedagogical principles is that
good conditions for learning are best achieved if learners are actively involved in all
phases of the educational process, which is maintained by proponents of cognitive and
constructivist theories of learning (e.g. Cobb 1994; von Glasersfeld 1995). This
current trend in learner-centred language teaching approaches has led to a greater
Literature review
It has been generally argued that theories of second-language acquisition (SLA) have
been little enriched with the research on the acquisition of writing. Cumming notes that
‘we are far from seeing models that adequately explain learning to write in a second
language or precisely how biliterate writing should be taught’ (1998, 68). Carson
(2001) also maintains that considering the intersection of SLA and L2 (second language) writing
is not an easy job. According to Carson, issues related to teaching and learning of L2
writing mainly focus on the pragmatic concerns of the classroom in which it is taught.
This is due to the fact that writing typically needs formal instruction to develop. Carson
emphasises that understanding the development of L2 in general underlies an under-
standing of the development of writing. Ellis also, emphasising that little attention has
been paid to writing in comparison with speaking, notes that ‘we know little about how
learners acquire the ability to perform acts found in decontextualised written language’
(1994, 188).
Carson (2001) argues that one way to investigate how SLA theories have
influenced models of L2 writing is to examine the issue of error and its different
perspectives in writing. On the other hand, Nelson and Kim (2001), in their research
report, propose that activity theory can be a useful framework for investigating how
students learn to write in an L2. In this theory, the sociocultural and historical
aspects of learning context are emphasised. In their report, Nelson and Kim
conclude that activity theory helps us understand the impact of sociocultural factors
on the learning of writing. Yet, Cumming (2001), investigating the studies on learn-
ing to write in second or foreign languages, found three dimensions of writing
which were the focus of attention: (a) text features, (b) the composing process, and
(c) the sociocultural context in which writing takes place. In fact, the analysis of
these three dimensions can produce different views towards instruction in L2 writ-
ing classes. With regard to text features, a number of studies have shown that L2
learners improve the complexity and accuracy of their writing (e.g. Cumming and
Mellow 1996; Ishikawa 1995; Weissberg 2000). As Cumming (2001) argues, this
means that L2 learners use more complex syntax, a larger range of vocabulary and
cohesive devices as they learn to write in an L2. The findings from the research on
text features reveal possible tendencies among L2 writers. Cumming notes that in
order to explain why and how people learn writing in L2, in addition to text
analysis, the composing process and the social context should also be taken into
consideration.
Considering the composing process, the writings of novice and expert L2 learners,
and also EFL learners’ writings in their L1 and L2 were examined. Based on the
results, it is argued that skilful L2 writers tend to do more planning (e.g. Akyel 1994;
Sasaki 2000), revising (e.g. Hall 1990; Zamel 1983) and editing (e.g. Walters and
Wolf 1996) in their texts. Moreover, comparing L1 and L2 writings of the same learn-
ers has revealed that those who are poor at writing in their L1 seem to be unable to
plan, revise or edit their L2 writing either (e.g. Clachar 1999; Sasaki 2000). It should
be mentioned that for some researchers (e.g. Hinkel 2004; Lee 2005), the processes of
L1 writing are different from those of L2, whereas others contend that there are more
similarities than differences in these two processes (e.g. Matsumoto 1995; Schoonen
et al. 2003). In their study, Mu and Carrington (2007), confirming Silva (1993), found
that L1 and L2 writing processes are different in terms of strategic, rhetorical and
linguistic aspects; however, they contend that some metacognitive, cognitive and
social/affective strategies can be transferred across languages. On the whole, it is
argued that the more proficient one gets in an L2, the more often she/he tries to plan,
revise and edit her/his writing. As these processes are mostly mental, researchers have
tried to elicit data from learners through introspective techniques such as journal
writing, think-aloud, or interviews. However, Cumming (2001) maintains that due to
the limitations in studying the composing process, for instance, the need for a highly
controlled condition for writing, the social context in which writing takes place,
should also be investigated. According to some researchers (e.g. Scollon 1990),
cultural differences can lead to problems in L2 writing, specifically in the rhetorical
organisation of writing. These cultural issues should be taken into consideration not
only in teaching and learning but also in assessment. However, the swing of the educa-
tional pendulum seems to be in favour of more learner-centred approaches towards
teaching, learning and assessment.
In modern teaching and learning, learners’ participation in assessment is increas-
ingly considered to be an important feature, not least in the area of language educa-
tion. In fact, the increasing interest in the role of self-assessment in language learning
and teaching is a logical outcome of increased interest in learner-centred teaching and
self-directed language learning (Peirce, Swain, and Hart 1993).
Theoretical framework
Chen (2008) maintains that the use of self-assessment is supported by theories of
constructivism and learner autonomy. Constructivism assumes that knowledge is
actively constructed by individuals. The constructivist perspective is based on the
view that knowledge is internal and personal to the individual. There is no such thing
as absolute knowledge. Different individuals will have different understandings of
their learning and will create their own meanings. Chen notes that aligned with this
constructivist view, through self-assessment, students actively engage in discussing
how their performance will be evaluated, reflecting on what they have achieved with
the help of peer or teacher feedback.
Another theory which is essentially constructivist is attribution theory. In this theory,
it is emphasised that ‘a person’s affective and cognitive reactions to success or failure
on an achievement task are a function of the causal attributions that are used to explain
why a particular outcome occurred’ (Whitley and Frieze 1985, cited in Oscarson 2005,
2). According to de Minzi, the theory of attribution in its wider sense refers to ‘a person’s
will to understand the causes and implications of the events he witnesses and experi-
ences’ (2004, 151). In general, all cognitive and constructivist theories have led to
increased responsibility for students and a more autonomous learning environment.
Norris (2000, cited in Nedzinskaite, Svencioniene, and Zavistanaviciene 2008) notes
that recent ‘alternative assessment’ has stressed the usefulness of a variety of innovative
testing procedures, including self-assessment, journal writing, etc.
Peer assessment, also conducted under such names as peer response and peer
review (e.g. Caulk 1994, cited in Matsuno 2009), in process-oriented instruction can
be theoretically supported by two closely related disciplines: learning theories and
rhetorical theories (Min 2005). Regarding the first one, learning theories such as
Vygotsky’s (1978) theory on language and learning seem to be in line with imple-
menting peer assessment. According to Vygotsky, learning is a cognitive activity that
takes place in social interaction. By the same token, as Min (2005) puts it, writing is
a learning activity that can be best learned through interacting with peers. Min notes
that peer review can provide opportunities for writers with different strengths,
preferred modes of expression and levels of competence, to interact positively in oral
or written communication, including questioning, providing elaborated responses, and
instructing. With regard to rhetorical theories, Bruffee (1984, cited in Min 2005)
maintains that writing evolves from the ‘conversation’ among writers in their
discourse community. Bruffee argues that the collaborative environment which is
formed in peer groups can address high-order composition issues such as focus and
idea development. Min (2005) emphasises that students should be provided with
opportunities to immerse themselves in constructive conversation about writing. A
number of researchers, such as Tsui and Ng (2000), have also recommended peer
review as a collaborative activity to secure immediate textual improvement and, in the
long run, to develop writing competence through mutual scaffolding.
Along with these theories of learning, Chen (2008) argues that self-assessment
also reflects new thinking about classroom assessment: assessment for learning. Chen
argues that in the assessment-for-learning approach, the focus is shifted from
summative to formative assessment, from making judgements to prove that students
have learned to providing feedback to help them learn. Chen notes that ‘in order to
involve everyone in monitoring the self-learning process, self-assessment is often
recommended as an assessment procedure’ (2008, 238). Finally, Lindblom-Ylänne,
Pihlajamäki, and Kotkas (2006) note that both self- and peer assessment can be
considered as learning tools, which help students develop skills required for profes-
sional responsibility, judgement and autonomy by getting involved in giving and
receiving feedback.
There seem to be different techniques for self-assessment, one of which is journal
writing (Azorin 1991; Srimavin and Darasawang 2003). Fifty years ago, journals had
no place in language learning. At that time, it was believed that language production was best
taught under controlled conditions, and free writing was limited to writing essays on
assigned topics. However, as Brown puts it: ‘Today, journals occupy a prominent role
in a pedagogical model that stresses the importance of self-reflection in the process of
students taking control of their own destiny’ (2004, 260). Marefat (2002) notes that
journal writing provides learners with opportunities to develop an awareness of their
discovery process and to engage in a process of critical thinking.
The present study was an attempt to compare the effectiveness of various assess-
ment techniques, i.e., self-assessment, peer assessment and teacher assessment, in
promoting Iranian EFL learners’ writing performance.
Research questions
In this study, the following research questions were formulated:
1. Does the introduction of continuous journal writing improve learners’ writing performance, as compared with teacher assessment?
2. Does self-assessment improve learners’ writing performance, as compared with teacher assessment?
3. Does peer assessment improve learners’ writing performance, as compared with teacher assessment?
4. Does self-assessment accompanied by peer assessment improve learners’ writing performance, as compared with teacher assessment?
Method
Subjects
The present study was conducted with 157 TEFL juniors. Most of these students were
females, in their twenties. They had already passed two writing courses, namely
Basics of Writing and Writing 1. They had a two-hour Writing 2 class every week. To
carry out this study, five intact groups were selected and randomly assigned to the
five treatment conditions. During the whole semester, one group did journal writing as a self-assessment
technique. In this group, about 12 journals were written by each of the students. The
second group self-assessed their own papers, using a rating scale; the third group had
peer assessment, using the same rating scale; the fourth group implemented both self-
assessment and peer assessment; and finally the fifth group received only teacher
assessment. In the first, second and third groups, we had teacher assessment as well.
Thus, the groups were named as follows: JW + TA (journal writing plus teacher
assessment), SA + TA (self-assessment plus teacher assessment), PA + TA (peer
assessment plus teacher assessment), SA + PA (self- and peer assessment) and TA
(teacher assessment only).
All groups were tested on their writing performance at the beginning and end of
the semester. The pre-test and the post-test pivoted around the same issue and were of
the same type.
Instrumentation
A Preliminary English Test (PET 2004) was administered to all participants at the
beginning of the semester. Moreover, two teacher-made writing tests, at the beginning
and at the end of the semester, were given to assess the students’ writing performance.
The rating scale used by the teacher to evaluate the students’ writings was adapted
from Jacobs et al. (1981).
Research design
In this study, as it was not possible to randomly assign subjects into groups, five intact
groups were selected. Yet, a random procedure was used to determine the experimen-
tal and control groups. Since all groups took the same pre- and post-test, the design of
the study was quasi-experimental, non-randomised control group, pre-test–post-test
design. In fact, prior to the treatment, all the participants were given a teacher-made
writing pre-test to ensure comparability of the groups in terms of the variable in ques-
tion. Then, another teacher-made writing test was administered as a post-test to
measure the effects of the treatments. In both tests, students were supposed to write
about the given topics. It needs to be pointed out that all groups had already completed
Writing 1 in which they had studied Introduction to the Paragraph, The Narrative
Paragraph, The Descriptive Paragraph and The Expository Paragraph. In Writing 2,
the groups covered some lessons about writing essays and different kinds of essays. The
writing pre-test was a one-page paragraph, whereas in the writing post-test, the
students wrote essays.
Procedures
After teaching how to write an essay in three or four sessions, and introducing a
particular kind of essay (genre), there was an attempt to make the assessment of the
students’ own performance an integral part of the course in the experimental
groups.
revisions. Finally, they handed in their writings, which were graded by the teacher
based on the agreed marking scale. After getting feedback from the teacher, the
students compared their assessment with that of the teacher.
3. PA + TA group: In the third group, PA + TA group, the students were divided into
small groups based on the scores of the language proficiency test (PET), which was
administered at the beginning of the semester. In each group, there were students with
both high and low scores. Yet, some of the students’ preferences were also taken into
consideration; some of the students insisted on being with each other in a group, with-
out considering their scores. Again, in this group, the same procedures were followed
to decide on the writing rubric, this time as a peer-marking instrument. The students
assessed the writings of former students in order to practise applying the instru-
ment. Later, they were assigned to write a composition at home and bring it to class
in the next session. Then, in the class, and within their own peer group, they tried to
evaluate their compositions. Finally, they handed in their final version to the teacher with
a mark or comments given by the peers. Similarly, in this group also, the teacher
corrected the final copies, and gave them back to the students. Then, the students
compared teacher assessment with peer assessment.
5. TA group: The last group, TA group, was taught the same material but using the
method typical of English writing courses. In these writing classes, having taught the
main principles of essay writing, the teacher introduces a particular genre in each
session and asks the students to write an essay for the next time. Then, the students
write their essays at home, and in the following session, read them in the class. Here,
the teacher is the only one who assesses the students’ writings, and neither self-
assessment nor peer assessment is employed. The TA group also took the same profi-
ciency and writing exams at the beginning and at the end of the semester as the other
groups.
Preliminary analysis
At the beginning of the semester, a between-groups analysis of variance (ANOVA)
was run in order to check the homogeneity of the groups in terms of their English
language proficiency. To this end, the mean scores of the five groups on the language
proficiency test (PET) were compared. The result showed that the difference among
the groups was not statistically significant [F(4, 152) = 2.294, p > .05]; therefore, the
groups could be considered as homogeneous in terms of their English language
proficiency (Table 1).
Further, to check for the equality of the groups in terms of their writing perfor-
mance at the pre-intervention stage, their pre-test writing scores were analysed
using a one-way between-groups analysis of variance (ANOVA). There was not a
statistically significant difference at the p < .05 level in writing pre-test scores for
the five groups: F(4, 152) = .931, p = .448. Hence, it was assumed that all the
groups were homogeneous as far as their writing performance was concerned
(Table 2).
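The homogeneity checks above rest on a one-way between-groups ANOVA. A minimal sketch of that computation, using hypothetical PET scores in place of the study’s unreported raw data, might look like this:

```python
# One-way between-groups ANOVA, computed from scratch.
# The five score lists are hypothetical stand-ins for the groups' PET results.
from statistics import mean

groups = [
    [62, 58, 71, 65, 60, 68, 63, 59],  # JW + TA
    [64, 61, 70, 66, 58, 69, 62, 60],  # SA + TA
    [63, 60, 72, 64, 61, 67, 65, 58],  # PA + TA
    [61, 59, 69, 66, 62, 70, 64, 57],  # SA + PA
    [65, 62, 68, 63, 60, 71, 59, 61],  # TA (control)
]

grand_mean = mean(x for g in groups for x in g)
k = len(groups)                   # number of groups
n = sum(len(g) for g in groups)   # total sample size

# Between-groups and within-groups sums of squares.
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

# F = mean square between / mean square within.
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k - 1}, {n - k}) = {f_stat:.3f}")
# If F falls below the critical value (roughly 2.43 for F(4, 152) at
# alpha = .05), the groups can be treated as homogeneous.
```

With the reported F(4, 152) = 2.294 below that critical value, the authors’ conclusion of homogeneity follows the same logic.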
At the beginning of the semester, the inter-rater reliability was also calculated
using the Pearson product–moment correlation. As mentioned earlier, two
writing tests were administered at the beginning and end of the term in which the
students were asked to write compositions on familiar topics. In order to measure the
reliability of the teacher assessment, two raters assessed 50 of the students’ writing
tests, randomly selected, at the beginning of the term, using the same rubric. The
degree of association between the ratings provided by two raters was r = .93, which is
significant at the 0.01 level (Table 3).
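The inter-rater reliability figure is a Pearson product–moment correlation between the two raters’ scores. A small from-scratch sketch, with hypothetical rater scores standing in for the 50 actual ratings:

```python
# Pearson product-moment correlation between two raters' scores on the
# same writing tests. The score lists below are hypothetical.
from statistics import mean, stdev

rater1 = [72, 65, 80, 58, 90, 77, 69, 84, 61, 75]
rater2 = [70, 67, 82, 60, 88, 75, 71, 85, 59, 74]

def pearson_r(x, y):
    # r = sample covariance divided by the product of the sample SDs.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

r = pearson_r(rater1, rater2)
print(f"r = {r:.2f}")  # values near 1 indicate high inter-rater agreement
```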
Data analysis
Given the design of the study, i.e., the quasi-experimental, non-randomised control
group, pre-test–post-test design, it was found appropriate to use independent-samples
t-tests to measure the effects of the treatments at post-test. Prior to performing the
intended statistical test, a major assumption for the t-test statistic was checked, i.e.,
normality.
The normal distribution assumption of the writing scores at pre-test and post-test
was tested separately for each group. Specifically, the Kolmogorov–Smirnov test
showed that the distributions were normal. The results are summarised in Table 4.
Once it was found tenable to run the intended statistical test for the obtained data, a
series of independent-samples t-tests was performed in order for the research ques-
tions to be answered. As it was intended to run a series of t-tests, it was considered
appropriate to set a more stringent alpha level (i.e. 0.01) in order to reduce the risk of
a Type I error.
Next, in order to compare the mean scores of the pre- and post-writing tests within
each group, paired-samples t-tests were employed. Table 5 illustrates the results for
the first group, JW + TA group. As the p-value is less than .001, the difference between the
scores of the pre- and post-tests of writing performance is significant in this group.
The same procedure was repeated for other groups. The results are presented in
Table 6 for the second group, SA + TA; in Table 7 for the third group, PA + TA; in
Table 8 for the fourth group, SA + PA; and in Table 9 for the fifth group, TA.
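Each of these within-group comparisons is a paired-samples t-test on the same students’ pre- and post-test scores. A minimal sketch with hypothetical scores for one group:

```python
# Paired-samples t-test: t = mean(differences) / (SD(differences) / sqrt(n)).
# The pre- and post-test scores below are hypothetical.
from statistics import mean, stdev
from math import sqrt

pre  = [60, 55, 70, 62, 58, 66, 59, 64, 61, 57]
post = [68, 60, 74, 70, 63, 72, 66, 69, 65, 62]

diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))
print(f"t({n - 1}) = {t_stat:.2f}")
# Compare |t| with the critical value for n - 1 df (about 2.26 at
# alpha = .05, two-tailed, for 9 df); a larger |t| indicates a
# significant pre-to-post gain.
```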
As illustrated in these tables, the p-value is smaller than 0.05 in all groups,
which indicates a significant difference in the pre- and post-tests of writing perfor-
mance in all the groups. However, to examine the mean score of the groups on the
post-tests of writing performance, a one-way ANOVA was conducted. Table 10
shows the results. The values F(4, 152) = 5.86 and p < .05 show that the difference is
significant.
Later, in order to determine which groups were significantly different from the
others, a post-hoc analysis was done. Since there were five groups, there were 10
possible pairwise comparisons among the means. Scheffé tests were used for these comparisons. Table 11
illustrates the results.
Table 5. Paired-samples test (JW + TA group): writing ability, pre-test − post-test.
Mean = −5.5667, std. deviation = 7.31877, std. error mean = 1.33622, 95% CI of the
difference [−8.2995, −2.8338], t = −4.166, df = 29, Sig. (2-tailed) = .000.
Discussion
The primary objective of this study was to investigate the role of self-, peer, and
teacher assessment in promoting EFL learners’ writing performance. Regarding the
first research question, Does the introduction of continuous journal writing improve
learners’ writing performance, as compared with teacher assessment?, there was no
statistically significant difference in writing scores for the JW + TA group (M = 72.00,
SD = 13.92) and the TA group (M = 71.66, SD = 10.20), i.e., t(58) = .106, p = .916
(two-tailed). The magnitude of the difference in the means (mean difference = 0.33,
95% CI [−5.97, 6.64]) was quite small (Table 12).
Concerning the second research question, Does self-assessment improve learners’
writing performance, as compared with teacher assessment?, a statistically significant
difference was found between the SA + TA group (M = 77.67, SD = 8.55) and the TA
group (M = 71.66, SD = 10.20) in the post-test at 0.05 level, i.e., t(65) = 2.62, p < 0.05
(Table 13). The magnitude of the differences in the means (mean difference = 6.00,
95% CI [1.43, 10.58]) was quite large. The effect size, calculated using eta squared,
was moderate, i.e., .09. Expressed as a percentage, 9% of the variance in writing
performance of the experimental group can be accounted for by the treatment (SA +
TA) that they received. The significant difference in the scores between the SA + TA
group and the TA group confirms the effectiveness of self-assessment with specific
criteria in improving writing performance. This is in line with Wei (2007), who
investigated the effectiveness of self-assessment with specific criteria in improving
graduate-level ESL (English as a second language) students’ writing performance.
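The eta-squared values reported here and below can be recovered from the t statistics via the standard conversion eta² = t² / (t² + df); the paper does not state its formula, so this correspondence is an assumption checked against the reported figures:

```python
# Eta squared as an effect size for an independent-samples t-test:
# eta^2 = t^2 / (t^2 + df).
def eta_squared(t, df):
    return t ** 2 / (t ** 2 + df)

# Values reported in the paper: SA + TA vs TA (t = 2.62, df = 65) and
# PA + TA vs TA (t = 3.79, df = 59).
print(f"SA + TA: eta^2 = {eta_squared(2.62, 65):.3f}")  # close to the reported .09
print(f"PA + TA: eta^2 = {eta_squared(3.79, 59):.3f}")  # close to the reported .19
```

The small discrepancies with the reported values are consistent with the t statistics themselves being rounded.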
Table 11. Scheffé analysis of writing performance post-test for all groups.
95% confidence interval
(I) Comparison groups (J) Comparison groups Mean difference (I – J) Std. error Sig. Lower bound Upper bound
JW + TA SA + TA −5.6757 2.84415 .412 −14.5450 3.1936
PA + TA −9.6129* 2.96484 .037 −18.8586 −.3672
SA + PA 2.6897 3.01470 .939 −6.7115 12.0908
TA .3333 2.98904 1.000 −8.9878 9.6545
SA + TA JW + TA 5.6757 2.84415 .412 −3.1936 14.5450
PA + TA −3.9372 2.81871 .745 −12.7272 4.8527
SA + PA 8.3653 2.87111 .081 −.5880 17.3187
TA 6.0090 2.84415 .351 −2.8603 14.8783
PA + TA JW + TA 9.6129* 2.96484 .037 .3672 18.8586
SA + TA 3.9372 2.81871 .745 −4.8527 12.7272
SA + PA 12.3026* 2.99070 .003 2.9762 21.6289
TA 9.9462* 2.96484 .027 .7006 19.1919
SA + PA JW + TA −2.6897 3.01470 .939 −12.0908 6.7115
SA + TA −8.3653 2.87111 .081 −17.3187 .5880
PA + TA −12.3026* 2.99070 .003 −21.6289 −2.9762
TA −2.3563 3.01470 .962 −11.7575 7.0448
TA JW + TA −.3333 2.98904 1.000 −9.6545 8.9878
SA + TA −6.0090 2.84415 .351 −14.8783 2.8603
PA + TA −9.9462* 2.96484 .027 −19.1919 −.7006
SA + PA 2.3563 3.01470 .962 −7.0448 11.7575
Note: *The mean difference is significant at the .05 level.
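The Scheffé criterion behind Table 11 can be sketched as follows: a pairwise mean difference is significant when its contrast F ratio exceeds (k − 1) times the critical F. The group sizes below (31 and 30) match the reported degrees of freedom, but the within-groups mean square and the critical value are illustrative assumptions:

```python
# Scheffe post-hoc test for a pairwise contrast between two group means.
# F_contrast = (m_i - m_j)^2 / (MS_within * (1/n_i + 1/n_j)),
# significant when F_contrast > (k - 1) * F_crit(k - 1, N - k).
def scheffe_significant(mean_i, mean_j, n_i, n_j, ms_within, k, f_crit):
    f_contrast = (mean_i - mean_j) ** 2 / (ms_within * (1 / n_i + 1 / n_j))
    return f_contrast > (k - 1) * f_crit

# PA + TA (M = 81.61) vs TA (M = 71.66); MS_within = 125 is illustrative,
# and F_crit(4, 152) at alpha = .05 is roughly 2.43.
sig = scheffe_significant(81.61, 71.66, 31, 30, 125.0, k=5, f_crit=2.43)
print("PA + TA vs TA:", "significant" if sig else "not significant")

# JW + TA (M = 72.00) vs TA (M = 71.66): a near-zero difference.
sig2 = scheffe_significant(72.00, 71.66, 30, 30, 125.0, k=5, f_crit=2.43)
print("JW + TA vs TA:", "significant" if sig2 else "not significant")
```

Under these assumed values the PA + TA vs TA contrast clears the Scheffé threshold while JW + TA vs TA does not, matching the pattern of asterisks in Table 11.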
Table 12. Independent-samples test (JW + TA and TA).
Dependent variable: writing ability, post-test.
Levene’s test for equality of variances: F = 1.274, Sig. = .264.
t-test for equality of means (equal variances assumed): t = .106, df = 58, Sig. (2-tailed) = .916,
mean difference = .3333, std. error difference = 3.15184, 95% CI of the difference [−5.97576, 6.64243].
t-test for equality of means (equal variances not assumed): t = .106, df = 53.168, Sig. (2-tailed) = .916,
mean difference = .3333, std. error difference = 3.15184, 95% CI of the difference [−5.98799, 6.65466].
In this study, the experimental group outperformed the control group. Norris (2000,
cited in Nedzinskaite, Svencioniene, and Zavistanaviciene 2008) also emphasises the
usefulness of alternative assessment, including self-assessment. The results here
confirm Afflerbach and Kapinus (1993), who maintain that self-assessment helps
students to develop as readers and writers. Yeganehfar (2000) also argues that as far
as responding to the written errors of Iranian EFL students is concerned, students’
self-correction works much better than teacher correction.
Likewise, for the third research question, Does peer assessment improve learn-
ers’ writing performance, as compared with teacher assessment?, an independent-
samples t-test was conducted to compare the writing scores between the PA + TA
group and the TA group. The result showed that there was a statistically significant
difference in scores between the PA + TA group (M = 81.61, SD = 10.27) and the
TA group (M = 71.66, SD = 10.20), i.e., t(59) = 3.79, p < .001. In addition, the magni-
tude of the differences in the means (mean difference = 9.94, 95% CI [4.70, 15.19])
was noted to be considerably large (Table 14). The effect size index, based on eta
squared, was considered quite large, i.e., .19. To put it differently, 19% of the vari-
ance in writing performance of the experimental group can be explained by the treat-
ment (PA + TA) that they received. The improvement in this group (PA + TA) is in
line with Vygotsky’s (1978) theory on language learning, i.e., learning is a cognitive
activity which takes place in social interaction. In addition, the results of this group
(PA + TA) specifically confirm the arguments of researchers such as Min (2005), Bruffee (1984, cited
in Min 2005), and Tsui and Ng (2000), who maintain that writing is a collaborative
learning activity that can be best learned by interacting with peers. Moreover, the
improvement in the PA + TA group is in agreement with the results of Ganji’s (2006)
study, which revealed that the two experimental groups of self-correction and peer
correction outperformed the control group, and that between these two groups, peer
correction had a stronger effect on the writing performance of students. Ganji main-
tains that peer correction, due to the need for more cooperation and activity on the
part of the learners while analysing and discussing the errors in detail, seems
to have a more lasting effect on their writing performance. Finally, Mobaraki’s
(1995) study suggests that in writing classes, the group employing the peer response
technique outperformed the control group, in which there was only teacher correc-
tion, in terms of basic organisation and grammaticality. Mobaraki notes that by work-
ing together, peers both receive and give advice, both ask and answer questions, and
assume the role of both novice and expert, and this social interaction supports
learning.
Considering the fourth research question, Does self-assessment accompanied by
peer assessment improve learners’ writing performance, as compared with teacher
assessment?, no statistically significant difference was found in writing scores
between the SA + PA group (M = 69.31, SD = 14.54) and the TA group (M = 71.66,
SD = 10.20), i.e., t(57) = −.72, p = .47 (two-tailed). The magnitude of the differences
in the means (mean difference = −2.35, 95% CI [−8.88, 4.17]) was quite small, as
shown in Table 15. It should be noted that this minimal change in mean scores was
for the comparison group (TA group). The fact that in the fourth group (SA + PA),
there was no significant improvement in the writing performance, compared with the
control group (TA), may suggest that getting feedback from the teacher is one of the
main factors in developing writing performance, at least in the context of Iranian
EFL learners. This may be due to the fact that in our educational system, students in
general tend to rely on their teachers and shy away from direct involvement in
Table 14. Independent-samples test (PA + TA and TA).
Dependent variable: writing ability, post-test.
Levene’s test for equality of variances: F = .053, Sig. = .819.
t-test for equality of means (equal variances assumed): t = 3.793, df = 59, Sig. (2-tailed) = .000,
mean difference = 9.9462, std. error difference = 2.62200, 95% CI of the difference [4.69962, 15.19285].
t-test for equality of means (equal variances not assumed): t = 3.794, df = 58.958, Sig. (2-tailed) = .000,
mean difference = 9.9462, std. error difference = 2.62171, 95% CI of the difference [4.70014, 15.19233].
assessment. In typical writing classes in Iran, the teacher collects the papers, corrects
them and gives them back to the students. Of course, the result in this group is in line
with studies in which the role of feedback in self-assessment is emphasised (e.g.
Boston 2002; Taras 2001, 2002, 2003, cited in Oscarson 2009). Yet, it should be
mentioned that teacher feedback is not the only source students can benefit from.
When all errors are solely corrected by the teacher and the student has no role in the
whole process but is a passive reader of the teacher’s red-ink corrections, there is a
strong likelihood that the student keeps repeating the same mistakes over and over
again. This can be due to the fact that when all the work is done by the teacher, as the
absolute authority of the class, the students feel no need, or rather get no
chance to think about their mistakes. As it was shown in the present study, teacher
assessment can be most beneficial when it is accompanied by self-assessment or peer
assessment. In this way, the students get the chance to revise their own texts and
have better end results. Thus, they are not simply passive recipients of the teacher’s
feedback but active participants in the process. As illustrated in Table 16, the mean
difference in writing scores between pre-test and post-test is greatest for the second
group (SA + TA) and the third group (PA +
TA), in which self-assessment and peer assessment were employed together with
teacher assessment. Through self- and peer assessment, the students seem to be
encouraged to look critically and analytically at their own writing and take more
responsibility for what they write. Thus, it can be claimed that both self- and peer
assessment, accompanied by teacher assessment, have the potential to improve EFL
learners’ writing performance.
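The group comparisons above rest on independent-samples t-tests such as the one reported in Table 14. As a minimal sketch of how such a test is computed from group summary statistics (the study's raw scores are not reproduced here, so the group means, standard deviations and sizes below are hypothetical, and 2.001 is the approximate two-tailed critical t for df = 59):

```python
import math

def independent_t(mean1, sd1, n1, mean2, sd2, n2):
    """Pooled-variance (equal-variances-assumed) two-sample t-test.

    Returns the t statistic, the degrees of freedom and the standard
    error of the difference between the two group means.
    """
    df = n1 + n2 - 2
    # Pool the two sample variances, weighting by degrees of freedom.
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t = (mean1 - mean2) / se
    return t, df, se

# Hypothetical post-test summary statistics for an experimental group
# (e.g. PA + TA) and the control group (TA); not the study's data.
t, df, se = independent_t(mean1=75.0, sd1=10.0, n1=31,
                          mean2=65.0, sd2=10.5, n2=30)

# 95% confidence interval for the mean difference: diff +/- t_crit * se,
# where t_crit is approximately 2.001 for df = 59.
diff = 75.0 - 65.0
ci = (diff - 2.001 * se, diff + 2.001 * se)
```

A comparison is significant at the .05 level when this confidence interval excludes zero: in Table 14 the interval 4.69962 to 15.19285 does, whereas the interval for the SA + PA versus TA comparison (−8.88 to 4.17) straddles zero.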
Conclusion
As has been mentioned, the present study supports the findings of other studies
on alternative assessment, e.g. Chen (2008), who maintains that students’ abil-
ities are constructed and reconstructed while they are reflecting on what they have
achieved, with the help of their peers or teachers. As the findings of this study
support the significance of self-assessment and peer assessment in promoting learners’
writing performance, it seems beneficial to incorporate self-assessment training into
EFL classes in general, and writing classes in particular. It seems that providing the
learners with the opportunity to self-assess will help them improve their metacogni-
tion, which in turn leads to better thinking and learning. Integrating self- and peer
assessment into EFL courses will enhance learners’ involvement in learning.
Through self-/peer assessment, the students will compare their work over time,
discuss their strategies for writing papers, analyse their mistakes and judge their
progress. Moreover, the feedback from the teacher functions as a comprehensible
input and can increase the learners’ output. It is recommended that teachers who value
independent thinking and autonomous learning consider self- and peer assessment as
part of their classroom activities.
The present study focused on writing performance in general. Future research is
needed to examine how different aspects of writing performance can be influenced by
self-assessment and peer assessment. In particular, it would be valuable to examine
whether accuracy or fluency of writing is more improved using self- and peer assess-
ment techniques. In addition, the participants in this study were 157 university students;
follow-up research with larger and more varied samples would strengthen these findings.
Additionally, self-assessment and peer assessment seem to be highly advantageous
in teacher-centred classes in EFL contexts such as Iran. In fact, self- or peer assessment
seems to be a logical outcome of the increased interest in learner-centred language teach-
ing and self-directed language learning. Reviewing the literature, one comes to the
conclusion that language learning is enhanced if the learner takes the initiative in
language learning and assessment, and if the language teacher and the language
learner share responsibility for the management of language learning and assessment.
Notes on contributors
Parviz Birjandi is a full professor holding an MA in applied linguistics from Colorado State
University and a PhD in English education (minor: research methods and statistics) from the
University of Colorado. He is currently the dean of the College of Foreign Languages and
Persian Literature in the Islamic Azad University, Science and Research Branch. He has
published 15 articles in the area of TEFL, and he is the author of English textbooks for high
school and pre-university levels, used nationwide, as well as five university textbooks and
four practice textbooks.
Nasrin Hadidi Tamjid is a PhD candidate in TEFL at the Islamic Azad University/Science and
Research Branch, where she is currently completing her doctorate. She has an MA in teaching
English from the University of Tabriz. She
is a lecturer at the Islamic Azad University/Tabriz Branch and an official translator to the
justice administration. She has published papers in international journals and presented at
international conferences.
References
Afflerbach, P., and B. Kapinus. 1993. The balancing act. The Reading Teacher 47: 62–4.
Akyel, A. 1994. First language use in EFL writing: Planning in Turkish vs. planning in
English. International Journal of Applied Linguistics 4, no. 2: 169–96.
Azorín, M.J.M. 1991. Self-assessment in second language teaching. Revista Alicantina de
Estudios Ingleses 4: 91–101.
Bachman, L.F. 2000. Modern language testing at the turn of the century: Assuring that what
we count counts. Language Testing 17, no. 1: 1–42.
Boston, C. 2002. The concept of formative assessment. Practical Assessment, Research &
Evaluation 8, no. 9. http://PAREonline.net/getvn.asp?v=8&n=9 (accessed September 19,
2008).
Brindley, G. 2001. Assessment. In The Cambridge guide to teaching English to speakers of
other languages, ed. R. Carter and D. Nunan, 137–43. Cambridge: Cambridge University
Press.
Brown, H.D. 2004. Language assessment: Principles and classroom practices. New York:
Pearson Education.
Carson, J.G. 2001. Second language writing and second language acquisition. In On second
language writing, ed. T. Silva and P.K. Matsuda, 191–200. Mahwah, NJ: Lawrence
Erlbaum.
Chen, Y.M. 2008. Learning to self-assess oral performance in English: A longitudinal case
study. Language Teaching Research 12, no. 2: 235–62. http://ltr.sagepub.com/cgi/content/
abstract/12/2/235 (accessed May 8, 2008).
Clachar, A. 1999. It’s not just cognition: The effect of emotion on multiple-level discourse
processing in second-language writing. Language Sciences 21, no. 1: 31–60.
Cobb, P. 1994. Where is the mind? Constructivist and sociocultural perspectives on mathe-
matical development. Educational Researcher 23, no. 7: 13–20. http://edr.sagepub.com/
content/23/7/13.abstract (accessed August 11, 2008).
Cumming, A. 1998. Theoretical perspectives on writing. Annual Review of Applied Linguis-
tics 18: 61–78.
Cumming, A. 2001. Learning to write in a second language: Two decades of research. Inter-
national Journal of English Studies 1, no. 2: 1–23.
Cumming, A., and J.D. Mellow. 1996. An investigation into the validity of written indicators
of second language proficiency. In Validation in language testing, ed. A. Cumming and
R. Berwick, 72–93. Clevedon: Multilingual Matters.
de Minize, M.C.R. 2004. Subjective and objective causality in science: A topic for attribution
theory? Interdisciplinaria (Numero Especial): 140–59.
Ellis, R. 1994. The study of second language acquisition. Oxford: Oxford University Press.
Ganji, M. 2006. Self-correction, peer-correction, and writing performance. MA thesis,
Allameh Tabatabaii University, Tehran.
Hall, C. 1990. Managing the complexity of revising across languages. TESOL Quarterly 24,
no. 1: 43–60.
Hamayan, E.V. 1995. Approaches to alternative assessment. Annual Review of Applied
Linguistics 15: 212–26.
Hinkel, E. 2004. Teaching academic ESL writing: Practical techniques in vocabulary and
grammar. Mahwah, NJ: Lawrence Erlbaum.
Ishikawa, S. 1995. Objective measurement of low-proficiency ESL narrative writing. Journal
of Second Language Writing 4, no. 1: 51–69.
Jacobs, H.L., S.A. Zinkgraf, D.R. Wormuth, V.F. Hartfiel, and J.B. Hughey. 1981. Testing
ESL composition: A practical approach. Rowley, MA: Newbury House.
Kim, J.S. 2005. The effects of a constructivist teaching approach on student academic achieve-
ment, self-concept, and learning strategies. Asia Pacific Education Review 6, no. 1: 7–19.
Lee, S.-Y. 2005. Facilitating and inhibiting factors in English as a foreign language writing
performance: A model testing with structural equation modeling. Language Learning 55,
no. 2: 335–74.
Lindblom-Ylänne, S., H. Pihlajamäki, and T. Kotkas. 2006. Self-, peer- and teacher-assessment
of student essays. Active Learning in Higher Education 7, no. 1: 51–62.
Marefat, F. 2002. The impact of diary analysis on teaching/learning writing. RELC Journal
33, no. 1: 101–21.
Matsumoto, K. 1995. Research paper writing strategies of professional Japanese EFL writers.
TESL Canada Journal 13, no. 1: 17–27.
Matsuno, S. 2009. Self-, peer-, and teacher-assessments in Japanese university EFL writing
classrooms. Language Testing 26, no. 1: 75–100.
Min, H.T. 2005. Training students to become successful peer reviewers. System 33: 293–308.
Mobaraki, M. 1995. Improving writing through peer response technique. MA thesis, Allameh
Tabatabaii University, Tehran.
Mu, C., and S. Carrington. 2007. An investigation of three Chinese students’ English writing
strategies. Teaching English as a Second Language – Electronic Journal 11, no. 1.
Nedzinskaite, I., D. Svencioniene, and D. Zavistanaviciene. 2006. Achievements in language
learning through students’ self-assessment. Studies about Languages 8: 84–7. http://
www.ceeol.com/aspx/issuedetails.aspx?issueid=b4c94cae-58a5-447e-8785-b256b6ea1e30
&articleId=337ad699-f307-4044-8f10-3bc8f9447378 (accessed February 24, 2008).
Nelson, C.P., and M.K. Kim. 2001. Contradictions, appropriation, and transformation: An
activity theory approach to L2 writing and classroom practices. Texas Papers in Foreign
Language Education 6, no. 1: 37–62.
Oscarson, A.D. 2005. The use of self-assessment when learning English as a foreign
language. Paper presented at the annual meeting of the American Educational Research
Association, April 11–15, in Montreal.