

VIETNAM MARITIME UNIVERSITY

FACULTY OF FOREIGN LANGUAGES

FINAL REPORT

STUDENT SCIENTIFIC RESEARCH PROJECT


ACADEMIC YEAR 2023-2024

ROLES OF PEER ASSESSMENT IN PROMOTING STUDENTS'
PUBLIC SPEAKING PERFORMANCES IN ENGLISH AT VMU

(Vai trò của đánh giá đồng đẳng trong việc nâng cao khả năng thuyết trình
đám đông bằng tiếng Anh của sinh viên trường ĐHHH Việt Nam)

Field of study: English Language


Full names:

Nhữ Thị Xuân Mai, Year 2 of a 4-year program

Nguyễn Hồng Anh, Year 2 of a 4-year program

Supervisors: MA Đoàn Văn Huân

MA Nguyễn Hồng Ánh

HẢI PHÒNG 2024

ABSTRACT

This case study on peer assessment (PA) aimed to gauge student opinions regarding a
student-centered assessment approach and its effectiveness in enhancing learning.
Conducted within a Public Speaking course at Vietnam Maritime University, the study
outlines a PA framework where 30% of students' final grades were derived from peer
assessment scores of their oral presentations. The course comprised 55 third-year female
students across two classes. Data collection involved evaluating completed PA rating
sheets for two presentations per student and conducting a survey at the course's
conclusion. Overall, students expressed positive perspectives towards peer assessment,
indicating its beneficial impact on learning. Furthermore, the analysis revealed alignment
between student views and existing PA literature, underscoring the effectiveness of the
approach despite the specific context of the study.

TABLE OF CONTENTS

CHAPTER 1. INTRODUCTION

1.1. Rationale for the research

The perspective of students holds significant importance as it directly impacts their
learning experience. For students, classroom assessment information isn't merely data
'about' themselves; instead, it becomes an integral part of their learning journey. It shapes
the lessons they are expected to grasp, influences their relationship with the teacher and
the subject matter, and plays a role in their interactions with peers (Brookhart, 2003, p. 6).

Teachers' decisions regarding the assessment frameworks they employ in their courses
can profoundly affect student engagement with the subject matter and subsequent learning
outcomes. Despite the profound impact assessment practices can have on learning, they
are often implemented without much or any input from students themselves (Stefani,
1998). Peer assessment (PA) has emerged as a practice with potentially significant
benefits in terms of enhancing learning outcomes and is increasingly being utilized in
higher education to actively involve students in the assessment process (Race, Brown &
Smith, 2005).

Peer assessment (PA) is described as "an arrangement in which individuals evaluate the
products or outcomes of learning of peers of similar status" (Topping, 1998, p. 250).
Integrating a PA element into course assessments can enhance student engagement,
accountability, and performance. It helps establish clearer course structures, directs
attention towards skills and learning objectives, and facilitates increased feedback
(Weaver & Cottrell, 1986). PA is crucial in formative assessment, involving students in
evaluating their peers' work, and with careful implementation, can also be utilized in
summative assessment. Beyond being a method of evaluating student learning outcomes,
PA is recognized as a learning process in itself. While the benefits of PA in classroom
assessment are widely acknowledged, there is limited understanding of student
perspectives on both assessing and being assessed by peers.

Case studies provide researchers with the opportunity to examine a particular environment
and its participants within their specific context (Gay & Airasian, 2003). The study
discussed here concentrates on student perspectives regarding a peer assessment (PA)
framework implemented within the specific setting of public speaking courses known as
"Advanced Presentation" or "Speech Communication" at VMU. Within this case study,
the scores from peer assessments of oral presentations were combined to form a portion of
students' final grades for the course, accounting for 30% of their overall grade.

1.2. Research aim and research questions

The aim of this investigation is to examine tertiary English as a Foreign Language (EFL)
students' perceptions of peer assessment (PA) and its influence on their learning
experiences. Drawing primarily from student feedback collected through an end-of-course
PA survey, the case study focuses on two main inquiries:

1. How do students perceive peer assessment, particularly the framework
implemented in the public speaking course?
2. Do students believe that the PA process, including scoring peer presentations and
providing/receiving peer feedback, contributed to their development as more
effective public speakers?

A student's presentation in front of peers constitutes a public performance—an exhibition
of skills or talents before an audience. The evaluation of these presentations represents a
form of performance-based assessment, wherein students demonstrate specific skills and
competencies (Stiggins, 1987). In this case study, students showcase their comprehension
and application of effective public speaking techniques to an audience. Basturk (2008)
highlights that in performance assessments, students' roles shift from passive learners to
active participants, enabling instruction and assessment to be integrated in a manner
traditional approaches often overlook (p. 13).

1.3. Scope of the study

Peer assessment (PA) holds a significant position within the framework of assessment for
learning (AfL), emphasizing its potential to foster student learning. In AfL, assessment
processes and outcomes are transformed into instructional interventions aimed at
enhancing, rather than merely monitoring, student learning, motivation, and confidence
(Stiggins, 2008). Peer assessment is particularly valued in this context because it
encourages students to approach their work more diligently, amplifies their involvement
in the learning process, and enhances their learning outcomes (Black, Harrison, Lee,
Marshall & Wiliam, 2003). Additionally, PA is seen as a beneficial practice for learning
because it empowers students to assume the roles of both teachers and evaluators,
facilitating a deeper understanding of assessment criteria through comparison with their
peers' work (Black & Wiliam, 2006).

However, it is crucial to acknowledge that effective implementation of learner-centered
assessment practices like peer assessment relies on students being supported by teachers
to develop the necessary skills. Black et al. (2003) stress that the primary objective of peer
(and self) assessment should not merely be assigning levels and grades, but rather
identifying learning needs and devising strategies for improvement. This concept aligns
with the notion of "learning by assessing" (Topping, 1998, p. 254).

Nevertheless, in practice, the integration of formative assessment practices like peer
assessment often encounters challenges due to the coexistence of summative assessment
requirements. Summative assessment, which evaluates what students have learned at the
conclusion of a teaching period, contrasts with formative assessments that provide
ongoing feedback to guide teaching and learning (McTighe & O’Connor, 2005). While
PA is primarily considered a formative practice, aiming to support students in their
ongoing growth, it also assumes a summative role in this case study by contributing to
final course grades through the aggregation of student-generated scores of their peers'
performances. However, this amalgamation of formative and summative purposes may
potentially undermine the intended learning benefits, as grades and marks could be
perceived as threats to valid formative assessment (Stobart, 2006).

Some scholars advocate for the limitation of peer and self-assessment in summative
contexts to preserve the integrity of formative assessment principles (Noonan & Duncan,
2005). Nevertheless, PA need not be entirely excluded from summative assessments. If
students can glean insights and learn from summative assessments, these evaluations can
effectively function in a formative capacity (Yorke, 2003). Topping's (1998) review of PA
literature suggests that even simple quantitative feedback can yield positive formative
effects, leading to improved scores/grades and enhancing students' subjective perceptions.

Ultimately, assessments, including summative ones, can be designed to align with the
principles of formative assessment and assessment for learning, providing feedback to
support students in progressing from their current level of understanding to where they
need to be (Kennedy, Kin Sang, Wai-ming & Kwan Fok, 2006). Thus, the PA framework
discussed in this case study aimed to fulfill a dual role: serving as both a formative
learning tool and a summative evaluation instrument.

CHAPTER 2. LITERATURE REVIEW

An extensive body of research related to the study of PA exists, and a number of reviews
and analyses of PA are available (see, for example, Topping 1998; Falchikov &
Goldfinch, 2000; Ballantyne, Hughes & Mylonas, 2002; Bloxham & West, 2004; Deakin-
Crick, Sebba, Harlen, Guoxing & Lawson, 2005). Most PA literature is focused on two
issues in particular: evaluating student contributions to group assignments or the
reliability and validity of such types of assessment (Ballantyne et al., 2002). Student
perceptions and experiences with peer-assessment have been little reported in the
extensive PA literature. At the turn of the century, Hanrahan and Isaacs (2001) noted that
there is little in the published literature on how PA and self-assessment are viewed by
students, and called for further investigations across subject areas noting that the case-
based literature on PA is “still alarmingly sparse” (p. 67). While PA has been identified as
a key element in formative assessment, there is little research showing the extent to which
teachers’ classroom practices utilize this student-centered strategy (Noonan & Duncan,
2005). More recently, the call for further research of student views of PA is echoed in Vu
and Alba (2007). Otoshi and Heffernan (2008) note that, with regard to ESL/EFL
contexts, PA has not been well-researched, with most of the work having been done in
peer assessment of writing. This review of previous PA research will: 1) briefly
summarize literature findings of teacher and student views of using PA, 2) address the
issue of using PA scores for summative purposes, and 3) present, in chronological
fashion, some relevant research related to the primary issue of this case study - student
perspectives of PA.

2.1. Brief overview of teacher/student views on PA

Using a criterion of desired performance, peer assessment requires that students closely
scrutinize the work of their peers (Vu & Alba, 2007). From the teachers’ perspective,
some of the main advantages and disadvantages of using PA that have been identified and
described in the peer assessment literature are presented below in Table 1.

Table 1. Potential Advantages and Disadvantages of Peer Assessment (PA)

Advantages:

1. Fosters student autonomy, responsibility, and engagement.


2. Encourages critical analysis of peers' work, moving beyond mere grading.
3. Clarifies assessment criteria for students.
4. Provides a broader range of feedback to students.
5. Mimics real-world scenarios where judgments are made by a group.
6. Reduces the workload for instructors in terms of marking.
7. Allows for multiple groups to operate simultaneously, as not all groups require
the presence of the instructor.

Disadvantages:

1. Students may lack the ability to evaluate their peers accurately.


2. Students may not take the process seriously, allowing personal biases or social
dynamics to influence their evaluations.
3. Some students may dislike peer assessment due to concerns about potential
discrimination or misunderstanding.
4. Without intervention from the instructor, students may inadvertently provide
incorrect or misleading feedback to their peers.

In his comprehensive review of 31 peer assessment (PA) studies, Topping (1998) argues
that despite potential pitfalls and challenges in assessment quality, PA remains a valuable
tool. He contends that the drawbacks associated with PA are offset by the increased
frequency, volume, and immediacy of feedback provided by peers compared to what
instructors can offer alone.

Struyven, Dochy, and Janssens (2005), in their examination of student perceptions of
assessment in higher education, emphasize the significant role that students' views about
assessment methods play in shaping their approach to learning. Concerns regarding PA
identified in the literature include students' awareness of their own limitations in subject
areas, doubts about objectivity, perceptions of unfairness, social dynamics such as
friendship or hostility, and the belief that assessment is primarily the responsibility of
teachers (Cheng & Warren, 1997; Falchikov, 2003).

2.2. Using peer assessment for summative grading

The use of peer assessment for summative grading remains a contentious issue in the
field. While there is general agreement on the potential of PA to enhance learning,
opinions differ on whether peer assessments should significantly contribute to student
grades (Magin & Helmore, 2001). Table 2 outlines arguments from the literature both for
and against using PA as part of summative grades:

Table 2. Arguments for and against the use of PA for summative grades

Arguments against using PA as part of summative grades:

● This practice could compromise the primary pedagogical intention of PA.


● Peer assessments may be too inaccurate for grading purposes.
● Concerns about reliability and validity, such as students being "poor judges" of
effective communication skills, potential bias influencing grading, and variability
in marking standards among peer assessors.
● Universities require confidence in their assessment practices for high-stakes
certification, which may not align with relying on inexperienced assessors.

Arguments supporting using PA as part of summative grades:

● Knowing that peer grades contribute to final grades may motivate students to
take the process more seriously.
● PA used solely formatively may not be taken seriously by students.
● This practice can foster student autonomy and empower them to make
consequential judgments.
● In some contexts, obstacles to obtaining fair and valid peer assessments may be
minimal or surmountable.
● Peer assessments may outperform solely teacher assessments in certain contexts,
such as oral presentations and audience communication.

The decision to incorporate peer assessment (PA) as a significant component of
summative grading in this case study reflects the acknowledgment of the nuanced nature
of assessment issues. This choice aligns with the argument supporting the inclusion of
peer scores into final grades, echoing the views expressed by Vu and Alba (2007). They
argue that excluding marks from peer assessment may limit its positive impact on student
learning and development. By assigning marks, students are compelled to take greater
responsibility and engage in thorough evaluation of their peers' work, leading to a deeper
understanding of the subject matter.

Addressing the issue of ensuring the meaningfulness of PA marks, Race et al. (2005)
advocate for PA to account for something in the final grade, even if it is a small portion.
This aligns with the aim of encouraging students to take PA seriously and carefully,
thereby promoting learning about the course content and objectives through the process of
'learning by assessing.'

Student perceptions of peer assessment vary, as highlighted by Topping (1998) and
Hanrahan and Isaacs (2001). While students appreciate the fairness and formative
usefulness of detailed peer feedback, they also express concerns about potential social
embarrassment and cognitive strain associated with PA. Despite these challenges,
Hanrahan and Isaacs (2001) report generally positive effects of peer assessment on
students' learning experiences.

Overall, the decision to incorporate peer assessment into the summative grading process
in this case study reflects a commitment to leveraging its potential benefits for student
learning and development, while acknowledging and addressing the associated challenges
and concerns.

The studies conducted by Ballantyne et al. (2002), McLaughlin and Simpson (2004),
Bloxham and West (2004), and Pope (2005) provide valuable insights into
student perceptions of peer assessment (PA) across various educational contexts.

Ballantyne et al. (2002) conducted a comprehensive study on PA implementation in large
classes at the University of Technology, Australia. Despite some difficulties associated
with using PA in larger class settings, students perceived several benefits of participating
in PA. They appreciated how PA encouraged self-reflection, skill development relevant to
future employment, and critical comparison of their own work with peers. However,
students also expressed concerns about questioning peers' competency in marking,
fairness issues, and the perceived time-consuming nature of PA. Ballantyne et al. (2002)
suggested allocating a reasonable percentage of marks (10-15%) for PA to enhance
student engagement and commitment to the task, emphasizing the importance of clearly
articulating assessment criteria.

McLaughlin and Simpson (2004) explored first-year university students' perspectives on
PA in a construction management course. The study found overwhelming support for PA
among students, who viewed it as a positive assessment experience. Students reported
learning significantly from assessing peers' work and expressed a preference for PA over
lecturer-only assessment, aligning with the idea that the assessment process should serve
as a learning tool.

Bloxham and West (2004) investigated PA among sports studies students in the U.K.
They incentivized serious engagement with PA by awarding 25% of assignment marks for
the quality of peer marking. Despite some students expressing concerns about grading
inconsistencies among peers, the majority viewed PA as a positive experience that
enhanced their understanding of the assessment process.

Pope (2005) focused on the stress associated with peer (and self) assessment,
particularly in the context of group projects in an undergraduate research methods class.
They found that PA requirements could induce stress among students, stemming from
inexperience, fear of hurting or being hurt by peers, or the pressure of summative tasks.
Despite this, peer assessment ultimately led to improved performance in summative tasks.

Overall, these studies highlight the multifaceted nature of student perceptions of PA,
encompassing both positive experiences and challenges. They underscore the importance
of clear communication of assessment criteria, thoughtful implementation of PA
processes, and recognition of the potential stressors associated with peer assessment.

Longman et al. (2005) conducted a study focusing on peer assessment of oral
presentations in environmental or biological courses in the UK. They compared marks
awarded by both students and tutors and found that a firm understanding of assessment
criteria was associated with greater validity of peer assessment. Despite the complexity of
managing mostly inexperienced assessors, Longman et al. (2005) concluded that the
benefits of peer assessment, including learner inclusion and empowerment, outweighed
any differences between peer and tutor marks. However, they cautioned against using
student marks for purposes other than formative assessment due to the possibility of bias
in peer assessment.

Wen and Tsai (2006) investigated university students' views on peer assessment,
particularly the online variety, in Taiwan. They found that students generally favored peer
assessment as it allowed them to compare their work with classmates. However, students
were less enthusiastic about receiving criticism from peers and expressed a lack of self-
confidence in assessing their classmates. Overall, Wen and Tsai (2006) concluded that
university students had positive attitudes toward peer assessment activities.

Vu and Alba (2007) described students' experiences with peer assessment in a
professional course at an Australian university. The peer assessment component involved
students assessing a viva voce course component, which consisted of a student interview
with the teacher. The authors reported that peer assessment had a positive impact on
students' learning experiences, with most students acknowledging learning from both the
process and their peers. They highlighted the potential of peer assessment to promote
learning in various settings.

Papinczak, Young, and Groves (2007) conducted a qualitative study of peer assessment in
problem-based learning with freshman medical students at an Australian university. The
study found that students held a critical view of assessment by peers, expressing concerns
about potential biases and lack of honesty or friendship marking in peer assessments.

Overall, the literature on student perceptions of peer assessment highlights the positive
benefits of integrating it into a course's assessment framework and its impact on student
learning. However, students are also aware of the potential disadvantages of peer
assessment, such as biases and discomfort in both giving and receiving feedback. While
there are some concerns, the literature generally supports the inclusion of peer assessment
as a valuable learning tool.

This case study responds to the call for additional research on peer assessment,
particularly in the context of a Public Speaking course at Vietnam Maritime University.
Following the example set by previous research, the peer assessment process in this study
was designed to both promote and evaluate student learning.

CHAPTER 3. METHODOLOGY

3.1. Context and course

Vietnam Maritime University (VMU) is a university in northern Vietnam. The Faculty
of Foreign Languages offers a one-semester course entitled Speech Communication or
Advanced Presentation. The course discussed in this case study ran from September to
December and comprised two classes of second and third-year students, totaling 55
individuals, aged around 20-21 years old. Age considerations might affect how students
perceive assessment methods. With a few years of tertiary education experience and
exposure to various assessment approaches, students could approach peer assessment in
the Public Speaking course differently. Their age and maturity level may enable them to
critically evaluate assessment practices, while their prior experiences could shape their
attitudes towards peer assessment. Additionally, their sense of autonomy and
responsibility for learning might influence their engagement with peer assessment. Given
these factors, instructors should consider individual differences and ensure that the peer
assessment process is designed to meet students' needs and expectations, fostering
effective communication and providing opportunities for feedback.

The main educational goals of the public speaking course centered on enhancing students'
abilities to plan, structure, and deliver compelling presentations, complemented by the use
of visual aids such as computerized slideshows. The course, taught by the author in two
separate classes, required each student to deliver two primary presentations over the
duration of the semester. These presentations, intended to inform the audience, were to be
based on news stories from the media, chosen by the students themselves. Students were
encouraged to select topics that resonated with both their personal interests and the
interests of their audience.

3.2. Materials and procedures

Each of the two public speaking classes convened weekly for 90-minute sessions, totaling
approximately 15 meetings throughout the semester. Class activities encompassed various
aspects of effective public speaking, instruction on creating well-designed computer
presentations, and selecting news stories for presentation content. Students were grouped
into planning teams to assist in preparing for their mid-term and final presentations. These
groups, consisting of four students each, engaged in activities such as discussing
presentation topics, reporting progress, delivering brief presentations (2-3 minutes) on
their chosen topics, and receiving feedback from peers. About four classes out of the
fifteen, occurring midway and toward the end of the semester, involved students
presenting their speeches and participating in peer assessment, alongside evaluations from
the instructor. There was a gap of approximately six weeks between the mid-term and
final presentations. The initial class of the semester included an orientation to peer
assessment, where students were introduced to its concept, provided with its purpose in
the course, and acquainted with the assessment criteria to be used by both peers and the
instructor for scoring presentations.

Instead of utilizing a textbook for this relatively brief 14-week course, the foundational
theories and practical applications were primarily drawn from a journal article authored
by Yamashiro and Johnson in 1997, titled "Public Speaking in EFL: Elements of Course
Design." This article delineated a Public Speaking course developed and implemented by
the authors at secondary and tertiary education levels in Viet Nam. Central to their course
design is a compilation (Table 3) of public speaking components. The assessment criteria,
both peer and teacher evaluations, employed in this study were aligned with these 14
elements.

Table 3. 14 Points for Public Speaking (from Yamashiro & Johnson, 1997)

Yamashiro and Johnson (1997) emphasized the significance of peer assessment in their
syllabus, asserting that students enhance their understanding of course objectives by
critically evaluating their peers. They argue that in addition to developing oral
communication skills, students refine their critical thinking abilities as they grasp the
importance of understanding assessment criteria to provide accurate feedback. Aligning
with the principles of assessment for learning, which emphasize student awareness of
their learning goals (Assessment Reform Group, 2002), Black et al. (2003) stress the
importance of well-structured frameworks to facilitate effective peer assessment and
encourage student reflection on performance. The "14 Points" outlined by Yamashiro and
Johnson served as focal points for students as they prepared for, practiced, and reflected
on their mid-term and final presentations.

These 14 Points were also integral to the course syllabus, which was distributed to
students on the first day of class. Subsequent classes revolved around exploring these
points and completing tasks related to each one. Additionally, students were instructed to
use the 14 Points to provide feedback to their peers during planning group sessions,
particularly after mini-presentations (practice sessions without computer slideshows).
These mini-presentations and subsequent group feedback sessions not only aided students
in preparing for their presentations but also served as opportunities to familiarize
themselves with the assessment criteria. It was hoped that consistent use of the 14 Points
during mini-presentations and class activities would help internalize the key assessment
criteria for students. Over time, students became well-versed in the various aspects of
public speaking emphasized in the course, including voice control, body language,
content, and effectiveness.

The evaluation framework employed for the public speaking course consisted of three
components, illustrated in Table 4 as follows:

Table 4: Assessment breakdown for the public speaking course

Assessor Percentage of final grade

1. Teacher 60% (30% per presentation)

2. Peers (6-8 students) 30% (15% per presentation)

3. Self 10% (5% per presentation)

The grading structure for the public speaking course allocated a significant portion of
the assessment weight to student-generated input, comprising nearly half (40% in total) of
the final course grade, while the remaining weight rested on teacher assessment. Peer
assessment contributed 30% towards students' final grades. Utilizing Yamashiro and
Johnson's (1997) 14-point framework, students utilized a peer rating (PR) sheet (refer to
Appendix 1) to evaluate and score their peers' presentations. Each student received ratings
from six to eight peers, depending on attendance during presentation sessions. To ensure
impartiality, students were barred from assessing presentations from their planning
groups, as these groups had already been exposed to mini-presentations on the same topic
and provided feedback. The PR sheets were collected post-midterm and final
presentations, and average scores were calculated from peer ratings. These sheets were
then distributed to presenters for peer feedback.
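The averaging step described above is simple arithmetic; the sketch below illustrates it with invented ratings (the function name and data are hypothetical, not the course's actual records).

```python
# Sketch of averaging the peer rating (PR) sheets for one presenter.
# Each peer scores a presentation on a 5 (very good) to 1 (poor) scale,
# and the presenter's PA score is the mean of the 6-8 ratings received.
# The ratings below are invented for illustration.

def average_peer_score(ratings):
    """Return the mean of a presenter's peer ratings, to two decimals."""
    if not ratings:
        raise ValueError("at least one peer rating is required")
    return round(sum(ratings) / len(ratings), 2)

# One presenter rated by seven peers (hypothetical values):
peer_ratings = [4, 5, 4, 3, 5, 4, 4]
print(average_peer_score(peer_ratings))  # 4.14
```

Since every rating sheet covers the same 14 criteria, the same mean results whether each rater's sheet is first collapsed to a single score or the criteria are averaged individually.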

Moreover, student self-assessment was integrated into the assessment scheme,
accounting for 10% of the final grades. This involved two self-report evaluations. After
both the midterm and final presentations, students received a report sheet containing
questions regarding their preparation and delivery. Additionally, for the midterm report,
students were prompted to identify three areas they aimed to improve in their final
presentation.

Asian EFL Journal - Professional Teaching Articles. Vol. 33. January 2009

Following the final presentations, students were instructed to reflect on their progress in the
identified improvement areas from the midterm report. Their assessment involved
responding to questions about their performance, including whether they successfully
addressed those selected areas for enhancement. The teacher evaluated these responses
using a five-point scale (ranging from 5 for excellent to 1 for poor), considering the depth
and thoroughness of analysis in each report. Essentially, within a week post-presentation,
students received scores and feedback from both their peers (six to eight evaluations) and
the teacher.
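Taken together with Table 4, the final grade is a weighted sum of the six component scores. A minimal sketch, assuming every component has already been converted to a 0-100 scale (the report does not specify that conversion):

```python
# Sketch of the weighting in Table 4: teacher 60%, peers 30%, self 10%,
# each split evenly across the mid-term and final presentations.
# The weights come from the report; the scores and function name are
# hypothetical.

WEIGHTS = {"teacher": 0.60, "peer": 0.30, "self": 0.10}

def final_grade(teacher, peer, self_):
    """Each argument is a (midterm, final) pair of 0-100 scores."""
    components = {"teacher": teacher, "peer": peer, "self": self_}
    total = 0.0
    for name, (midterm, final) in components.items():
        # Half of each assessor's weight goes to each presentation.
        total += WEIGHTS[name] * (midterm + final) / 2
    return round(total, 1)

# Illustrative scores only:
print(final_grade(teacher=(80, 90), peer=(70, 80), self_=(100, 100)))  # 83.5
```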

Table 5: Peer Assessment Procedures

1. Prior to midterm and final presentation classes, peer rating sheets were duplicated and
prepared in sets of eight.

2. Students were divided into groups consisting of 12-14 members and assigned to
separate classrooms. They were responsible for setting up recording equipment and
recording each presenter.

3. Peer rating sheets were distributed to selected students who were not members of the
presenter's planning group. Typically, six to eight peer raters were assigned based on
attendance.

4. During and after each presentation, students were instructed to complete the PR sheet
for each presenter. Presenters were also instructed to fill out the self-assessment report and
submit it in the following week's class.

5. PR sheets were collected at the end of the class and given to the teacher. Video
recordings were taken to the audio-visual center and made accessible to students who
wished to review their performance. These recordings were also utilized by the teacher to
evaluate presentations that were not viewed live.

6. Before the next class, PR sheets were grouped together for each presenter, and copies
were made for instructor records. PR scores were recorded for each presenter, and an
average peer assessment (PA) score ranging from 5 (very good) to 1 (poor) was
calculated.

7. During the subsequent class, self-assessment reports from the previous week's
presenters were collected. Then, PR sheets for the previous week's presenters were
returned to students. Additionally, a teacher assessment sheet, utilizing the same peer
rating criteria, was provided to students during this class. It's important to note that, to
maintain objectivity, peer rating sheets for individual students were not reviewed by the
teacher before completing the teacher assessment for each presenter.

8. To gauge student perceptions of the peer assessment process, a student survey (refer to
Appendix 2) was distributed and completed by 53 students during the final class. The
survey consisted of three sections: 1) being a rater/being rated by peers, 2) the peer
assessment process, and 3) additional comments (open-ended). A four-point Likert scale
was employed on the survey to assess opinions: 1=agree, 2=tend to agree, 3=tend to
disagree, and 4=disagree. Options 2 and 3 provided students with the opportunity to
express some reservations regarding the level of agreement or disagreement for each item.
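The score handling in steps 6-7 and the 30% grade weighting can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the course's actual grading procedure; in particular, the assumption that the remaining 70% of the final grade comes from the teacher's score (on the same 5-to-1 scale) is mine.

```python
def average_pa_score(peer_ratings):
    """Step 6: average the 5 (very good) to 1 (poor) ratings collected
    from the six to eight peer raters assigned to one presenter."""
    if not peer_ratings:
        raise ValueError("at least one peer rating is required")
    return sum(peer_ratings) / len(peer_ratings)


def final_grade(pa_average, other_score, pa_weight=0.30):
    """Weight the PA average at 30% of the final grade, as in this course.
    Treating the remaining 70% as the teacher's score is an illustrative
    assumption; the text does not specify the other grade components."""
    return pa_weight * pa_average + (1 - pa_weight) * other_score


# Hypothetical presenter rated by seven peers:
ratings = [4, 5, 3, 4, 4, 5, 4]
pa = average_pa_score(ratings)
grade = final_grade(pa, other_score=4.0)
print(round(pa, 2), round(grade, 2))
```

A spreadsheet would serve equally well; the point is only that each presenter's six to eight peer scores collapse to a single average before entering the final grade.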

CHAPTER 4. RESULTS

Administered during the final session of the public speaking course, the student survey, completed by 53 students, aimed to capture student perspectives on the implemented peer assessment (PA) framework. Concurrently, a sample copy of the peer rating sheet was distributed to remind students of the PA criteria and score-sheet structure as they responded to the survey.

The survey comprised twelve items and a section for additional student comments.
Utilizing a four-point Likert scale, it sought to gauge student opinions. Tables 6 and 7
provide a summary of student responses to survey items. They present numbers and
percentages for each item, along with the combined total of agreement or disagreement
responses for each item. Part 1 of the survey consisted of eight items focusing on students'
perceptions of both being a rater and being rated by peers. The responses to these items are
detailed in Table 6.

Table 6. Survey Part 1: being a rater / being rated by peers (N = 53)

Survey item | 1. Agree | 2. Tend to Agree | 3. Tend to Disagree | 4. Disagree | Combined totals
1. Assessment criteria on the sheet (e.g., pace) were clear and easy to comprehend. | 37 (70%) | 14 (26%) | 1 (2%) | 1 (2%) | Agreement = 96%; Disagreement = 4%
2. It was challenging to determine the overall score (5, 4, 3, 2, 1) for each presenter. | 13 (25%) | 20 (37%) | 17 (32%) | 3 (6%) | Agreement = 62%; Disagreement = 38%
3. Relationships with presenters (friendships, etc.) may have influenced the overall scores and comments I provided. | 4 (8%) | 15 (28%) | 14 (26%) | 20 (38%) | Agreement = 36%; Disagreement = 64%
4. I felt at ease being a judge and scoring my peers' presentations. | 14 (26%) | 21 (40%) | 17 (32%) | 1 (2%) | Agreement = 66%; Disagreement = 34%
5. I felt at ease having my presentations judged and scored by my peers. | 19 (36%) | 21 (39%) | 11 (21%) | 2 (4%) | Agreement = 75%; Disagreement = 25%
6. The overall scores given by my peers were perceived as fair and reasonable. | 16 (30%) | 26 (49%) | 10 (19%) | 1 (2%) | Agreement = 79%; Disagreement = 21%
7. Evaluating other students' presentations assisted me in planning and delivering my own. | 32 (60%) | 19 (36%) | 2 (4%) | 0 (0%) | Agreement = 96%; Disagreement = 4%
8. Feedback and scores from my first presentation through PA aided me in preparing for my second presentation. | 30 (56%) | 20 (38%) | 3 (6%) | 0 (0%) | Agreement = 94%; Disagreement = 6%
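The "combined totals" column in Tables 6 and 7 simply pools the two agreement options against the two disagreement options. A minimal sketch of that tally, using the item 1 counts from the table above:

```python
def combined_totals(agree, tend_agree, tend_disagree, disagree):
    """Pool four-point Likert counts into overall agreement vs. disagreement
    percentages, rounded to whole percent as in the survey tables."""
    n = agree + tend_agree + tend_disagree + disagree
    agreement = round(100 * (agree + tend_agree) / n)
    return agreement, 100 - agreement


# Item 1: 37 agree, 14 tend to agree, 1 tend to disagree, 1 disagree (N = 53)
print(combined_totals(37, 14, 1, 1))  # -> (96, 4), matching the table
```

The same pooling reproduces the remaining rows, e.g. item 2's counts of 13, 20, 17, and 3 yield the reported 62% / 38% split.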

Survey section 2 concentrated on the overall peer assessment process and the integration of
PA scores into final course grades. Table 7 illustrates student responses to this part.

Table 7. Survey Part 2: the peer assessment process (N = 53)

Survey item | 1. Agree | 2. Tend to Agree | 3. Tend to Disagree | 4. Disagree | Combined totals
9. Students should not be engaged in assessing peers; assessment should be solely the responsibility of the teacher. | 0 (0%) | 9 (17%) | 27 (51%) | 17 (32%) | Agreement = 17%; Disagreement = 83%
10. Incorporating PA scores into students' final grades is a beneficial idea. | 14 (26%) | 31 (59%) | 7 (13%) | 1 (2%) | Agreement = 85%; Disagreement = 15%
11. Assigning a weight of 30% to PA in determining the course's final grade is: | a. Too high: 13 (25%) | b. Fair: 40 (75%) | c. Too low: 0 (0%)
12. I suggest incorporating PA into future Public Speaking classes. | 28 (56%) | 21 (38%) | 4 (6%) | 0 (0%) | Agreement = 94%; Disagreement = 6%

In Part 3 of the survey, students were asked to provide additional written comments (in
English) about their experience with peer assessment (PA). Out of the 53 students surveyed,
36 offered further commentary in this section. These comments were divided into three
categories based on students' feelings toward the PA experience: positive (19=53%),
negative (10=28%), and mixed (7=19%). Below are examples from each category to
illustrate the variety of sentiments expressed.

Table 8: Sample of Students' Written Comments (verbatim)

+ Peer assessment (PA) was beneficial for enhancing my presentation quality. I believe
it's an effective system for enhancing our presentation abilities.

+ PA proves advantageous for both presenters and assessors, aiding in skill enhancement for both parties. I advocate for its continuation in future public speaking courses.

- I find peer assessment to be inequitable. Consequently, I suggest adjusting the ratio of assessment between peers and instructors.

+/- While the peer assessment process contributes to presentation improvement, accurately evaluating presentations presents challenges. I'm unsure if my evaluations of fellow students are accurate.

CHAPTER 5. DISCUSSION

This discussion shifts the focus back to the core questions of the case study: how students
perceived the peer assessment (PA) process in the public speaking course and whether it
contributed to enhancing their skills as effective public speakers.

While acknowledging the limitations of survey research in providing only surface-level insights, the responses from 53 students offer valuable perspectives on their experiences and
opinions regarding classroom dynamics and processes. It's important to note two factors
when interpreting these survey results. Firstly, the influence of teacher endorsement of peer
assessment throughout the semester could have shaped student attitudes, given the power
dynamics inherent in professional relationships within the classroom (Sadler, 1998).
Secondly, individual student perceptions of the benefits and drawbacks of peer assessment
are influenced by their own values, objectives, and capabilities, as highlighted by Fry (1990)
in his study of peer assessment.

The survey utilized a four-point Likert scale, requiring students to lean toward either agreement or disagreement with each statement, with no neutral option. Twelve
declarative statements, crafted based on insights from peer assessment literature, were
included in the survey.

Overall, the survey responses indicate that a majority of students had a positive reception to
the peer assessment format employed in the public speaking course, albeit with some
reservations. While most students found the experience beneficial, a minority expressed
dissatisfaction with the process. Moreover, the data suggests that peer assessment serves the
primary goal of promoting student learning, as emphasized in assessment for learning
principles (Black et al., 2003). Student perceptions of peer assessment, whether positive or
negative, align closely with the themes discussed in the peer assessment literature reviewed
earlier.

This discussion will be organized into two sections based on the survey format. The initial
section will focus on student perspectives on serving as peer assessors and being assessed
by their peers (items 1-8). The subsequent section will delve into broader themes regarding
student perceptions of peer assessment (items 9-12). Student comments from the survey's
third section will be incorporated to provide additional insights into their perspectives.

5.1. Student perspectives regarding peer assessment and being evaluated

The 14 fundamental principles of public speaking outlined in Yamashiro and Johnson's (1997) syllabus, encompassing aspects such as voice control, body language, content, and effectiveness, were incorporated into the peer rating sheet utilized throughout the course.
These principles served as the basis for weekly class activities and informal assessments
during group members' mini-presentations. Student responses to survey item 1, which inquired about the clarity of assessment criteria, indicated a high level of agreement: 96% of students found the criteria easy to understand, and the item drew the highest proportion of outright "agree" responses (70%) of any survey item. This suggests that clarity of assessment criteria is crucial for effective peer assessment. Consistent with existing
literature, it is widely acknowledged that a clear understanding of assessment criteria
enhances the validity of peer assessment. By consistently emphasizing the 14 points
throughout the course and integrating them into group activities, students developed a clear
understanding of the assessment criteria and honed their assessment skills. Embedding peer
assessment within regular course activities, as advocated by Liu and Carless (2006),
facilitates the development of students’ judgment abilities.

The peer rating score sheet, modeled closely after Yamashiro and Johnson’s (1997) system,
utilized a scoring continuum ranging from 1 (poor) to 5 (very good) for individual points
and overall assessment. However, one drawback of this system is the lack of clarity
regarding the distinctions between each score on the continuum. Consequently, survey item
2, which addressed the difficulty in assigning overall scores to presenters, elicited
agreement from a substantial proportion (62%) of respondents. The reasons for this
difficulty remain unclear and could stem from various factors such as the rating system
used, students’ limited experience with peer assessment, time constraints, or the inherent
complexity of assessment judgments.

As mentioned earlier, one potential drawback of using peer assessment (PA) is the potential
impact of student bias on scoring, stemming from the relationship between the assessor and
the presenter. This concern was addressed in survey item 3, which inquired whether students
felt their scores might have been influenced by their relationships with presenters.
Approximately 36% of respondents (19 out of 53 students) agreed that such influence could
have played a role in their PA scoring. This finding aligns with Nilson’s (2003) observation
that students are often hesitant to critique their peers’ work, especially if it may result in
lowering their grades. This reluctance may be amplified when assessing friends, particularly
considering that peer scores contribute significantly to final grades, as was the case in this
public speaking course.

Several factors specific to this course context are worth considering. Firstly, the students
had already spent a year together in a prerequisite course, fostering familiarity and
camaraderie among them. Additionally, the methodology employed in peer assessment,
where assessors could directly see the presenters they were evaluating, eliminated
anonymity despite not writing their names on the peer rater sheet. Furthermore, presenters
were aware of the identities of the 6-8 students assessing them. These factors may have
influenced the scoring process, leading students, as one survey respondent noted, to be
"modest in giving scores."

Given that these student-generated scores factored into final grades, ensuring reliability
becomes crucial. Do the responses to survey item 3 compromise the reliability of peer
scores and, subsequently, students’ final grades? It is a possibility. However, the majority of
students (64%) disagreed that their ratings were influenced by their relationships with
presenters. Moreover, prior to the presentation classes, students were reminded of the
importance of honesty and fairness in their peer ratings. The issue of rater bias may also be
linked to survey item 6, which pertains to the perceived fairness of scores given by peers.

Survey item 6 sought students' opinions on whether the overall scores given by their peers
were fair and reasonable. Prior to the survey, students were briefed that "reasonable"
referred to a sensible judgment based on presentation content and delivery quality. Survey
findings revealed that 79% of students (42 out of 53) believed the peer ratings to be fair and
reasonable. Mowl and Pain's (1995) study on peer assessment of essay writing supports this
notion, suggesting that students are generally capable and conscientious assessors when
adequately prepared and reassured about the exercise's value. The majority's satisfaction
with the fairness and reasonableness of peer scores indicates their capability and
conscientiousness in assessing their classmates' presentations.

However, 19% of students (10 out of 53) tended to disagree, expressing feelings that peer
scores for their presentations were somewhat unfair and unreasonable. Some students
attributed this perception to varying levels of strictness or leniency among assessors.
Additionally, individual differences in perception and evaluation skills may have influenced
the perceived fairness of peer scores. Despite these variations, almost four out of five
students surveyed were content with the fairness and reasonableness of their peer ratings.

Items 4 and 5 of the survey delved into students' comfort levels in both assessing their peers
and being assessed by them. Regarding item 4, which addressed students' comfort in
judging their peers' presentations, 34% of respondents (18 out of 53) indicated discomfort.
This discomfort could stem from a lack of confidence or experience in assessing peers, fear
of hurting or being hurt by classmates, or discomfort with power dynamics. Similarly, item
5, which explored students' comfort in having their presentations judged by peers, revealed
that 75% of students (40 out of 53) were comfortable with assessment by classmates.

One student expressed, "I think it is OK that other students judge me. In general, I only
receive feedback from the teacher. Therefore, comments from other students help me to
improve my presentations." However, a quarter of the students surveyed expressed
discomfort with peer assessment. Such feelings may have arisen due to concerns about peer
objectivity, their capability to assess accurately, and the dynamics of relationships between
presenters and assessors. Interestingly, one student noted that being evaluated by peers
might heighten presenters' stress and feelings of unease; they mentioned, "Sometimes
students cannot feel comfortable while presenting because they know many people are
assessing them."

The evaluative component of the peer assessment (PA) framework utilized in this study
may have heightened the anxiety of presenters, as perceptively noted earlier. Liu and
Carless (2006) highlight that in such circumstances, "the audience for the learners' work is
no longer confined to just the teacher but extends to their peers as well. Learners might feel
pressured, exposed to risk, or perceive a competitive atmosphere that could result from peer
assessments" (p. 287).

The discomfort evident in items 4 and 5 indicates varying degrees of unease among
students regarding peer assessment, both when evaluating their peers and being evaluated by
them. Prior research on peer assessment has demonstrated that students often find it
uncomfortable to have some level of authority over their peers or to be subject to their
evaluation (Falchikov, 2000). Comparing responses to items 4 and 5 on the survey, it seems that students were less comfortable acting as peer assessors and scoring their classmates (with 34% expressing discomfort) than they were when being evaluated by their peers (with 25% expressing discomfort). The process of scoring their peers' performances
appeared to generate greater discomfort among students, potentially exacerbated by the
evaluative nature of peer scores. Students might have been hesitant to potentially impact a
classmate's grade through their assessments. Consequently, the heightened stress
experienced by students may have stemmed from participating in peer assessment rather
than receiving it.

According to the theory of assessment for learning, students enhance their learning when
they take on the roles of teachers and evaluators of others. For formative assessment to be
effective, it should lead to further learning (Black et al., 2003; Stobart, 2006). This leads us
to address the second question explored in this case study: Did students perceive the peer
assessment (PA) process implemented as beneficial for improving their public speaking
skills? We can address this question by examining student responses to survey items 7
("Assessing other students' presentations helped me plan and deliver my own") and 8 ("PA
scores and comments from my first presentation helped me prepare my second
presentation").

For item 7, 60% of students agreed that evaluating their peers' presentations aided them
in planning and delivering their own presentations. This percentage represented the second
highest agreement score among all 12 items on the survey. An additional 36% tended to
agree, with only two students (out of 53) tending to disagree that PA was beneficial in this
regard. Student comments provided in response to part 3 of the survey echo the positive
sentiments towards this item. For instance, one student remarked: "PA is beneficial for both
the presenter and the evaluator. It helps both parties improve their skills." This perspective
aligns with the idea that evaluating the quality of peers' work using assessment criteria is
likely to enhance the evaluator's own work quality as well (Gibbs, 1999). Similarly,
providing feedback on peers' work helps students develop a level of objectivity regarding
assessment criteria, which can then be applied to their own work (Nicol & Macfarlane-Dick, 2006). Rust, Price, and O'Donovan (2003) demonstrated that involving students in
grading resulted in a significant enhancement in their grades in similar assignments. With
96% of survey respondents agreeing with item 7, it's evident that students perceived being a
peer assessor as beneficial for planning and delivering their own presentations.

The tension between formative and summative aspects of this peer assessment (PA)
framework has been acknowledged, with the expectation that the summative utilization of
PA scores would not undermine the potential for assessment for learning within the
implemented processes. According to Black et al. (2003), it is crucial to prioritize the
formative use, as a new practice might enhance the collection of more insightful information
about students' thought processes. However, if this practice fails to utilize or demonstrate to
students how to utilize that information to enhance their learning, it misses the fundamental
purpose (p. 109).

The primary focus remained on utilizing this peer assessment (PA) tool formatively,
which effectively encouraged additional learning as students assessed their peers. If the
primary measure of validity for Assessment for Learning (AfL) is its ability to effectively
support student learning (Gardner, 2006), then student feedback suggests that this PA
process successfully achieved this vital objective.

Survey item 8 inquires whether feedback from peers (scores and comments) on their
initial presentation aided students in preparing for their subsequent one. Over half (56%)
agreed that this was the case, with an additional 38% tending to agree that peer feedback
proved beneficial in this regard. One student explicitly stated: "The comments provided
during peer assessment are more valuable and impactful in enhancing presentations. I
believe that both the scores and peer comments are equally significant." One of the primary
rationales for incorporating peer assessment is its capacity to provide students with a more
extensive feedback mechanism compared to sole teacher evaluation, offering "timelier and
more abundant feedback" (Topping, 1998, p. 255).

The summative aspect of this peer assessment (PA) process is evident due to its
incorporation of scores for final grades. However, the formative purpose and the beneficial
effects on promoting student learning, rather than solely measuring it, are also robust
features of this process. Stobart (2006) cautions educators to be wary of assessments that are
intended to be formative but do not effectively stimulate further learning in practice.

Student responses to items 7 and 8 in the survey contribute to a positive conclusion regarding the second key inquiry of this study: that for some students, this PA framework
served as 'assessment for learning' and indeed facilitated the development of effective public
speaking skills. According to Stiggins (2007), when students engage in thoughtful analysis
of high-quality work, they improve as performers, gain a better understanding of the
shortcomings in their own work, take ownership of improvement, and become aware of
their own progress. This case study provides evidence of students engaging in reflective
analysis of both their peers' performances and their own work, consequently enhancing their
own performance through this process.

5.2. Student perspectives on the peer assessment (PA) procedure

The four items in Part Two of the student survey pertain to broader considerations
regarding student participation in Peer Assessment (PA) and the utilization of peer
evaluations for summative grading purposes. Let's first address the issue of student
perspectives on the incorporation of PA scores into final grades (as addressed in survey
items 10 and 11).

As mentioned, the question of whether PA should constitute a significant portion of students' final grades is a topic of debate within the PA literature. Survey item 10 seeks to
gather student opinions on this matter, asking: "Incorporating PA scores into students' final
grades is a beneficial approach." Overall, students expressed a preference for this approach,
with 85% in agreement (45 out of 53 respondents). However, a majority of these students,
comprising 59%, selected the 'tend to agree' response, indicating some level of reservation
regarding the use of PA scores for summative purposes. It's important to note that at the
time of survey completion, students had already undergone two cycles of PA, and their
responses reflect their awareness and experience with some of the potential challenges
associated with PA, as previously noted.

Survey item 11 asked students to provide their perspective on the allocation of 30% of
their final grade to Peer Assessment (PA), determining whether they considered this
percentage too high, fair, or too low. The majority of students, comprising 75% (40 out of
53), regarded this percentage as fair, while 25% deemed it too high. Some students
suggested that a PA contribution to final grades in the range of 10-20% would have been
more appropriate, as evidenced by the following verbatim student comment: "I appreciate
the PA process. It proved effective for my presentation after I evaluated my peers. However,
I find 30% a bit excessive. I believe 15% - 20% would be suitable." In hindsight, I tend to
agree with this perspective; if running a similar course with future classes, I would consider keeping the PA contribution within this percentage range, taking into account some of the reliability issues identified in this case study.
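The sensitivity behind this weighting debate is easy to see with a little arithmetic. The sketch below uses invented scores (a hypothetical student whose peers rated them lower than the teacher did) to show how much the choice between a 30% and a 15-20% PA weight can move a final grade; none of these numbers come from the study.

```python
def weighted_grade(pa_score, other_score, pa_weight):
    """Final grade as a weighted mix of the PA average and the non-PA
    component (both assumed here to be on the course's 5-to-1 scale)."""
    return pa_weight * pa_score + (1 - pa_weight) * other_score


# Hypothetical: peers rated the student 3.0, the teacher 4.5.
for w in (0.30, 0.20, 0.15):
    print(f"PA weight {w:.0%}: final grade {weighted_grade(3.0, 4.5, w):.2f}")
```

With a 1.5-point gap between peer and teacher scores, halving the PA weight from 30% to 15% shifts the final grade by roughly a quarter of a point, which is the practical stake in the students' suggestion.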

Survey items 9 and 12 aimed to gauge student perspectives regarding their involvement
in the assessment process. Item 9 presented a negative viewpoint of PA for students to react
to ("Students should not be involved in assessing peers; assessment should be solely the
teachers' responsibility."). A total of 83% of students disagreed with this statement.
Responses to this item indicated students' recognition of the potential benefits of their
participation in the assessment process, as opposed to the traditional teacher-only
assessment format. Similar sentiments were expressed by other students in the survey,
exemplified by the following student viewpoint: "It's a valuable way to understand how my
peers or other students perceive my presentation. Additionally, I can receive a plethora of
advice or opinions not only from teachers but also from fellow students.

Around 17% (9 out of 53) of respondents tended to agree that assessment should solely
be the responsibility of the teacher. Although the survey does not delve into the reasons
behind this viewpoint, it is conceivable that some of the previously mentioned factors may
contribute to it. Additionally, cultural considerations might also play a role in certain
students' dissatisfaction with peer assessment. Stobart (2006) discusses how the culture of
schooling can influence the effectiveness of formative assessment. Entrenched beliefs about
teaching and learning, as well as deeply ingrained assessment practices, may either hinder or
support formative assessment efforts. In cultures where didactic teaching methods prevail,
transitions towards peer and self-assessment by learners may require significant changes to
the classroom environment, which could be met with resistance (p. 137). Vietnam
exemplifies a culture characterized by a didactic teaching model, where assessment has
traditionally been dominated by teacher-only practices. Paradoxically, this circumstance
may contribute to some students' positive attitudes towards peer assessment. It offers a
refreshing departure from passive roles and empowers students to become active
participants and decision-makers in the assessment process, facilitating learning from the
experience.

Lastly, survey item 12 inquired whether students would recommend using Peer Assessment (PA) in future public speaking classes. An overwhelming 94% expressed agreement, yet 38% of these respondents opted for the 'tend to agree' response. Despite
reservations about certain aspects of the PA process, these responses collectively indicate
that the public speaking course was largely perceived as a positive assessment experience by
the students. They believe that future classes should also have similar opportunities to
engage with and learn from peer assessment.

CONCLUSION

While students can benefit from being evaluated by their peers, peer assessment is primarily about learning rather than measurement, and it is the peer who makes the judgments who stands to gain the most (Liu & Carless, 2006). Langan et al.'s (2005) study on peer assessment concluded
that the "benefits of learner inclusion and active learning dimensions merit [peer
assessment] inclusion in future courses" (p. 31). Echoing this sentiment, the perspectives of
students on using peer assessment in public speaking classes lead to a similar conclusion in
this case study. One student provided the following comment:

"I wasn’t initially comfortable with peer assessment, but I believe it's a great system.
It's a new idea for me, and the scores, advice, and comments from my peers were very
helpful and accurate. So, I encourage you to continue with this system."

The student survey yielded a variety of perspectives and feedback regarding peer
assessment and its implementation in the Effective Public Speaking course. Overall, the
student feedback appears to be largely positive, with many expressing satisfaction with the
inclusion of such a participatory assessment model in the course. Importantly, the survey
responses indicate that the peer assessment process indeed facilitates student learning in
constructing, delivering, and evaluating effective presentations.

Despite potential challenges that may arise, the pedagogical and practical rationale for
integrating peer assessment into course frameworks remains robust and clear, particularly in
performance-based assessments like those discussed in this case study. With careful
attention to design and implementation, the learning derived from assessing peers can
outweigh any difficulties encountered. Educators are encouraged to explore and share their
experiences with incorporating peer assessment for learning in their courses, taking into
account the perspectives of the students involved.
REFERENCES

Basturk, R. (2008). Applying the many-facet Rasch model to evaluate PowerPoint


presentation performance in higher education. Assessment & Evaluation in Higher
Education, 2008, First Article, 1-14.

Black, P., Harrison, C., Lee, C., Marshall, B. & Wiliam, D. (2003). Assessment for
Learning: Putting it into practice. New York: Open University Press.

Black, P., & Wiliam, D. (2006). Developing a theory of formative assessment. In J.

Gardner (Ed.), Assessment and Learning (pp. 9-26). London: Sage Publications. Bloxham,
S. & West, A. (2004). Understanding the rules of the game: marking peer assessment
as a medium for developing students’ conceptions of assessment. Assessment &
Evaluation in Higher Education, 29(60), 721-723.

Brookhart, S. (2003). Developing measurement theory for classroom assessment purposes


and uses. Language assessment: Principles and Classroom Practices. New York:
Pearson Education.

Falchikov, N. (2003). Involving students in assessment. Psychology Learning and


Teaching, 3(2), 102-108.

Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: a meta-
analysis. Review of Educational Research, 70, 287-322.

Fry, S.A. (1990). Implementation and evaluation of peer marking in higher education.

Assessment & Evaluation in Higher Education, 15, 177-189.

Gardner, J. (Ed.) (2006). Assessment and learning. London: Sage Publications.

Gay, L. R. & Airasian, P. (2003). Educational research: Competencies for analysis and
application (7th Edition). New Jersey: Prentice Hall.

Gibbs, G. (1999). Using assessment strategically to change the way students learn. In S.
Brown & A. Glasner (Eds.) Assessment matters in higher education: Choosing and
using diverse approaches (pp. 41-53). Buckingham & Philadelphia: SRHE & Open
University Press

Harlen, W. (2006). On the relationship between assessment for formative and summative
purposes. In J. Gardner (Ed.), Assessment and learning (pp. 61-81). London: Sage
Asian EFL Journal - Professional Teaching Articles. Vol. 33. Jamuary 2009
Publications.

Hanrahan, S. & Issacs, G. (2001). Assessing self-and peer-assessment: the students’ views.
Higher Education Research and Development, 20(1), 53-70.

James, M. & Pedder, D. (2006). Professional learning as a condition for assessment for
learning. In J. Gardner (Ed.), Assessment and learning (pp. 27-44). London: Sage
Publications.

Kennedy, K., Kin Sang, J., Wai-ming, F. & Kwan Fok, P. (2006). Assessment for
productive learning: forms of assessment and their potential for enhancing learning.
Paper presented at the 32nd Annual Conference of the International Association for
Educational Assessment, Singapore, 2006.

Langan, M. & 10 Associates (2005). Peer assessment of oral presentations: effects of


student gender, university affiliation and participation in the development of
assessment criteria. Assessment & Evaluation in Higher Education, 30(1), 21-34.

Liu, N. F. & Carless, D. (2006). Peer feedback: the learning element of peer
assessment. Teaching in Higher Education, 11(3), 279-290.

Magin, D. & Helmore, P. (2001). Peer and teacher assessments of oral presentation skills:
how reliable are they? Studies in Higher Education, 26(3), 288-297.

McLaughlin, P. & Simpson, N. (2004). Peer assessment in first year university: how the
students feel. Studies in Educational Evaluation, 30, 135-149.

McTighe, J. & O’Connor, K. (2005). Seven practices for effective teaching. Educational
Leadership, 63(3), 10-17.

Mowl, G. & Pain, R. (1995). Using self and peer assessment to improve students' essay
writing: a case study from Geography. Innovations in Education and Training
International, 32, 324-335.

Nicol, D.J. & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated


learning: A model and seven principles of good feedback practice. Studies in Higher
Education, 31(2), 199-216.

Nigel, K. & Pope, N. (2005). The impact of stress in self- and peer assessment. Assessment
& Evaluation in Higher Education, 30(1), 51-63.
Nilson, L. (2003). Improving student feedback. College Teaching, 51(1), 34-39.

Sadler, D.R. (1998). Formative assessment: Revisiting the territory. Assessment in


Education, 5(1), 77-84.

Stefani, L. (1998). Assessment in partnership with learners. Assessment & Evaluation in


Higher Education, 23(4), 339-350.

Stobart, G. (2006). The validity of formative assessment. In J. Gardner (Ed.), Assessment


and learning (pp. 133-146). London: Sage Publications.

Struyven, K., Dochy, F. & Janssens, S. (2005). Students’ perceptions about evaluation and
assessment in higher education: a review. Assessment & Evaluation in Higher
Education, 30(4), 325-341.

Stiggins, R. (1987). Design and development of performance assessment. Educational


Measurement: Issues and Practice, 6(3), 33-42.

Stiggins, R. (2007). Conquering the formative assessment frontier. In J. McMillan (Ed.),


Formative classroom assessment: Theory into practice. New York: Teachers College
Press, Columbia University.

Stiggins, R. (2008). Assessment manifesto. Portland, OR: Assessment Training
Institute.

Topping, K. (1998). Peer assessment between students in colleges and universities.
Review of Educational Research, 68(3), 249-276.

Vu, T. T. & Dall'Alba, G. (2007). Students' experience of peer assessment in a
professional course. Assessment & Evaluation in Higher Education, 32(5), 541-556.

Weaver, W. & Cottrell, H.W. (1986). Peer evaluation: A case study. Innovative Higher
Education, 11, 25-39.

Wen, M. & Tsai, C. (2006). University students’ perceptions of and attitudes toward
(online) peer assessment. Higher Education, 51, 27-44.

Yamashiro, A. & Johnson, J. (1997). Public speaking in EFL: elements for course
design. The Language Teacher, 21(4).
APPENDICES

Appendix 1: Presentation peer rating sheet (based on Yamashiro & Johnson, 1997)

Public Speaking Class Peer Rating Sheet

Speaker's Name: Presentation topic:

Score scale: 5 (very good) 4 (good) 3 (average) 2 (weak) 1 (poor)

Circle a number for each category, and then consider the numbers you chose to decide an
overall score for the presentation.

1. Voice Control

1.1. Volume (loudness/softness) 5 4 3 2 1

1.2. Speed (speech rate; fast/slow) 5 4 3 2 1

1.3. Tone (intonation patterns, pauses) 5 4 3 2 1

1.4. Pronunciation (clear articulation) 5 4 3 2 1

2. Body Language

2.1. Posture (upright, relaxed) 5 4 3 2 1

2.2. Eye contact 5 4 3 2 1

2.3. Gestures (appropriate, not distracting) 5 4 3 2 1

3. Contents of Presentation

3.1. Introduction (captures attention, outlines 5 4 3 2 1


main points)

3.2. Body (clear main ideas, smooth 5 4 3 2 1


transitions)

3.3. Conclusion (summarizes main points, 5 4 3 2 1


provides closure)

4. Effectiveness

4.1. Topic selection (engaging for audience) 5 4 3 2 1

4.2. Language proficiency (clear, 5 4 3 2 1


grammatically correct)

4.3. Vocabulary (precise and appropriate) 5 4 3 2 1


4.4. Objective (informative, educates audience 5 4 3 2 1
on topic)

5. Visuals

Effective utilization of slides to enhance 5 4 3 2 1


presentation

Overall Score 5 4 3 2 1

Comments (optional, in English):
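
The rating sheet above yields a 1-5 overall score from each rater, and PA counted for
30% of each student's final course grade. As a purely illustrative sketch (not drawn
from the study's own materials; the function names, the averaging rule, and the 0-100
teacher-score scale are all assumptions), one way that weighting could be computed is:

```python
# Illustrative sketch only: combining peer-assessment (PA) overall scores
# (1-5 per rater) into a final course grade with PA weighted at 30%.
# Function names and the 0-100 teacher-score scale are assumptions,
# not details taken from the study.

def pa_component(peer_overall_scores):
    """Average the 1-5 overall scores from all peer raters and
    convert the result to a 0-100 scale (5 -> 100)."""
    average = sum(peer_overall_scores) / len(peer_overall_scores)
    return average / 5 * 100

def final_grade(teacher_score, peer_overall_scores, pa_weight=0.30):
    """Weighted combination of a 0-100 teacher score and the PA
    component, with PA counting for 30% by default."""
    return (1 - pa_weight) * teacher_score + pa_weight * pa_component(peer_overall_scores)

# Example: teacher score 80, four peers giving overall scores 5, 4, 4, 5.
print(round(final_grade(80, [5, 4, 4, 5]), 1))
```

The actual aggregation rule used in the course (e.g. whether extreme peer scores were
trimmed) is not specified in the rating sheet, so this sketch assumes a simple mean.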


Appendix 2: Student survey: Peer assessment of presentations

(Note: A sample copy of the peer assessment (PA) rating sheet was provided to students
for reference while responding to this survey.)

During the Public Speaking course, in addition to preparing and delivering your own
presentations, you were also asked to evaluate the presentations of your peers. I am
seeking your perspective on this peer assessment (PA) process. Please review the sample
peer-rating sheet once more and consider the following statements. Respond to each
statement by selecting the number that best reflects your opinion. Thank you for your
input.

Choose one of the following numbers and write it after each statement:

1 = agree 2 = tend to agree 3 = tend to disagree 4 = disagree (Note: for item number 11
below, please circle the letter.)

Part 1: Being a rater/being rated by my peers

1. Assessment items on the sheet (e.g. pace, language use) were easy to understand.
2. It was difficult to decide the overall score (5, 4, 3, 2, 1) for each presenter.
3. Relationships with presenters (friendships, etc.) may have influenced the overall
scores and comments I gave.
4. I was comfortable being a judge of my peers’ presentations and giving a score.
5. I was comfortable having my presentations judged and scored by my peers.
6. The overall scores my peers gave me were fair and reasonable.
7. Assessing other students’ presentations helped me plan and deliver my own
presentations.
8. PA scores and comments from my first presentation helped me prepare my second
presentation.

Part 2: The peer assessment process

9. Students should not be involved with assessing their peers; assessment should be
the sole responsibility of the teacher.

10. Making PA scores a part of student final grades for the course is a good idea.
11. Making PA worth 30% of the final course grade is: a) too high b) a fair amount c) too
low
12. I recommend using PA in future Public Speaking classes.

Part 3: Do you have any other comments on the peer assessment process? (in English)
