
Designing Android-based e-rubrics for peer assessment

Paper presented at the International Virtual TESOL Conference 2020, 13-14 June 2020

Hilda Cahyani
Ardian Wahyu Setiawan
Nadia Hanayeen
Politeknik Negeri Malang, Indonesia
Background

• Students need to be involved in peer assessment to gain constructive feedback on their work; peer assessment has beneficial effects on students' learning.

• In translation and writing courses, students need to be guided in self-assessment so that they know themselves better and ultimately improve their skills.

• A rubric is one of the most effective instruments for assessment; it provides consistency and fairness.

• Android was favored as the assessment platform due to its customizability and affordability. We therefore developed an Android-based e-rubric for peer assessment.
Research problems

1. How is the Android-based e-rubric designed to facilitate peer assessment?

2. How is the e-rubric implemented in peer- and self-assessment activities?

3. What are the results of the survey on using the e-rubric, and what did the students suggest to improve it?
Literature review

• What is it? A rubric is an assessment tool listing criteria along with the level of quality for each criterion (Ragupathi & Lee 2020).

• Function: it offers an objective and consistent evaluation that minimizes differences in grades even when multiple raters are involved (Huba & Freed 2000).

• Contents: checklists use a simple yes/no and rating scales use a Likert-scale decision, while rubrics actually describe the performance at each criterion (Brookhart 2018).
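To make this structure concrete, here is a minimal Kotlin sketch of a rubric modelled as data: criteria, each carrying ordered quality levels that pair a descriptor with a score. All names and sample values are illustrative assumptions, not the schema of the authors' app.

```kotlin
// Illustrative data model for a rubric: each criterion carries ordered
// quality levels, and each level pairs a performance descriptor with a
// score. All names and values are hypothetical, not the authors' schema.
data class QualityLevel(val label: String, val descriptor: String, val score: Int)

data class Criterion(val name: String, val levels: List<QualityLevel>)

data class Rubric(val title: String, val criteria: List<Criterion>)

val writingRubric = Rubric(
    title = "Writing 1: paragraph draft",
    criteria = listOf(
        Criterion(
            name = "Organisation",
            levels = listOf(
                QualityLevel("Excellent", "Ideas flow logically with clear transitions", 20),
                QualityLevel("Good", "Mostly logical order with a few abrupt shifts", 15),
                QualityLevel("Needs work", "Ideas are hard to follow", 10),
            )
        )
    )
)
```

Unlike a checklist (yes/no) or a Likert rating scale, each level here stores a descriptor of the performance itself, which is exactly the distinction Brookhart (2018) draws.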
Types of rubrics (Brookhart & Nitko 2008)

• analytic vs. holistic
• generic vs. task-specific

Two remarkable results from a review of 75 studies (Jonsson & Svingby 2007):

• Rubrics can improve performance assessment, especially analytic rubrics.
• Rubrics can promote learning and/or improve instruction.
Review studies on the impact of e-rubrics

• E-rubrics: assessment tools for both formative and summative assessment (Jonsson & Svingby 2007; Reddy & Andrade 2010; Ahankari & Jadhav 2016; Otey et al. 2019).

• E-rubrics for peer assessment: formative and summative assessment (Pérez-Galán, Cebrián-Robles & Rueda-Galiano 2015; Raposo-Rivas & de la Serna 2019).

• E-rubrics for self-assessment: assessment in Art subjects (Steffens 2014).

• E-rubrics measuring soft skills: related to both peer and self-evaluation (Haro-Garcia et al. 2018).

• Digital rubrics (Campbell 2005): have benefits over paper rubrics, with the obvious advantages of digital tools, further facilitating feedback and reflection on learning. Students experienced greater interactivity and personalisation (collaboration and the processing of big data have appeared more recently with the emerging technologies).
Methods

• DBR (Design-Based Research) generates outputs derived from the integration of theories and products based on particular needs.

• Two classes from the English Department, 54 students in total, taking the second- and fourth-semester Translation and Writing 1 courses, participated in this research.

• Data were collected using questionnaires, semi-structured interviews, focus group discussions, tests, and journals.

• Interviews and focus group discussions were conducted to obtain data for the needs analysis and users' feedback, whereas an online survey was used to measure the students' satisfaction.
Result 1: 8 stages of e-rubric design

1. needs analysis
2. development of rubrics
3. expert validation
4. creating the e-rubric app
5. implementation
Result 2: e-rubric implementation

1. Training in using the rubric (one meeting)
2. Implementing peer assessment using the e-rubric (two topics)
3. Conference (students negotiating the meaning of the rubric evaluation)
4. Self-assessment (reflection) by each student
5. The results of peer and self-assessment were sent to the teacher (a data sketch follows below)
6. Final evaluation and report made by the teacher
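Steps 4-5 imply a simple data shape: per-criterion peer and self scores collected side by side, then sent to the teacher as one report per student. The following Kotlin sketch is a hypothetical illustration of that shape, assuming the kotlinx.serialization library for the JSON step; none of the names come from the authors' app.

```kotlin
import kotlinx.serialization.Serializable
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json

// Hypothetical shape of the step-4/step-5 record: per-criterion peer and
// self scores side by side, serialised as one JSON report per student.
@Serializable
data class CriterionScore(val criterion: String, val peer: Int, val self: Int)

@Serializable
data class AssessmentReport(
    val student: String,
    val assessor: String,
    val scores: List<CriterionScore>,
    val comment: String, // free-text descriptive evaluation
)

fun main() {
    val report = AssessmentReport(
        student = "Student A",
        assessor = "Student B",
        scores = listOf(
            CriterionScore("Organisation", peer = 15, self = 18),
            CriterionScore("Grammar", peer = 12, self = 12),
        ),
        comment = "Clear structure; check article use in paragraph 2.",
    )
    // Gaps between peer and self scores per criterion give the teacher
    // concrete material for the step-6 final evaluation and report.
    println(Json.encodeToString(report))
}
```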
Students' experience in peer and self-assessment

• They were confused about matching the cases they found in their partner's writing with the criteria in the rubric.

• Some got a low score from their partner even though their work was, in fact, not bad.

• It was difficult to choose the right score within a range (such as between 10 and 15; see the sketch below).
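The third difficulty suggests a small design tweak an e-rubric can make: bind each quality descriptor to an explicit score band and let the app propose a default inside it. The Kotlin sketch below is a hypothetical illustration of that idea, not a feature of the authors' app.

```kotlin
// Hypothetical helper: each quality level owns a score band, and the app
// proposes the band midpoint so assessors are not forced to invent an
// exact number (e.g. "between 10 and 15") without guidance.
data class ScoreBand(val descriptor: String, val min: Int, val max: Int) {
    fun suggestedScore(): Int = (min + max) / 2
}

val organisationBands = listOf(
    ScoreBand("Ideas are hard to follow", 0, 9),
    ScoreBand("Mostly logical order", 10, 15),
    ScoreBand("Clear, well-connected ideas", 16, 20),
)

fun main() {
    val chosen = organisationBands[1]
    // Prints: Mostly logical order: suggested score 12
    println("${chosen.descriptor}: suggested score ${chosen.suggestedScore()}")
}
```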
Results of satisfaction survey

Item  Characteristic                               Strongly disagree  Disagree  Agree  Strongly agree
1     usefulness of the app (efficiency)           -                  4%        51%    45%
2     ease of use (perspicuity)                    2%                 9%        36%    53%
3     the app's look (attractiveness)              -                  13%       53%    34%
4     interaction/engagement between the user
      and the rubric (dependability)               3%                 17%       64%    16%
5     excitement/motivation in using the app
      (stimulation)                                -                  2%        53%    38%
6     creativity of the app (novelty)              -                  6%        34%    60%
Feedback for project improvement (✔ = adopted, ✗ = not adopted)

1. Adding a profile photo feature ✗
2. Giving an option to view examples for each criterion (correct or wrong examples) ✗
3. Adding a text box where users can give some descriptive evaluation ✔
4. Adding an "about" feature (a brief description of the research) ✔
5. Adding a confirm button (for saving the file) ✔
6. A simpler scoring range for each criterion, using one or two words or one phrase ✗
7. Adding a text-upload feature enabling the user to see both the rubric and the text simultaneously on one screen ✗
8. Placing the "home" or "history" navigation button in the position most reachable by the thumb ✗
9. Giving two language options, English and Bahasa Indonesia, to help those who do not really understand the English rubric ✗
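The adopted items (3 and 5) map naturally onto a small piece of UI: a free-text field for the descriptive evaluation plus a confirm-and-save button. Below is a hedged Jetpack Compose sketch of how that could look; it is illustrative only, not the authors' actual implementation.

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material3.Button
import androidx.compose.material3.OutlinedTextField
import androidx.compose.material3.Text
import androidx.compose.runtime.*

// Hypothetical Compose sketch of the two adopted features: a free-text
// box for descriptive evaluation (item 3) and a confirm button that
// triggers saving (item 5). Not the authors' code.
@Composable
fun EvaluationFooter(onSave: (String) -> Unit) {
    var comment by remember { mutableStateOf("") }
    Column {
        OutlinedTextField(
            value = comment,
            onValueChange = { comment = it },
            label = { Text("Descriptive evaluation") }
        )
        Button(onClick = { onSave(comment) }) { // confirm before saving the file
            Text("Confirm and save")
        }
    }
}
```

Tying the save action to an explicit confirm click addresses the accidental-loss concern behind feedback item 5.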
Discussion

• The design of the app is an iterative process (Huba & Freed 2000), requiring long stages and collaborative work with an IT expert.

• The design takes into account theories on the following aspects: peer assessment; the design and use of rubrics for assessment; students' learning needs (including their learning objectives); and the practicality and user-friendliness of Android apps.

• The training session on using the rubric was not easy to run online via video conference, and the online mode was not the only thing that made the training difficult; the limited time also played a part.

• The e-rubric has facilitated peer and self-assessment (Pérez-Galán, Cebrián-Robles & Rueda-Galiano 2015; Raposo-Rivas & de la Serna 2019; Steffens 2014).

• Students did not encounter many technical problems, but they felt overwhelmed working with multiple devices. This may be related to the use of e-portfolios to replace traditional learning methodologies (Gámiz-Sánchez 2017).
Conclusion

• The developed e-rubric app is useful for collaborative work in determining each other's strengths and weaknesses.

• The Android platform offers a handy and engaging experience for completing the assessment.

• Teachers need to pay attention to the heterogeneity of the students in order to facilitate equal contribution among them.
Implications

• Revision of rubrics is important for improving the criteria and the quality of students' work, and for helping students reflect on the difficulty of assessing and scoring their partners' work.

• This e-rubric and its articulated feedback can be shared with colleagues, leading to rubric co-creation and development through collaboration. This may also lead to the development of e-rubrics across the English department.

• Teachers tend to associate a set of rubrics with one assignment, but it is certainly possible to learn a great deal about one's teaching methods and student learning across multiple assignments and multiple courses.
Recommendation

• Future studies need to find a method or app that helps students do peer and self-assessment for receptive skills, as our app serves productive skills. Receptive skills are tricky to assess formatively since students do not produce tangible outcomes, so finding a method or app to support peer assessment of them will be challenging.

• This app needs to be developed in such a way that all platforms can use it (Android, iOS, web-based); see the sketch after this list.

• More subjects need to utilize this Android-based e-rubric.
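One plausible route to the Android/iOS/web goal is Kotlin Multiplatform, which keeps the rubric logic in common code and isolates platform specifics behind expect/actual declarations. The sketch below is a hypothetical example of that split, not a plan taken from the paper.

```kotlin
// Hypothetical Kotlin Multiplatform split: shared scoring logic in
// commonMain, platform-specific report delivery behind expect/actual.

// commonMain: every target shares the same rubric arithmetic.
expect fun deliverReport(json: String)

fun totalScore(criterionScores: List<Int>): Int = criterionScores.sum()

// androidMain (iOS and web targets would supply their own actuals).
actual fun deliverReport(json: String) {
    println("POST $json") // e.g. hand off to an HTTP client here
}
```

Keeping the rubric model and scoring in common code would mean the Android app, an iOS app, and a web front end all assess against exactly the same criteria.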
References

• Ahankari, S. S., & Jadhav, A. A. (2016). e-Rubrics: A formative as well as summative assessment tool for assessment of course and program outcomes. In 2016 IEEE Eighth International Conference on Technology for Education (T4E) (pp. 246-247). IEEE.
• Brookhart, S. M. (2018). Appropriate criteria: Key to effective rubrics. Frontiers in Education, 3(22). https://doi.org/10.3389/feduc.2018.00022
• Campbell, A. (2005). Application of ICT and rubrics to the assessment process where professional judgement is involved: Features of an e-marking tool. Assessment & Evaluation in Higher Education, 30(5), 529-537. https://doi.org/10.1080/02602930500187055
• Gámiz-Sánchez, V. M. (2017). ICT-based active methodologies. Procedia - Social and Behavioral Sciences, 237, 606-612.
• Haro-Garcia, N. D., Comas-Lopez, M., Hincz, K. P., Mazalu, M., & Sacha, G. M. (2018). Soft skills assessment in Art and globalization. In Proceedings of the Sixth International Conference on Technological Ecosystems for Enhancing Multiculturality (pp. 199-204).
• Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting the focus from teaching to learning. Allyn & Bacon.
• Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130-144.
• Otey, J., Agost, M. J., Contero, M., & Camba, J. D. (2019). Teachers as designers of formative e-rubrics: A case study on the introduction and validation of go/no-go criteria. Universal Access in the Information Society, 18(3), 675-688.
• Ragupathi, K., & Lee, A. (2020). Beyond fairness and consistency in grading: The role of rubrics in higher education. In Diversity and Inclusion in Global Higher Education (pp. 73-95). Palgrave Macmillan, Singapore.
• Raposo-Rivas, M., & de la Serna, M. C. (2019). Technology to improve the assessment of learning. Digital Education Review, (35).
• Reddy, Y., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448. http://dx.doi.org/10.1080/02602930902862859
• Sadler, D. R. (2009). Transforming holistic assessment and grading into a vehicle for complex learning. In G. Joughin (Ed.), Assessment, Learning and Judgement in Higher Education (pp. 1-19). Dordrecht: Springer.
• Steffens, K. (2014). E-rubrics to facilitate self-regulated learning. REDU. Revista de Docencia Universitaria, 12(1), 11-12.
Thank you
#welovecollaboration

Hilda Cahyani (hilda.cahyani@polinema.ac.id)
Ardian Setiawan (ardian@polinema.ac.id)
Nadia Hanayeen (nhanayeen@polinema.ac.id)
