13th International Conference on Education and New Learning Technologies. 5-6 July, 2021.

Pages: 8327-8331 / ISBN: 978-84-09-31267-2 / ISSN: 2340-1117 / doi: 10.21125/edulearn.2021.1679
Indexed in Web of Science

FORMATIVE LEARNING ASSESSMENT IN THE CONTEXT OF THE COVID-19 PANDEMIC: A RESEARCH FROM THE PERCEPTIONS OF UNIVERSITY STUDENTS IN PERU

Iván Montes-Iturrizaga1, Eduardo Franco-Chalco2


1 Universidad María Auxiliadora (PERÚ)
2 Pontificia Universidad Católica del Perú (PERÚ)

Abstract
This study explores the perceptions and expectations regarding formative assessment of 126 university students (105 women and 21 men) at the end of their first semester of virtual education. These students belonged to the faculties of health sciences and administration of a private university (social in character because of its low fees) located in a district with high poverty rates in the city of Lima (Peru). It is worth mentioning that this university regularly provides face-to-face education and, because of the Covid-19 pandemic, had to switch exceptionally to virtual teaching. Formative assessment is a permanent process concerned with identifying strengths and weaknesses in order to provide feedback on teaching and learning, in the pursuit of having all students learn. Thus, given the need to make the best decisions to favor student learning, teachers should ground their evaluative practices in performance tests, in opposition to multiple-choice exams. A virtual, anonymous questionnaire was developed and given to the students who participated in the study; it was answered voluntarily under informed consent. The results show that students mostly had to answer answer-selection (multiple-choice) tests (85.7%), followed by developmental or essay exams (69.8%). Students also reported answering case-study exams, though to a lesser extent (46.8%). In addition, 65.9% of the subjects stated that the tests they took were a mixture of memorization and analysis ("thinking"). In this research, we found a paradox concerning the tests: students prefer multiple-choice exams (57.1%) and, at the same time, consider that these are not the most appropriate for their training, favoring essay exams instead (65.1%). Finally, the analyses indicated low to medium levels of feedback from teachers. A final reflection is offered on the need to safeguard the practice of formative assessment, which the abuse of multiple-choice exams could threaten in these pandemic times.
Keywords: Formative assessment, feedback, developmental test, multiple-choice test.

1 INTRODUCTION
In the current times of forced virtual education in universities due to the Covid-19 pandemic, there is a series of challenges to maintaining the progress made in formative evaluation under the influence of the competency-based approach [1], [2]. We will likely have to deploy actions to give pedagogical sense to virtual platforms, given that there will probably be an abuse of multiple-choice tests with automated grading. It should be noted that this type of test - due to its low cognitive demand and lack of external validity - would not enter the evaluative repertoire of a training perspective interested in students developing relevant competencies or performances [3], [4], [5]. Likewise, dishonest acts and copying are latent risks if much more comprehensive measures are not taken in formative evaluation [6].
Specifically, we conceive of formative evaluation as a reflective and critical process [7] of ongoing assessment of strengths and weaknesses, one that brings all students to a zone of achievement or mastery through constant feedback [5]. Thus, without feedback, whether interactive or post-active, there is no way of recognizing this deployment as such. Therefore, evaluation - from this orientation - fulfills teaching purposes rather than the verification purpose of assigning grades. This is logical, since it would be practically impossible to lead students to achieve a competency or capacity without timely, meaningful, and respectful feedback [5], [7], [8], [9].

Therefore, higher education teachers are critical agents, since the power to energize their students' work towards relevant achievements ultimately depends on them. This implies adopting a pedagogical-didactic approach committed to meaningful tasks and to creating evaluation conditions very close to the authentic demands of life or of the professional field [5], [10]. However, despite their importance, these aspects are not frequent in the country's evaluation practices [11].
In any case, and more recently, the influences of formative assessment have been insistently projected onto higher education in Latin America, from the classical positions of Scriven [12], Stake [13], and Popham [9], [14], and more recently from Rosales' reflective approach [7]. In this way, the benefits of formative assessment through performance tests (oral tests, extended responses, case resolution, or essay writing) have been highlighted, at least in the conceptual discourse and with little reflection. Also, and with less success, teachers would have been trained to practice this type of assessment (applied dimension) with their students. In any case, at least in Peruvian universities, the rise of the competency-based model has renewed interest in the practice of assessment, away from its usual stereotypical connotations related to the simple verification of learning.
We must also consider that evaluation is inextricably linked to teachers' thoughts or beliefs about a given subject or subject matter [3], [5]. In this context, one can appreciate a series of inconsistencies present in our classrooms, such as public speaking courses where written exams are taken, research subjects that ask students to define what an objective is, or written expression subjects where grammar exams are applied [5]. Here, we see that there would be no realism, contextual validity, or social relevance in the evaluation. This remains an area of many research challenges as applied to higher education [8]. However, it stands out as important to study teachers' thoughts, implicit theories, and beliefs regarding teaching and assessment in order to develop essential change strategies that will translate into better practices [5], [3].
It should also be recognized that a significant number of higher education institutions make pedagogical decisions based on excessive pragmatism, which in many cases leads to overloading professors with classrooms holding more students than allowed. This situation is aggravated when these teachers do not have the support of one or more teaching assistants, and it is more frequent in private universities and institutes of higher education which, in the interest of increasing their profits, put their teachers' mental health at risk. Thus, faced with this problem, many teachers would (also) opt for multiple-choice tests and the mere verification of learning because it takes less time. Furthermore, other teachers would opt for multiple-choice tests because they erroneously consider them objective [5].
Thus, in this framework, it is likely that intentions to develop a formative assessment addressing each student's performance are blurred by the pressure placed on the architects of teaching at the higher level: the teachers. Let us remember that formative assessment with the help of relevant (performance) tests requires much more time than answer-selection tests.

1.1 Virtual education in times of pandemic
Peru is a country characterized by widespread educational segregation, where the most impoverished sectors, rural areas, and speakers of languages other than Spanish (native languages) show the worst performance in national learning tests [15], [16]. These problems of basic education are projected and amplified in higher education, both university and technological. In this way, both excellent and insufficient teaching practices are carried over into virtual classrooms. Concerning the evaluation of learning in virtual environments in higher education, we could have at least two types of teachers: (1) those who strive to maintain their formative disposition, constant feedback, and the application of performance tests with high cognitive demand; and (2) those who focus on checking (measuring) how much was learned through multiple-choice tests (mostly memory-based evaluation) that can now be applied more efficiently thanks to the automation of teaching platforms [17].

Therefore, it is possible that virtual education, forced by the Covid-19 pandemic, is also widening the gap among higher education professors, since what was not achieved in the physical classroom will not be obtained in virtual classrooms. Moreover, if we add to this the excessive pressure that many professors are under to teach more students per classroom, the progress made in formative assessment in higher education may be experiencing a setback. In any case, it seems that the initial enthusiasm for the use of technological media in the context of this pandemic is fading, in part because of the phenomena of teacher and student discomfort recently investigated in different contexts [18].
In this scenario, the present study analyses student perceptions (from a private university in Lima)
regarding the evaluation practiced by their professors during the first semester of classes.

2 METHODOLOGY
A questionnaire was developed and applied to 126 students - from health and management careers - at a low-cost private university located in a marginal urban area of Lima. These students entered the university and began their studies in the context of the current Covid-19 pandemic, through the Moodle platform and Zoom. They responded to the instrument (through Google Forms) at the end of their first semester of classes, in July 2020. This university regularly offers face-to-face education and, because of the current health crisis, is offering distance education.
The questionnaire provides insight into perceptions of the types of tests applied and their cognitive demands. It also reveals whether students perceive a willingness from their teachers to provide feedback. Finally, it explores preferences regarding the types of tests.

3 RESULTS
Item 1 asked about the way in which students were evaluated in the first semester (how were you evaluated?). We put the word "tests" in parentheses because evaluation is still associated with instruments. Of the 126 students, 108 acknowledged that they were evaluated mostly with multiple-choice tests. In second place, not far behind, 88 students reported essay-response tests. Regarding oral tests, 64 reported having answered them. Finally, and to a lesser extent, 59 students indicated that they had to take case-study exams (Table 1).

Table 1. How were the tests you took in the first semester? (Several choices were accepted)

                           n      %
  Multiple choice exam   108   85.7
  Essay exam              88   69.8
  Oral exam               64   50.8
  Case study exam         59   46.8
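The percentages in Table 1 follow directly from the raw counts over the 126 respondents. A minimal sketch of that computation (the counts come from Table 1; the variable names are ours, not from the study's analysis):

```python
# Counts of students reporting each test type (126 respondents;
# multiple choices were accepted, so counts need not sum to 126).
N_RESPONDENTS = 126
counts = {
    "Multiple choice exam": 108,
    "Essay exam": 88,
    "Oral exam": 64,
    "Case study exam": 59,
}

# Percentage of respondents reporting each test type, rounded to one decimal.
percentages = {test: round(100 * n / N_RESPONDENTS, 1) for test, n in counts.items()}
# percentages == {'Multiple choice exam': 85.7, 'Essay exam': 69.8,
#                 'Oral exam': 50.8, 'Case study exam': 46.8}
```

The same calculation reproduces the percentages reported in Tables 2 and 3 and in item 5.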

It is worth mentioning that this university has been working under the competency-based model since 2015, thanks to international consultancies. However, we have been able to register only weak internal policies aimed at accompanying its professors in the field of formative evaluation. Therefore, it is likely that the trend recorded in virtual teaching is very similar to what used to occur in face-to-face classes.
There was also a question, associated with the previous one, that referred to the nature of the multiple-choice tests students had to answer in the first semester (item 2). It was found that only 1.6% considered these tests exclusively memory-based. It is relevant to highlight that 41 students (32.5%) considered that these tests made them think and reflect. However, 83 subjects (65.9%) responded that these tests were a mix between memorizing and thinking.
Item 3 (Table 2) asked students what type of test they would like to continue answering in college. In this case, we gave them two options: multiple-choice exams and essay exams. We found that 57.1% leaned towards the former and 42.9% towards the latter.

Table 2. What type of test or exam would you like to take in the next semesters in college?

                           n      %
  Multiple choice exam    72   57.1
  Essay exam              54   42.9

In item 4, we asked students what type of tests they considered most important for their professional
development. We found that 44 students (34.9%) answered multiple choice exams and 82 (65.1%)
considered essay exams (Table 3).

Table 3. Which of these tests or exams is most important for your professional training?

                           n      %
  Multiple choice exam    44   34.9
  Essay exam              82   65.1

Finally, item 5 asked directly about the number of teachers who were committed to their students doing well in their courses. In this case, we assumed that this question was a reliable indicator of formative evaluation, understood as a constant attitude and action to procure student achievement (thanks to feedback). It is important to mention that in the first semester all students took 6 subjects, so the response options ranged from 1 to 6. We found that 55 students (43.7%) indicated that all 6 of their professors were concerned about their learning. Also, 27 (21.4%) considered that 5 of their teachers had this helpful attitude.

4 CONCLUSIONS
In these times of the Covid-19 pandemic, regression processes are likely taking place, given the precariousness of teaching work, the massification of classrooms, and the short time available for adequate feedback. Likewise, this scenario would be worsened by the automated possibilities that virtual teaching platforms offer for creating multiple-choice tests, where the problem lies not in their use but in their abuse [17], [11], [5]. In this sense, and in allusion to the above, it is considered important to develop institutional policies (and training) to explain the uses of technological means for practicing formative assessment in virtual teaching, through performance tests such as oral tests, essay tests, or projects.
Specifically, this study found that students were exposed to multiple-choice tests to a greater extent. However, it was noted that these tests were largely not memory-based; this could have several interpretations. Further qualitative research efforts will be necessary to understand what young students understand by memory-based versus thinking-reasoning exams.

We also found an interesting paradox: students would prefer to continue answering multiple-choice tests in the future, yet they recognized that essay exams were the most important for their professional training. This finding is encouraging, because it suggests a majority awareness that these exams (essay and oral) would be the most relevant. It also outlines the need for complementary studies to understand in depth the rationales behind the students' position.

REFERENCES

[1] D. B. Wayne, M. Green, and E. G. Neilson, “Medical education in the time of COVID-19,” Sci.
Adv., vol. 6, no. 31, 2020, doi: 10.1126/sciadv.abc7110.
[2] F. J. García-Peñalvo, A. Corell, V. Abella-García, and M. Grande, “Online assessment in
higher education in the time of COVID-19,” Educ. Knowl. Soc., vol. 21, pp. 1–26, 2020, doi:
10.14201/eks.23013.
[3] R. Jáuregui, L. Carrasco, and I. Montes, "Evaluando, evaluando: ¿Qué piensa y qué hace el docente en el aula?," Informe Final de Investigación, Universidad Católica de Santa María, Perú, 2003. Available: http://cies.org.pe/files/active/0
[4] A. Anastasi and S. Urbina, Tests psicológicos, 7th ed. México: Prentice Hall, 1998.
[5] I. Montes-Iturrizaga, Evaluación Educativa: reflexiones para el debate. Madrid: UDL Editores,
2020.
[6] A. Friedman, I. Blau, and Y. Eshet-Alkalai, “Cheating and Feeling Honest: Committing and
Punishing Analog versus Digital Academic Dishonesty Behaviors in Higher Education,”
Interdiscip. J. e-Skills Lifelong Learn., vol. 12, pp. 193–205, 2016, doi: 10.28945/3629.
[7] C. Rosales, Evaluar es reflexionar sobre la enseñanza, Tercera. Madrid: Narcea Ediciones,
2014.
[8] E. Garza, “La Evaluacion Educativa,” Rev. Mex. Investig. Educ., vol. 9, no. 23, pp. 807–816,
2004, [Online]. Available: http://www.redalyc.org/articulo.oa?id=14002302.
[9] W. J. Popham, Evaluación transformativa: El poder transformador de la evaluación formativa. Madrid: Narcea Ediciones.
[10] E. W. Eisner, Estándares para las escuelas norteamericanas: ¿Ayuda u obstáculo? Programa de Promoción de la Reforma Educativa en América Latina y el Caribe (PREAL) - Grupo de Trabajo sobre Estándares y Evaluación.
[11] M. Z. Joya Rodríguez, “La evaluación formativa, una práctica eficaz en el desempeño
docente,” Rev. Sci., vol. 5, no. 16, pp. 179–193, 2020, doi: 10.29394/scientific.issn.2542-
2987.2020.5.16.9.179-193.
[12] M. Scriven, “The methodology of evaluation,” in Perspectives of curriculum evaluation AERA
Monograph Series on Curriculum núm. 1, AERA., R. Tyler, R. W., Gagne and M. M. y Scriven,
Eds. Chicago: Rand McNally, 1967.
[13] R. Stake, "The countenance of educational evaluation," Teach. Coll. Rec., vol. 68, pp. 523–540, 1967.
[14] W. J. Popham, Classroom Assessment: What Teachers Need to Know, 8th ed. Los Angeles: Pearson, 2017.
[15] E. León, El fenómeno ECE y sus efectos en las prácticas docentes, Primera Ed. Lima, 2017.
[16] C. Guadalupe, J. S. Rodríguez, and S. Vargas, Estado de la educación en el Perú. Análisis y
perspectivas de la educación básica, Primera ed. Lima: Grupo de Análisis para el Desarrollo
(GRADE), 2017.
[17] I. Montes-Iturrizaga, “La evaluación en la universidad en tiempos de la virtualidad: ¿retroceso u
oportunidad?,” Revista Signo Educativo, Lima, Dec. 2020.

[18] J. A. Román, “La educación superior en tiempos de pandemia: una visión desde dentro del
proceso formativo,” Rev. Latinoam. Estud. Educ., vol. 50, no. ESPECIAL, pp. 13–40, 2020, doi:
10.48102/rlee.2020.50.especial.95.
