


Published by the IEEE

Bull, S., Johnson, M. D., Demmans Epp, C., Masci, D., Alotaibi, M., & Girard, S. (2014). Formative assessment and meaningful
learning analytics – An independent open learner model solution. In IEEE International Conference on Advanced Learning
Technologies (ICALT) (pp. 327–329). Athens, Greece: IEEE. http://doi.org/10.1109/ICALT.2014.100

Formative Assessment and Meaningful Learning Analytics

Susan Bull, Matthew D. Johnson, Carrie Demmans Epp, Drew Masci, Mohammad Alotaibi and Sylvie Girard
Electronic, Electrical and Computer Engineering, University of Birmingham, United Kingdom
s.bull@bham.ac.uk

Abstract—We introduce an independent open learner model for formative assessment and learning analytics based on developments in technology use in learning, while also maintaining more traditional numerical and qualitative feedback options.

I. INTRODUCTION

With the development of new technologies there have been many changes in the ways that learning and assessment take place. Nevertheless, there is still room for approaches that can readily encompass the variety of new technologies, as well as accommodating more traditional formative assessment and feedback. Individual ‘learner models’ are inferred and dynamically updated during an interaction. These enable adaptive teaching systems to tailor the interaction to the individual [10]. Much as visualisation in learning analytics displays learning data to users (e.g. [8]), ‘open learner models’ externalise data from learner models [3], and so can be regarded as a specific application of visual learning analytics. Furthermore, as it is the learner model that is visualised, the data typically relates directly to understanding, skills, competencies, etc. While this can be achieved for teachers using learning analytics data ([4]), learner models can very naturally make their inferences about learners available to students and teachers, since the learner model is an inherent and often core component of the adaptive system’s design.

Independent open learner models (IOLM) are not part of adaptive teaching systems: the independence of the OLM indicates that it is for use on its own. The aim is to focus on metacognitive skills that help in the learning process, allowing learners to assume greater control of and responsibility for decisions in their learning [3]. An IOLM may be: part of a system that has tasks or questions ([2]); connected to an e-portfolio ([6]); or able to take data from a range of sources ([7]). We here focus on an IOLM that integrates data from a variety of sources (activities, applications, etc.), to help learners and teachers understand students’ learning, and to offer opportunities for formative assessment combined with learning analytics, as argued to be beneficial [1] – but with a greater focus on learning or current competencies than on performance. It also uses self and peer assessment as suggested, for example, in online learning [9]. That learners have to themselves use the visualised data, and identify what they need to do to improve their learning, is part of the learning process.

II. THE NEXT-TELL IOLM

Reports on the potential of (I)OLMs to be at the centre of contexts with learner data available from a range of sources are increasing [5,6,7]. This means that starting points for reflection or formative assessment can take account of a fuller range of the activities a student might undertake that contribute to the learner model. The learner model in the Next-TELL IOLM is based on competency structures. Various activities, defined by the teacher, allow data to be collected as students interact: these may be online activities using the IOLM API, or traditional feedback from self, peer or teacher assessments. Fig 1 shows four of the Next-TELL IOLM visualisations to illustrate how it can be used in formative assessment. Skill meters provide a clear, simple indication of the strength of competencies (and sub-competencies). Especially for small competency sets, these can easily show learners and teachers the extent of the various competencies measured by the various activities and/or technologies. Word clouds are useful when there is a large number of competencies. The word cloud is not ordered, but for quick viewing it can help users identify strengths (left) and difficulties (right). It can be a useful way to identify where to focus effort or, if a student releases their model to others, for peers to quickly note who might be able to help with reference to a specific topic or competency. The competency network shows the competency structure, with the shade and size of nodes indicating the relative strength of competencies. This also allows users to easily see areas of the structure that are under-represented in their skill-set, as well as how their strengths are distributed. The radar plot enables comparison of data sources in the model, for example, comparing self-assessment to data perhaps perceived as more ‘expert’, such as teacher assessments or automated tools. Users can also view the influence of the learner model algorithm (Fig 2). This example shows the contribution of evidence from different data sources, together with recency weighting, from manual peer and teacher assessments and automated data from an online tool (OLMlets [2]).

The Next-TELL IOLM can support formative assessment in several ways. The first is by viewing the visualisations. As indicated above, these can be used for different purposes: quick identification of areas of strength or weakness; more detailed inspection of skills with reference to one or more competency frameworks used in the IOLM; and comparison of learner model data from different sources. Each can serve as a prompt for learners to assess their needs and determine suitable follow-on activities (online activities, searches, reading, collaborating, asking the teacher, self-assessment, etc.).
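The paper states that the model combines evidence from several sources with recency weighting (Fig 2), but does not publish the algorithm itself. The Python sketch below shows one plausible reading of how a skill-meter value could be computed under that description; the Evidence fields, per-source weights and exponential half-life are all assumptions for illustration, not the Next-TELL method.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Evidence:
    competency: str       # e.g. "Cognitive style"
    source: str           # "self", "peer", "teacher" or "OLMlets"
    value: float          # assessment normalised to [0, 1]
    timestamp: datetime

# Assumed per-source weights: teacher data treated as more 'expert'.
SOURCE_WEIGHT = {"self": 0.5, "peer": 0.5, "teacher": 1.0, "OLMlets": 0.8}

def competency_strength(evidence, competency, now=None, half_life_days=14.0):
    """Recency-weighted mean of the evidence for one competency:
    each piece of evidence loses half its influence every
    half_life_days, so newer assessments dominate older ones."""
    now = now or datetime.now()
    num = den = 0.0
    for e in evidence:
        if e.competency != competency:
            continue
        age_days = (now - e.timestamp).total_seconds() / 86400
        w = SOURCE_WEIGHT.get(e.source, 0.5) * 0.5 ** (age_days / half_life_days)
        num += w * e.value
        den += w
    return num / den if den else None   # None: no evidence yet (empty skill meter)
```

A skill meter would then simply display the returned value as a proportion of the bar, and the same per-source totals could feed the radar plot comparison of data sources.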
Previously, a simple IOLM has been found to prompt spontaneous face-to-face discussion about the learner models when they could be optionally released to peers [2]. This can also be the case with the Next-TELL IOLM but, following the earlier finding, it also includes a discussion component where students can comment on their learner models or request assistance when they are not in the same location. For example, Fig 3 shows an excerpt from discussion prompted by the instructor, based on the learner models. This excerpt demonstrates that students can appropriately discuss their understanding, and clarify their understanding to themselves and each other. If considered useful, the instructor can ‘add evidence’ to the IOLM in the form of a numerical value, as well as additional text feedback as evidence. This can further focus students back on the discussion for elaboration, but can also be a way to acknowledge improved understanding in the IOLM itself, which could be a motivating factor. Students can perform self and peer assessments in a similar manner. Students and teachers can view this evidence by clicking the text icon (by the skill meters in Fig 1).

If the IOLM is used throughout a course, there can be many formative assessment opportunities. Any activities that use the IOLM API can contribute to the model calculation and appear in the visualisations. In this case, students used another IOLM (OLMlets [2]), which provided data to their Next-TELL IOLM. This was combined with data from teacher assessments of individual and group texts, and self and peer assessments. In addition, written feedback was provided. Performing self-assessment and comparing against ‘expert’ teacher assessments can be useful for gauging the accuracy of one’s self-assessment. Peer assessments can also help the giver of feedback, in the process of appraising the work of others. Collecting this data across activities and technologies also allows continuing formative assessment to take a large proportion of a learner’s activities into account in prompting reflection through learner model visualisations.
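The paper refers to an IOLM API through which online activities contribute data, but does not document it. Purely to illustrate the kind of call an activity (or an instructor tool adding a numerical value plus text feedback) might make, the sketch below assumes a REST-style endpoint (/evidence), JSON field names and bearer-token authentication, all of which are invented here rather than taken from the Next-TELL system.

```python
import requests

def add_evidence(base_url, token, student_id, competency, value, comment=""):
    """Post one piece of evidence (a numerical value plus optional text
    feedback) for a student's competency to a hypothetical REST endpoint."""
    payload = {
        "student": student_id,
        "competency": competency,  # name within the course's competency framework
        "value": value,            # e.g. a teacher's numerical assessment
        "comment": comment,        # written feedback, shown behind the text icon
        "source": "teacher",
    }
    r = requests.post(f"{base_url}/evidence", json=payload,
                      headers={"Authorization": f"Bearer {token}"},
                      timeout=10)
    r.raise_for_status()
    return r.json()
```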
III. USE OF THE NEXT-TELL IOLM

Potential benefits of (I)OLMs have been previously described [3]. We here consider the extent of use of the IOLM as an indicator of perceived benefit for formative assessment.

Figure 1. Examples of Next-TELL IOLM visualisations
Figure 2. Data sources and learner model weightings

Participants were 11 volunteers studying Computer Systems Engineering, from two courses: Personalisation and Adaptive Systems; and Adaptive Learning Environments. The learner models represented competencies and topics from the relevant course, but data from both groups is combined here. The IOLM was introduced in labs, but mostly used in students’ own time over 8 weeks. Use was encouraged as a means of obtaining formative feedback, but no summative assessment was based on it. The discussion component became available late in the 8-week period. User actions in the system are automatically logged. Questionnaires were administered at the end of the courses, using a 5-point scale (strongly agree (5) – strongly disagree (1)); a ‘not applicable’ option was also available. A few questionnaire responses from two of the students were removed, where those students had rated visualisations but the interaction logs showed no use of them. The questionnaires only asked about 6 of the 8 visualisations, but the logs were reviewed for all eight.
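The cleaning step described above, discarding ratings of visualisations that the logs show were never opened, is simple to express in code. A minimal sketch follows, assuming ratings keyed by (student, visualisation) pairs; these data shapes are ours, not those of the actual system.

```python
def clean_ratings(ratings, viewed):
    """Drop questionnaire ratings of visualisations the student never
    opened, according to the interaction logs."""
    return {key: score for key, score in ratings.items() if key in viewed}

# (student, visualisation) -> 5-point rating
ratings = {("s01", "radar"): 4, ("s02", "radar"): 5}
viewed = {("s01", "radar")}   # logs: s02 never opened the radar plot
assert clean_ratings(ratings, viewed) == {("s01", "radar"): 4}
```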
________________________________________________
Instructor: Cognitive style is one of the two topics still in the ‘very weak’ position for the group as a whole, according to the table OLM view. Are there specific difficulties that you are encountering here?
Student 1: Cognitive style is how information is processed and stored in memory, for example there are two types of styles of learning, the is the deep and surface approach of learning. the surface approach is trying to memory and reproduce of material, often without having much knowledge about the subject and how the memorised materials fit together. But in deep approach is the opposite because a student would have a deep understand and a lot more knowledge about the subject and make sense of the materials instead of just memorising, and not knowing how the things you memorised work. That’s my understanding of cognative learning style.
Student 2: Definitely the deep approach of learning style is where the learner tries to link his learning to what he/she already knows or believes in. They try to use the evidence to support their learning and fit it in their schema.
Instructor: But is there some confusin between cognitive and learning style?
Student 2: Learning style is the method/ way you learn the information however cognitive style is the way you process the information in your brain. Am i ok with this as a basic understanding?
________________________________________________
Figure 3. Excerpts from the discussion facility in the Next-TELL IOLM

Looking at high-level learner actions within the IOLM (Table I), we see that all participants viewed their IOLM (any of the visualisations), inspected the modelling process, and viewed the evidence. The logs also revealed that 4 participants inspected the modelling process more than twice as often as any of the other learners. The majority of learners also viewed and participated in discussions. All participants completed at least one peer or self assessment, with most completing several. Table II shows that the most commonly completed part of an assessment was the numerical value. Around half also entered text describing strengths or possible improvements. Several instructor assessments by competency were also given (per student: M=16.7, SD=7.7, Min=6, Max=11). The questionnaire revealed general agreement with statements of usefulness, as in Table III. All stated that at least one of the visualisations was useful for learning (strongly agree/agree), with most claiming several to be helpful. Responses were mixed for the usefulness of assessments, but with few negative responses.
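The per-action figures reported in Tables I and II below (M, SD, Min, Max and the number of learners performing each action) can be derived directly from the automatic logs. A sketch follows, under the assumption (ours, not the paper’s) that the logs reduce to one count per participant per action:

```python
from statistics import mean, stdev

def action_stats(counts):
    """Descriptive statistics for one logged action, given the number of
    times each participant performed it (zeros included)."""
    return {
        "M": round(mean(counts), 1),
        "SD": round(stdev(counts), 1),
        "Min": min(counts),
        "Max": max(counts),
        "Learners": sum(1 for c in counts if c > 0),  # performed it at least once
    }
```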
TABLE I. HIGH-LEVEL LEARNER ACTIONS

                             No. Times Performed          No.
Learner Action               M (SD)         Min    Max    Learners
Viewed IOLM Visualisation    64.2 (50.9)    20     165    11
Viewed Modelling Process     5.8 (5.4)      1      18     11
Viewed Evidence              4.1 (4.1)      1      12     11
Viewed Discussion            26.1 (33.7)    0      119    10
Posted to Discussion         6.2 (7.1)      0      22     9

TABLE II. ASSESSMENT-RELATED LEARNER ACTIONS

                             No. Times Performed          No.
Learner Action               M (SD)         Min    Max    Learners
Self Assessment              5.8 (5.0)      0      16     10
Peer Assessment              3.6 (5.7)      0      19     8
Numerical Value              98.7 (72.1)    11     262    11
Indicated Strengths          5.6 (14.3)     0      48     5
Indicated Improvements       4.5 (10.7)     0      36     5

TABLE III. LEARNER PERCEPTIONS OF IOLM USEFULNESS

IOLM Use                     Useful   Neutral   Not Useful   N/A
Network                      4        4         2            1
Skill Meter                  11       0         0            0
Smiley                       6        2         1            2
Table                        8        2         1            0
Tree Map                     6        1         2            2
Word Cloud                   3        5         1            2
Self Assessment              5        4         0            2
Instructor Assessment        11       0         0            0
Viewing Peer Assessment      3        4         2            2
Giving Peer Assessment       5        3         1            2
Automated Assessment         7        3         1            0

Discussion: Over the 8 weeks, users accessed the visualisations (Fig 1) 64.2 times on average. The SD was high, indicating different use patterns: some may check frequently for updates, or switch between visualisations for different views on the information, while others check less frequently. This is in line with the aim of a flexible tool as a focus for formative assessment, to be used as suits the individual. Users viewed the modelling process (Fig 2) quite frequently, given that it is mainly a source of data for additional scrutiny and explanation of the learner model according to the modelling algorithm. It is hypothesised that students used this to compare their competencies derived from different activities. Whatever the reason, we recommend that developers include such explanation where users are at a level to comprehend it, as most of our users checked this multiple times. All evidence is listed on the same page, and the mean viewings suggest users may have checked this every two weeks. The fact that the discussion component (Fig 3) was introduced later may help explain the SD, or this may reflect individual differences in preference for discussion tools. Either way, this suggests the potential of this feature for those who benefit from it. All performed at least one self or peer assessment, with most doing both. Users were more inclined to give numerical values, but 5 of the 11 provided additional comment. We aim to more actively encourage use of this feature to facilitate formative assessment sooner in future courses.

Questionnaires revealed higher preferences for some visualisations, but all were considered useful by at least 3 users, which provisionally suggests that all visualisations can be retained. Instructor feedback appears to receive greater attention, perhaps in line with the lower completion of self and peer assessments, or the perceived accuracy of assessment. Nevertheless, giving peer feedback was considered more helpful for one’s own learning than receiving it, and this was without additional support to complete peer assessments. Most considered the automated (OLMlets) data helpful: this may increase if additional applications contribute to the IOLM.

While we have yet to investigate the quality of self/peer assessments, the data so far suggests the potential for IOLMs to combine formative assessment and learning analytics for immediate feedback (see [1]), with an additional focus on current competencies, while also allowing the inclusion of more traditional teacher feedback, as well as self and peer assessments that can themselves be useful formative assessment processes.

ACKNOWLEDGEMENTS

The Next-TELL project is supported by the European Community (EC) under the Information Society Technology priority of the 7th Framework Programme for R&D, contract no 258114 NEXT-TELL. This document does not represent the opinion of the EC and the EC is not responsible for any use that might be made of its content. The third author held a Walter C. Sumner Fellowship and a Weston Fellowship.

REFERENCES

[1] N.R. Aljohani and H.C. Davis, “Learning Analytics and Formative Assessment to Provide Immediate Detailed Feedback Using a Student Centered Mobile Dashboard”, International Conference on Next Generation Mobile Apps, Services and Technologies, IEEE, 2013.
[2] S. Bull and M. Britland, “Group Interaction Prompted by a Simple Assessed Open Learner Model that can be Optionally Released to Peers”, Proceedings of PING Workshop, User Modeling 2007.
[3] S. Bull and J. Kay, “Open Learner Models”, in R. Nkambou, J. Bourdeau and R. Mizoguchi (eds), Advances in Intelligent Tutoring Systems, Springer, 318-338, 2010.
[4] M. Ebner, M. Schoen and B. Neuhold, “Learning Analytics in Basic Math Education – First Results from the Field”, eLearning Papers no. 36, http://openeducationeuropa.eu, 2014.
[5] R. Morales, N. Van Labeke, P. Brna and M.E. Chan, “Open Learner Modelling as the Keystone of the Next Generation of Adaptive Learning Environments”, in C. Mourlas and P. Germanakos (eds), Intelligent User Interfaces, ISR, 288-312, London: IGI Global, 2009.
[6] E.M. Raybourn and D. Regan, “Exploring e-portfolios and Independent Open Learner Models: Toward Army Learning Concept 2015”, Interservice/Industry Training, Simulation, and Education Conference Proceedings, Florida USA, 2011.
[7] P. Reimann, S. Bull, W. Halb and M. Johnson, “Design of a Computer-Assisted Assessment System for Classroom Formative Assessment”, CAF11, IEEE, 2011.
[8] K. Verbert, E. Duval, J. Klerkx, S. Govaerts and J.L. Santos, “Learning Analytics Dashboard Applications”, American Behavioral Scientist, DOI: 10.1177/0002764213479363, 2013.
[9] S.K. Vonderwell and M. Boboc, “Promoting Formative Assessment in Online Teaching and Learning”, TechTrends, vol. 57, no. 4, 2013.
[10] B.P. Woolf, “Building Intelligent Interactive Tutors”, Elsevier Inc., Burlington MA, 2009.
