
DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS
STOCKHOLM, SWEDEN 2021

An investigation on the impact of colour coding on student retention in online lectures

KRISTIN MICKOLS
MAX WIPPICH

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE

DA150x Degree project, first cycle (15 hp)


Date: June 23, 2021
Supervisor: Richard Glassey
Examiner: Pawel Herman
EECS/KTH
Swedish title: En undersökning av hur färgkodning påverkar
studenters retention under digitala föreläsningar

Abstract
Online education has become more prominent in recent years and is therefore of interest to study and improve. Colour in presentations has been shown to increase attention and retention among students, and colour coding uses colour to highlight important key words and topics in online presentations. As part of this study, experiments were conducted with 16 computer science students. Some were shown colour coded presentation images and some were shown the same images without colour, and all were then asked to answer questions about the content. An eye tracker was also used during the experiments to gather quantitative data, which was analysed with respect to eye movements and gaze fixations. The main conclusions drawn from this study are that colour coding is beneficial as long as it is used sparingly, and that too many different colours may affect students negatively, which is in line with previous research.

Sammanfattning
Online education has become increasingly prominent in recent years, and it is therefore of interest to investigate ways of improving it. The use of colour in lectures has been shown to increase attention and retention of information among students. Colour coding is one way of using colour to mark important key concepts and topics in online lectures. As part of the study, experiments were conducted with 16 computer science students. Some of these were shown lecture slides in black and white while others were shown slides with colour coding, and they were then asked to answer questions about the material. To contribute quantitative data, an eye tracker was also used during the experiment, whose data was analysed with respect to eye movements and fixation points. The main conclusions are that colour coding is beneficial provided it is used sparingly, and that too many different colours can affect students negatively, which is consistent with previous research.
Contents

1 Introduction
  1.1 Research Question
  1.2 Scope

2 Background
  2.1 Online Education, Attention and Retention
  2.2 Colour Coding and Readability of Digital Content
  2.3 Eye Tracking
  2.4 Previous Research

3 Method
  3.1 Experiment Overview
  3.2 Process
  3.3 Materials
  3.4 Limitations

4 Results
  4.1 Survey Results
  4.2 Heat Maps
  4.3 Eye Tracker Data

5 Discussion
  5.1 Discussion of Results
  5.2 Implications for theory and practice
  5.3 Limitations
  5.4 Future Work

6 Conclusions

Bibliography

A Presentations
  A.1 Group A
  A.2 Group B

B Questions & Survey Answers
  B.1 Questions about lecture slides
  B.2 Meta questions

C Heat Maps
  C.1 Heat maps, coloured images
  C.2 Heat maps, black and white images

D Python Code
  D.1 Code to collect data
  D.2 Code to analyse data
Chapter 1

Introduction

Online education, where students and teachers meet through various forms of communication over the internet, has become progressively more prominent in schools and universities. The rise of online education may be a result of several factors, such as increasing digitalisation and the accessibility of these services. As of today, it is common for universities to use online learning management platforms where students can hand in assignments, as well as see their schedule and be alerted of assignments and deadlines [1]. Online services such as YouTube videos with an educational purpose and websites that offer interactive education have become more common [2], letting students learn about new topics from home or further their understanding of concepts discussed in class.

During 2020, the covid-19 pandemic forced a majority of schools [3] to find online alternatives to their education, which has been done primarily in two ways: live online lectures and pre-recorded lectures. It is common for lecturers to record their live online lectures and make the recordings available for students to watch in retrospect [4]. This has made it possible for students to further study concepts and walk through a lecture at their own pace [5].
To ensure that the quality of online education matches that of on-site education, some adaptations have to be made. Research has shown that online lectures yield lower attention and retention among students compared to traditional lectures [6]. It is therefore important to study how the content of online education can be improved to increase students' ability to retain information. According to previous research [7], a possible improvement to the layout of lectures could be colour coding, which enables lecturers to highlight certain key points within the lecture material, with the expectation of improving students' retention. This is what this study aims to investigate: how such changes, especially colour coding, affect students' retention of lecture material. To investigate the impact of colour coding, an eye tracker will be used to gather more quantitative data and give the opportunity to analyse the students' gaze.

1.1 Research Question


How does colour coding key points in lecture presentations during online
classes impact student retention of lecture content?

1.2 Scope
This research investigates whether colour coding has an impact on students' retention during online lectures where lecture slides are used. The research focuses solely on lectures within the subjects of programming and computer science and does not investigate results in other subjects. Furthermore, there will be no analysis of which colours have the most impact on retention, although the colours picked are based on previous research.
Chapter 2

Background

In this chapter, background regarding online education and its effect on retention compared to physical education will be presented in section 2.1, Online Education, Attention and Retention. The next two sections, 2.2, Colour Coding and Readability of Digital Content, and 2.3, Eye Tracking, discuss the effects and uses of colour coding and the analysis of eye tracking data within educational studies. Finally, section 2.4, Previous Research, will present some previous research within this field of study.

2.1 Online Education, Attention and Retention
Online education can be defined as either fully online education or blended online education. Fully online learning is distance education where all course material, resources and examination take place online, whereas blended learning is a combination of face-to-face learning and digital resources [8]. Programmes that use learning management platforms alongside physical lectures and classes can be considered blended [1]. One model within fully online education is the massive open online course, MOOC, whose main advantage is its accessibility, as such courses are available to more students than courses offered on site. MOOCs are offered within many different topics and by well-respected institutions, including Harvard and the Massachusetts Institute of Technology [9].

One of the main advantages of online education is its flexibility; students can plan their studies in a way that fits their schedule, geographic location is not a concern and students can study at their own pace [10]. Although interest and enrollment in online courses have increased over the years, students are likely to drop their courses due to procrastination and lack of motivation, resulting in low retention rates [11]. According to Onah et al. [9], massive open online courses have a completion rate as low as 13%.
During physical lectures, lecturers often have the possibility to be more interactive with their learning material. They can highlight certain topics or words of importance by drawing on the chalkboard or by placing more emphasis on a topic while talking about it. During online education this interaction is lost if lecturers share their educational content through video or online meetings. Additionally, lecturers have less interaction with their students, as they often cannot directly see how students react or what they are doing during the lecture.

Previous research shows that it is easier to lose attention while attending an online lecture than a physical one, as the environment in which the student studies is often more prone to distractions [6]. Other research also shows that student engagement is lower during online classes [12]. In these studies a traditional lecture format, with a lecturer presenting with notes, was used. This suggests that teachers need to format online classes with special care in order to keep students' focus on the material.
Although the development and usage of online education was already ongoing, online education has become prominent due to the covid-19 pandemic drastically changing the circumstances surrounding education, forcing rapid adjustments and adaptations as education had to move online [13].

2.2 Colour Coding and Readability of Digital Content
The issues of lower attention and engagement during online education may be tackled using different techniques, one of which could be a wider use of colour when creating lecture content [14]. Colour coding has been shown to increase memory performance in previous research [15]. During online education it is possible to use the wide range of colours available to a computer when presenting and creating educational content. This opens up new opportunities compared to using a white- or blackboard in a classroom, but it might be difficult to use these colours correctly in order to enhance the learning experience. Two concepts that make use of colour in order to improve visual material are text highlighting and colour coding. In this study, text highlighting will refer to the use of colour to highlight a certain word in a text by making it stand out in contrast to the other words. This will be limited to the font itself being of a different colour, omitting the case where the background of a word is coloured. Colour coding in this study will refer to highlighting two or more words of similar meaning in the same colour, to emphasise their connection and help the reader find the other word when reading the first.
When creating digital content, a few guidelines, supported by a study conducted by Deubel [16] in 2003, can be followed to increase the readability of the material while avoiding cognitive overload. Deubel shows that the contrast between foreground text and the background should be high, and can be even higher for highlighted text. Deubel also shows that background colours should be neutral, such as pastel or grey colours, to improve readability. Additionally, some colour combinations are advised against, such as blue & orange, red & green and violet & yellow, and highlights should not be used too often, so as not to confuse the reader. Colour coding should also be used with caution, since it may cause the reader to associate a certain concept with one colour, which may cause confusion if this colour is encountered again when used for a different concept [17].

2.3 Eye Tracking


An eye tracker is a device for tracking and recording eye movements. The most common technique used for eye tracking today is Pupil Center Corneal Reflection, PCCR [18]. PCCR uses near-infrared light projected into the pupil, which causes reflections in the cornea that a camera picks up and tracks. Eye tracking can be used for various purposes, including psychology research, usability analysis and electrical engineering [18]. It can also be used by video game players to improve their performance while playing [19]. Eye trackers are either remote (screen based) or mobile (head-mounted). For remote eye trackers, the participant sits in front of a screen. Mobile eye trackers are mounted on a pair of glasses, recording eye movement from a close range and allowing the participant to walk freely. A disadvantage of head-mounted eye trackers is that they can move during the recording, giving misleading results [18].

The data extracted from eye trackers can be in the form of heat maps, representing the distribution of visual attention [20]. This method can be used to analyse how users view coloured and non-coloured text, and whether there is a difference between the two. Data can also be extracted as scan paths, directed paths formed between gaze fixations on areas of the screen [21].
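As a rough illustration of how such a heat map can be computed, the sketch below bins raw gaze samples over a 1920x1080 screen into a grid and renders the counts with matplotlib. The synthetic gaze data, grid resolution and colour map are illustrative assumptions, not necessarily the approach taken by any particular eye tracking tool.

    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic gaze samples (x, y) in screen pixels; a real eye tracker
    # would stream these at some sampling rate during a recording.
    rng = np.random.default_rng(0)
    gaze = rng.normal(loc=(960.0, 540.0), scale=(300.0, 150.0), size=(5000, 2))

    # Bin the samples into a coarse grid: each cell counts how many samples
    # (i.e. how much viewing time) fell into that region of the screen.
    counts, _, _ = np.histogram2d(
        gaze[:, 0], gaze[:, 1], bins=(192, 108), range=[[0, 1920], [0, 1080]]
    )

    # histogram2d puts x-bins along rows, so transpose before drawing; the
    # extent places the origin at the top-left, matching screen coordinates.
    plt.imshow(counts.T, extent=(0, 1920, 1080, 0), cmap="hot")
    plt.colorbar(label="gaze samples per cell")
    plt.title("Gaze heat map (synthetic data)")
    plt.savefig("heatmap.png", dpi=150)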

2.4 Previous Research


According to a study conducted by Guo et al. in 2014 [22], colour in general has a positive impact on retention and increases the chances of retaining learnt information. Using colours in lecture notes and presentations could therefore be a valuable resource for teachers to overcome the obstacles of online education.

Ozcelik et al. conducted a study in 2009 [7], where 52 students were divided into two groups and each shown a multimedia image with different labels and an explanatory text. One group was shown the image in black and white, while the other group was shown the same image with colour coded key words that were coloured the same in the image and the text. These students were also equipped with an eye tracker to show how fast they found the connection between the terms in the text and the corresponding piece of the image. The study concluded that colour coding in particular has a positive impact on student retention. It also showed that students who were presented colour coded text in conjunction with images had an easier time finding relevant information and spent a longer time looking at colour coded words, which suggests a deeper processing of these terms.
Another similar study was conducted by Olurinola & Tayo in 2015 [23]. In this study 30 participants were divided into three groups. Each group was shown a set of words either in congruent colours, in black and white or in incongruent colours. Congruent colours mean that the colour is connected to the word, while incongruent colours mean the opposite; for example, the word 'yellow' coloured yellow is coloured congruently, while the word 'danger' coloured blue would be coloured incongruently. After the participants were shown these words they were asked to recall them. The study concluded that participants shown the words in congruent colours showed higher retention of these words, which led to the conclusion that colours have the potential to improve understanding, retention and recollection of information.
In previous research, eye tracking has been used to investigate retention of digital learning content [21], with multiple studies focusing on programming and code readability. Fan [24] used eye tracking data to investigate program comprehension, concluding that commented code enabled programmers to process larger chunks of code and that eye tracking data could be valuable for learning and teaching computer programming languages. Studies on retention using eye tracking technology have also been conducted within other fields of education. Koć-Januchta et al. [25] compared the effect of cognitive style on learning with texts and pictures, concluding that image learners could more easily distinguish relevant topics from irrelevant ones. Molina et al. [26] conducted a study researching how the design of multimedia materials can influence learning efficiency within primary education. In both of these studies, eye tracking was used as an objective data gathering technique.

Lai et al. [27] conducted a large study on how eye-tracking technology has been applied to studies of learning, analysing 81 papers within the field and concluding that eye tracking is valuable for educational researchers in connecting cognitive processes and learning outcomes.
Chapter 3

Method

In this chapter the method of the study will be described. The first section, 3.1 Experiment Overview, will give a general overview of the experiments, how they were conducted and the justification for these methods. Section 3.2, Process, describes the process of the experiment in greater detail and expands on certain aspects of it. Section 3.3, Materials, describes the materials, hardware and software used to conduct the experiment and to analyse the results. Finally, section 3.4, Limitations, will focus on anticipating problems and errors that might affect the results.

3.1 Experiment Overview


To investigate the possibilities of colour coding within online lectures, a study was conducted in which a group of 16 participants performed an experiment where they were shown presentations in which some parts were colour coded and some were not. This methodology was chosen to be similar to that of Ozcelik et al. [7] and that of Olurinola & Tayo [23]. The 16 participants chosen for the study were primarily students in the first or second year of the Computer Science programme at the Royal Institute of Technology in Sweden (KTH). Some students from the third year of the same programme were also chosen. First and second year students were preferred because their previous knowledge of the presented areas was more likely to be limited compared to students in later years. The participants were divided into two groups, A and B, and were shown a presentation with images and text explaining topics related to computer science, similar to a lecture format. The format was kept similar to a regular lecture presentation to recreate a realistic environment resembling an online computer science class, as well as to keep it within the scope of the study. The topics discussed in the presentation were data structures and functions in programming. Both groups were shown some sections of the presentation with colour coding and some without, such that one group was shown a given section with colour coding while the other was shown the same section without, and vice versa. This ensures that the data collected shows the difference in retention between the two types of presentation, independent of the content of the material, similarly to the two studies conducted by Ozcelik et al. [7] and Olurinola & Tayo [23]. Additionally, since the test group is fairly small compared to the 52 students in the study conducted by Ozcelik et al. [7], showing all participants presentation images both with and without colour coding may reduce the impact on the result of one test group performing better on average due to previous knowledge.

The presentation shown to participants had four sections: the first two described two different, existing data structures, with illustrations and descriptions of their uses. The last two sections contained two collections of functions with similar functionality but nonsensical names; the first such collection also had an illustration to exemplify the functions. The first type of section is motivated by being similar to what an actual lecture could contain, which makes it more realistic. However, as these sections build upon previous knowledge, and some answers to questions can be deduced by intuitive logic, they have the weakness of providing less exact results, since participants may have had a different understanding of the concepts prior to the study. The other type overcomes these weaknesses by neither building on previous concepts nor being logical, preventing previous experience from affecting the outcome. Yet, as a consequence, these sections are slightly less realistic.
Colours in these presentations were used according to the results presented by Deubel in 2003 [16], most importantly with a high contrast to the background colour, to ensure that the effect of colour coding would not be impacted by a poor choice of colour. After watching each presentation, the participants filled out a form with questions about the topics discussed in that presentation. These questions and answers were later corrected and used as a quantitative indication of the participants' retention of the material. In contrast to the study conducted by Ozcelik et al. [7], where participants were asked if they remembered a certain concept, this study required full answers that were corrected. This was to ensure that participants would not incorrectly think that they remembered a concept, and therefore grants more accurate results. It is more in line with the study conducted by Olurinola & Tayo [23], where participants were asked to recall a list of words. After the main section of the study the participants were asked a few more questions about the presentations and questions, such as which sections they found more difficult and whether the colour in the presentations helped them understand the content. The answers to these questions were compiled to see if participants found particular sections harder or easier depending on whether they were colour coded, and if participants experienced colour coded parts as easier. This was to see if the participants, regardless of how well they performed in retaining information, experienced that the colour helped them understand the material. Even if the test does not give any conclusive evidence that students' retention increases due to colour coding, the experience that colour helps may still be of importance.
During the test the participants were equipped with an eye tracker that collected data on the movement and position of participants' gaze, as was done in the study conducted by Ozcelik et al. [7]. A script continuously collected the position of the participants' gaze during each of the presentation images. In addition to the answers given to the questions, this data could be analysed to show differences in how participants viewed the material depending on whether they were shown it with or without colour coding. The data could later be used to generate heat maps and for further analysis. Heat maps are used in this study as they are a simple way of visualising the large amount of data an eye tracker produces. In addition to heat maps, the average speed of participants' gaze is analysed. Ozcelik et al. chose to analyse when participants' gaze landed on certain areas or words [7] using a scan path. However, due to software limitations, this research instead focuses on generating heat maps and analysing the data using other techniques.

3.2 Process
At the start of each experiment participants were told in a general manner what the study was about and what was expected of them. They were told to watch the presentation as if it were a regular lecture and that they would be asked to answer questions on each part of the lecture. However, participants were not told in advance that the study investigated the impact of colour coding in lectures, so as not to introduce bias. Instead participants were only told that the study investigated online lectures using eye tracking technology. The participants were then informed about the eye tracker, which was calibrated to their eyes. After this the participants were shown all lecture parts, with each section followed by the corresponding questions. Finally the concluding questions were presented, after which the participant was told about the purpose of the study.
Participants in group A were presented with sections 1 and 3 colour coded, while sections 2 and 4 were without colour. Similarly, participants in group B were presented sections 2 and 4 colour coded and sections 1 and 3 without colour. Each image in the presentation was shown for a length of time proportional to the amount of text and content on that slide, so that every participant had the same time to learn and retain the information on each image. To enforce this limitation the presentation was pre-recorded. The presentation images, colour coded and without colour, can be found in Appendix A.
The questions were formulated to test the participants' retention of both important key words and their understanding of the lecture content in general. Both groups A and B were given the same questions, but their answers were divided and analysed separately to compare results from colour coded and non colour coded images. Participants were given approximately 10 questions per section and had three minutes to fill out each form. All of these questions can be found in Appendix B. The answer to each question was given a score between 0 and 1, where 1 is correct and 0 is incorrect; some answers were given a score of 0.5, meaning they were partially correct. The only exception to this was the last question of the third section of the test, where instead each part of the question was graded 1 or 0 and the final score of the question was the average of these parts. All the questions, answers and their grading can be found in Appendix B. The grading was done in one sitting and the same questions were always graded by the same person, so as to make the grading equal for all answers.
After all experiments were performed, the data gathered by the eye tracker for each test group and image was compiled into one heat map per image, showing where on average each group's participants' focus was directed. Additionally, the average gaze distance per time unit, i.e. the speed of the point where the user's gaze was directed, was calculated for the coloured and the black and white images, respectively. This speed was then normalised depending on the number of total data points within the coloured and black and white samples, so as not to give a skewed result because one of the samples had more data. The tests were performed on a screen with resolution 1920×1080, meaning that a participant moving their gaze across the screen horizontally would traverse a distance of 1920 pixels. The eye tracker can produce a different number of data points for different tests and different participants, as its reading frequency is not constant and depends on external factors. The formula used to get the normalised average is T × D/A, where T is the total gaze distance for colour and black and white respectively, D is the number of data points in the colour or black and white sample, and A is the average number of data points.
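To make this concrete, the following is a minimal sketch that computes the total gaze distance T for two samples and applies the normalisation formula stated above. The synthetic gaze samples and the way the average number of data points A is obtained are illustrative assumptions; this is not the authors' analysis script from Appendix D.

    import numpy as np

    def total_gaze_distance(points: np.ndarray) -> float:
        """Sum of Euclidean distances between consecutive (x, y) gaze samples."""
        return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

    # Hypothetical gaze samples (pixels) for the colour and black and white groups.
    rng = np.random.default_rng(1)
    colour_pts = rng.uniform((0, 0), (1920, 1080), size=(2000, 2))
    bw_pts = rng.uniform((0, 0), (1920, 1080), size=(1700, 2))

    T_colour, T_bw = total_gaze_distance(colour_pts), total_gaze_distance(bw_pts)
    D_colour, D_bw = len(colour_pts), len(bw_pts)
    A = (D_colour + D_bw) / 2  # assumed: average data points across the two samples

    # Normalised averages per the formula in the text: T * D / A.
    print(T_colour * D_colour / A)
    print(T_bw * D_bw / A)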

3.3 Materials
The experiment used an eye tracker of the model "Tobii Eye Tracker 4C", a remote eye tracker using PCCR that is placed on the computer screen. It was used to track the eye movements of participants and collect the data, which could be compiled into heat maps and analysed in various ways after the experiment. The eye tracker was attached to a laptop which the participants used for watching the presentations as well as for reading and filling in the questions. Google Forms was used to present, answer and compile the answers to the questionnaires.

A simple script was written in Python to allow the eye tracker to collect individual data for each separate image in the presentation, and another script was written to analyse the data once all tests were concluded. For the extraction of data from the eye tracker, a program written by GitHub user 'commanderking' was used by our script. This program, EyeTrackerWidget, can be found at github.com/commanderking/EyeTrackerWidget. Packages outside the Python standard library that were used were 'pynput', 'numpy' and 'pyplot', which is part of the 'matplotlib' library; the standard library module 'argparse' was also used. 'pynput' was used for reading input, which helped pause the main script until a certain button was pressed. 'numpy' and 'pyplot' were used to create the heat maps from the raw eye tracking data. Finally, 'argparse' was used to simplify the code for passing command-line arguments to one script. Both scripts can be found in Appendix D.
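As a minimal sketch of the collection side only, the following pauses until a keypress using pynput and then logs timestamped gaze points for one image. The get_gaze_point() function is a hypothetical stand-in for the EyeTrackerWidget interface, and the key binding, sampling interval and CSV format are assumptions; the actual scripts are in Appendix D.

    import csv
    import time
    from pynput import keyboard

    def get_gaze_point() -> tuple[float, float]:
        """Hypothetical stand-in for the EyeTrackerWidget gaze read; the real
        script reads (x, y) screen coordinates from the tracker. A fixed point
        is returned here so the sketch runs without hardware."""
        return (960.0, 540.0)

    def wait_for_space() -> None:
        """Block until the space key is pressed (advances to the next image)."""
        with keyboard.Events() as events:
            for event in events:
                if isinstance(event, keyboard.Events.Press) and event.key == keyboard.Key.space:
                    return

    def record_image(image_idx: int, duration_s: float) -> None:
        """Collect gaze samples for one presentation image and save them to CSV."""
        with open(f"gaze_image_{image_idx}.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["t", "x", "y"])
            end = time.monotonic() + duration_s
            while time.monotonic() < end:
                x, y = get_gaze_point()
                writer.writerow([time.monotonic(), x, y])
                time.sleep(0.01)  # assumed polling interval; a real tracker sets its own rate

    print("Press space to start recording image 1...")
    wait_for_space()
    record_image(1, duration_s=30.0)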

3.4 Limitations
Limitations of the study that were identified prior to the experiments include inaccuracy of the eye tracker, the number of participants and varying difficulty of the different presentation sections. The eye tracker was tested prior to the study and minor inaccuracies were noticed, especially some time after calibration. This could affect the heat maps generated after the experiments and may make them slightly skewed and less accurate. However, these inaccuracies should be consistent and similar for every user, which leaves the gaze distance and speed largely unaffected. The relatively small number of participants compared to previous studies may also skew the result, as participants with better previous knowledge and understanding of the presented concepts may have a large impact on the results. For this reason the participants were selected mainly from the first and second years of the computer science programme. Additionally, after each of the first two sections, participants were asked if they knew about the presented data structure previously. A final limitation of the study may be the varying difficulty of the different presentation sections, as this is difficult to test prior to the experiments. Varying difficulty may mask the impact of colour coding, since colour coding might have little to no effect if the participant cannot understand the content of a presentation; this could result in low retention for both test groups and therefore little difference between the groups. A pre-study was performed to minimise this, and feedback from that participant was used to calibrate viewing times for some images and make some changes to the content of the study.
Chapter 4

Results

In this chapter the results of the survey and the analysed data from the eye tracker will be presented. The first section, 4.1 Survey Results, presents the results gathered from the questionnaires given during the experiments. The second section, 4.2 Heat Maps, presents selected heat maps generated from the eye-tracking data. The third and final section, 4.3 Eye Tracker Data, presents the results of analysing the eye tracking data.

4.1 Survey Results


Charts 1-4 present the average amount of correct answers for each question belonging to sections 1 "Rope", 2 "Bloom Filter", 3 "Dictionary Library" and 4 "Dottidot Library" respectively. Chart 5 presents the average percentage of correct answers for each section of the questionnaire. Each chart is divided by whether the participants were shown the presentation images in colour or in black and white. All the questions, answers and how they were corrected can be found in Appendix B.

In section one, "Rope", participants of group A were shown the images in colour. These participants scored on average 25% correct, while the participants of group B scored on average 42.86% correct. The participants of group B therefore scored on average 71.44% higher than the participants of group A (42.86/25 ≈ 1.7144). One participant in group B stated that they knew about this data structure prior to the survey; however, this participant's score was below average. Their answers were kept when calculating the average correctness.


Chart 1: Rope. Average correctness per question (1-7), colour coded (Group A) versus black and white (Group B). [bar chart]

In section two, "Bloom Filter", participants of group B were shown the


images in colour. These participants scored on average 80.47% correct, while
the participants of group A scored on average 50% correct. The participants
of group B therefore scored 60.94% higher than the participants of group A,
who were shown the black and white images. Two participants in group B
stated that they knew about this data structure prior to the survey, however,
these participants’ score was below average. Their answers were kept while
calculating the average correctness.

Chart 2: Bloom filter. Average correctness per question (1-8), colour coded (Group B) versus black and white (Group A). [bar chart]

In section three, "Dictionary Library", participants of group A were shown


the image in colour. These participants scored on average 67.4% correct, while
the participants of group B scored on average 72.13% correct. The participants
of group B therefore scored on average 7.02% higher than those in group A.

Chart 3: Dictionary library. Average correctness per question (1-10), colour coded (Group A) versus black and white (Group B). [bar chart]

In section four "Dottidot Library", participants of group B were shown the


image in colour. These participants scored on average 80% correct, while the
participants of group A scored on average 72% correct. The participants of
group B therefore scored on average 11.1% higher than those in group A.

Chart 4: Dottidot library. Average correctness per question (1-10), colour coded (Group B) versus black and white (Group A). [bar chart]

Chart 5: Total average correctness for each section (1-4), colour coded versus black and white. [bar chart]

Charts 6.1 & 6.2 present whether the participants felt that the colours helped or not, for groups A and B respectively. In group A 62.5% found the colours helpful, and in group B the corresponding number was 87.5%. 37.5% of participants in group A did not believe the colour made any difference; however, no one thought the colour was negative. In group B no one said that the colour made no difference, yet one participant, equal to 12.5% of participants, stated that the colour had a negative impact on their memory of the images. In total 75% of participants found the colours helpful, 18.75% did not think they made any difference and 6.25% stated that they had a negative impact, as presented in chart 6.3.

Chart 6.1: Group A. [pie chart: 62.5% "Yes it helped", 0% "No it was worse", 37.5% "It did not make any difference"]

Chart 6.2: Group B. [pie chart: 87.5% "Yes it helped", 12.5% "No it was worse", 0% "It did not make any difference"]

Chart 6.3: Group A & B. [pie chart: 75% "Yes it helped", 6.25% "No it was worse", 18.75% "It did not make any difference"]

Chart 7.2 presents the participants' responses to whether they found any section more difficult. Generally, participants found the sections that they were shown in black and white more difficult. Participants that were shown the images of section 2, "Bloom Filter", in colour were 75% less likely to respond that they found the section more difficult. Similarly for section 3, "Dictionary Library", participants that were shown this image in colour were 37.5% less likely to respond that this section was more difficult. The exceptions are the other two sections, where equally many participants from each group responded that they were more difficult.

Chart 7.2: Did you find any section more difficult to remember? [bar chart: number of responses per section (Rope, Bloom, Dictionary, Dottidot), colour coded versus black and white]

4.2 Heat Maps


Figures 4.1-4.6 show heat maps of where participants' gaze was directed during the tests, overlaid on the image they were looking at, for three selected images from the different sections. Figures 4.1, 4.3 and 4.5 show the results for the participants that were shown black and white images, while figures 4.2, 4.4 and 4.6 show the corresponding results for the participants that were shown coloured images. The red spots indicate where participants' gaze was directed during the entire time the image was shown. Darker and redder dots signify that more participants held their gaze at that point for a longer time. Heat maps for every image shown to the participants can be found in Appendix C.

Figure 4.1: The heat map over image 9 for participants that were shown this
image in black and white (Group B)

Figure 4.2: The heat map over image 9 for participants that were shown this
image in colour (Group A)

Figure 4.3: The heat map over image 15 for participants that were shown this
image in black and white (Group A)

Figure 4.4: The heat map over image 15 for participants that were shown this
image in colour (Group B)

Figure 4.5: The heat map over image 17 for participants that were shown this
image in black and white (Group B)

Figure 4.6: The heat map over image 17 for participants that were shown this
image in colour (Group A)

4.3 Eye Tracker Data


After compiling and analysing the raw eye tracking data, the results in table 4.1 were obtained. These results show that participants moved their gaze on average 7.26% more when shown images in black and white than when shown images in colour (350.0/326.3 ≈ 1.0726).

Table 4.1: Results from the analysis of the eye tracking data

                                               Colour            Black and white
Total data points                              2570146           2149717
Total gaze distance covered                    2590132.4 pixels  2323417.2 pixels
Average gaze speed, normalised, all sections   326.3 pixels/s    350.0 pixels/s
Average gaze speed, normalised, section 1      301.2 pixels/s    342.9 pixels/s
Average gaze speed, normalised, section 2      376.8 pixels/s    396.8 pixels/s
Average gaze speed, normalised, section 3      366.8 pixels/s    324.2 pixels/s
Average gaze speed, normalised, section 4      231.0 pixels/s    198.8 pixels/s
Chapter 5

Discussion

This chapter will start by discussing the findings of the previous chapter. This is followed by a discussion of the implications this study has for previous theory. After this, some discovered limitations of this study will be discussed. Finally, some future work will be proposed.

5.1 Discussion of Results


In two of the four tests the participants who were shown the coloured images had on average a higher score than those who were shown the black and white counterparts, as can be seen in chart 5. These were section 2, "Bloom Filter", where the difference in average score between the groups was 60.94%, and section 4, "Dottidot library", where the difference in average score was 11.1%. This indicates that the participants' retention was facilitated by the colour in the images to some extent. What these images had in common was that the colouring of words was used conservatively in comparison to sections 1 and 3. This also indicates that colours might have a positive impact, but only up to a limit where they start becoming confusing for the reader, which negatively impacts retention. This is in line with previous research conducted by Ozcelik et al. [7] and Olurinola & Tayo [23], which indicates that colours positively impact retention, and with the research conducted by Deubel, which shows that too many colours lead to cognitive overload [16]. Additionally, section 2, which had the highest difference in average score between the two groups, had a large number of questions asking for specific key words in the presentation images. The large difference between the groups in this particular section might suggest that highlighting key words is especially helpful for remembering those key words, but less effective in facilitating the understanding of the general concepts in presentations. However, even if colours do not directly facilitate the learning of general concepts, they might indirectly benefit this learning in a longer perspective, as the student has a better memory of basic key words, which could make future learning more effective.
Despite the highlighted key words, the groups that were shown the presentation images for sections 1, "Rope", and 3, "Dictionary Library", in colour performed on average worse than those who were shown the same images without colour. In section 1 participants who were shown images in black and white scored on average 71.44% higher than those who were shown the same images with colour coding; similarly, in section 3 participants shown the black and white images scored on average 7.02% higher. The large difference in section 1 might be due to the fact that this section contained a lot of text and covered a quite complex concept. Additionally, this section used many colours, colour coded to make it easier to find connected parts in the images and the text, and kept consistent throughout the presentation; for example, the function "split" was coded in the same colour throughout many of the images. In total 9 colours were used across 11 images, and one image contained as many as 6 colours at once. This might instead have had an adverse effect on retention, as the multitude of colours causes cognitive overload and confuses the reader. This is further supported by one participant who was shown this section in colour and found it to be harder than the others. The participant stated in their comment on why they found it hard:

"Det var för mycket text och för mycket klotter på varje slide. Medans dem andra var mindre text att läsa men mycket mer förklarande på ett enkelt sätt." ["There was too much text and too much clutter on each slide. While the others had less text to read but were much more explanatory in a simple way."]

A similar problem might be the case for section 3, as participants were confused by the nonsensical nature of the information, and the colour in the illustration might have further increased their confusion. As can be seen in the heat maps in figures 4.5 & 4.6, participants who were shown the image in colour spent less time on reading the text and more time on the illustration, while participants who were shown the image in black and white did the opposite. The group that was shown this image in black and white might therefore have spent more time trying to remember as much as possible, instead of trying to figure out what the illustration meant. The group that was shown the colour coded image instead tried to understand the illustration, as the colour might have attracted their gaze more, which caused them to lose time. This is further supported by the fact that the average gaze speed for students in the group shown coloured images was higher than for the other group in this section, as seen in table 4.1, which might indicate that these students spent more time moving their gaze around rather than reading and taking in information. As the groups were only given 90 seconds to watch this image, this might have caused the group shown the coloured image to retain less information and so perform worse on the questions.
Another interesting finding of this study is that participants who were shown black and white images moved their gaze more on average. Especially in the first two tests, students who were shown coloured images moved their gaze less, suggesting that they spent less time looking for information and more time reading and trying to understand. These first two tests also resembled a regular lecture the most, with multiple slides and information on existing data structures. If students spend less time searching for the relevant information, they can use the time more efficiently to learn and retain the information. This can also be seen in the heat maps in figures 4.1-4.6, where the heat maps are noticeably denser for the groups that were shown coloured images, especially around coloured key words. This suggests that a teacher or lecturer who knows what they want students to focus on may favourably use colour to direct the students' attention there. Despite the difference in methodology, and the studies analysing different data generated by the eye tracker, this aligns with the results of the study conducted by Ozcelik et al. [7], who concluded that it is easier to find relevant information connected to images when it is shown in colour.
The majority of the participants, as presented in chart 6.3, stated that colour was helpful when remembering the content of the different presentation slides. However, the results did not fully reflect this, since a significantly improved result for colour coded slides was limited to the section "Bloom filter". One possibility is that the participants were under the illusion that it helped, as a result of a placebo effect. Regardless, this result is of importance, since if learning appears to be easier and less frustrating, it can be more motivating and encourage learning further about the topic, which improves the overall learning experience. Additionally, it could be of importance in online lectures that the content of presentations is presented in an easy-to-understand way that is appreciated by the student, as this might make students more motivated to watch online lectures. In contrast, online lectures that are harder to follow might cause students to lose their focus due to distractions that may be present wherever they are studying.

5.2 Implications for theory and practice


As stated by previous research, eye tracking provides objective data that complements measures of cognitive processes. Using eye trackers for research purposes is a relatively new concept, and they can be applied to more fields than analysing students' retention. The results are in line with the literature on the benefits of colour coding, as well as the advice that colour coding should be used with caution so as not to confuse the reader. Additionally, the results show that colour coding is effective also when used in presentations about concepts within computer science, when used sparingly and following the guidelines given in the background. Although the covid-19 pandemic forced a major increase in the usage of online education, it is highly likely that this will influence education in the future, resulting in blended learning becoming more common. Given this, in conjunction with the low completion rates of MOOCs, online education could benefit from continued development in line with the results of this study.

5.3 Limitations
Limitations of the study include the limited test group and the fact that participants may have greatly varying previous understanding of the concepts presented in the experiment. Because the test group was limited to a total of 16 students, making it unlikely that any statistically significant conclusions could be drawn, it was decided not to perform a statistical analysis of the results. This is in contrast to the two similar studies conducted by Olurinola & Tayo [23], who had 30 participants, and Ozcelik et al. [7], who had 52 participants. On average, test group B scored higher on every section of the questionnaire. This result may derive from factors on an individual level, such as previous knowledge about similar data structures or an overall better memory. However, it could also be due to the fact that group B was shown in colour those images that were favoured by the colour coding, and in black and white those that were negatively impacted by it, which could make the content of the images easier to understand and remember. This is therefore a possible source of error, and a larger test group would give a more conclusive result.

5.4 Future Work


This study was limited in several ways due to its scope. As part of future research this work could be expanded upon to account for factors that were disregarded. Some limitations of this study were that no statistical analysis was made, no psychological factors were taken into account and it was limited to topics within computer science. Future research could expand on the conclusions of this study by removing some of these limitations.

This study also gives some background to future research that could investigate how colour coding should be used most effectively and how colour coding may impact students' learning experience. This study indicates that colour has a slightly positive effect on retention among students; however, it does not give any answer as to exactly how colour coding should be used to maximise this positive effect. Investigating this could be the purpose of a future study, for which this study provides some background: colour coding has some positive effect, and there are clues as to which types of colour coding seem most effective, as its effectiveness differed between parts of the experiment. Additionally, this study indicates that students spend less time searching for information in colour coded slides, which could also positively affect retention; this was not investigated further, and a future piece of research could investigate this connection more. This study also contains some results that may indicate that colour coding improved students' learning experience, which could likewise be investigated more, for example in a study of how colour coding should best be used to provide a good learning experience.
Chapter 6

Conclusions

To conclude, the results of this study are slightly mixed; however, a few important conclusions can be drawn. Primarily, colours seem to have a positive impact on retention when used sparingly. Section 2, "Bloom Filter", illustrates this, as participants who were shown coloured images performed significantly better, which suggests better retention of the presentation's content. This is in line with previous research performed by Ozcelik et al. [7] and Olurinola & Tayo [23]. However, using too many different colours may instead have an adverse effect, as it confuses the reader and reduces retention, which is in line with research by Deubel [16]. Additionally, participants shown colour coded images moved their gaze less in the presentations that were most similar to a traditional lecture format, which suggests that more time was spent reading and learning than searching for information in the images. Even though this study does not show that colour positively affects the learning and understanding of concepts, better retention of highlighted key words may benefit learning in a longer perspective. This study also indicates that teachers and lecturers can use colours in moderation to draw students' gaze to certain points in their presentations, and that students perceive lectures that are colour coded as easier to understand, which may increase their focus.

Bibliography

[1] Aabha Chaubey and Bani Bhattacharaya. "Learning Management System in Higher Education". In: IJSTE - International Journal of Science Technology & Engineering 2.3 (Sept. 2015), pp. 158–162. ISSN (online): 2349-784X.

[2] Amy Antonio and David Tuffley. "YouTube a valuable education tool, not just cat videos". In: The Conversation (Jan. 2015).

[3] Kimkong Heng and Koemhong Sol. "Online learning during COVID-19: Key challenges and suggestions to enhance effectiveness". In: (Dec. 2020).

[4] Lokanath Mishra, Tushar Gupta, and Abha Shree. "Online teaching-learning in higher education during lockdown period of COVID-19 pandemic". In: International Journal of Educational Research Open 1 (2020), p. 100012. ISSN: 2666-3740. DOI: 10.1016/j.ijedro.2020.100012. URL: https://www.sciencedirect.com/science/article/pii/S2666374020300121.

[5] Suzanne Young, Helen Nichols, and Ashley Cartwright. "Does Lecture Format Matter? Exploring Student Preferences in Higher Education". In: Journal of Perspectives in Applied Academic Practice 8.1 (2020), pp. 30–40. DOI: 10.14297/jpaap.v8i1.406. URL: https://www.researchgate.net/publication/344062295_Does_Lecture_Format_Matter_Exploring_Student_Preferences_in_Higher_Education.

[6] R. Benjamin Hollis and Christopher A. Was. "Mind wandering, control failures, and social media distractions in online learning". In: Learning and Instruction 42 (2016), pp. 104–112. ISSN: 0959-4752. DOI: 10.1016/j.learninstruc.2016.01.007. URL: https://doi.org/10.1016/j.learninstruc.2016.01.007.

[7] Erol Ozcelik et al. "An eye-tracking study of how color coding affects multimedia learning". In: Computers & Education 53.2 (2009), pp. 445–453. ISSN: 0360-1315. DOI: 10.1016/j.compedu.2009.03.002. URL: https://www.sciencedirect.com/science/article/pii/S0360131509000712.

[8] Anthony Picciano and Jeff Seaman. "K-12 online learning: A survey of U.S. school district administrators". In: Online Learning 11 (Jan. 2007). DOI: 10.24059/olj.v11i3.1719.

[9] Daniel Onah, Jane Sinclair, and R. Boyatt. "Dropout Rates of Massive Open Online Courses: Behavioural Patterns". In: July 2014. DOI: 10.13140/RG.2.1.2402.0009.

[10] Indira Dhull and Sakshi Arora. "Online Learning". In: 3 (May 2019), pp. 32–34.

[11] Pauline Muljana and Tian Luo. "Factors Contributing to Student Retention in Online Learning and Recommended Strategies for Improvement: A Systematic Literature Review". In: Journal of Information Technology Education: Research 18 (Jan. 2019), pp. 19–57. DOI: 10.28945/4182.

[12] Scott A. Jensen. "In-Class Versus Online Video Lectures: Similar Learning Outcomes, but a Preference for In-Class". In: Teaching of Psychology 38.4 (2011), pp. 298–302. DOI: 10.1177/0098628311421336. URL: https://doi.org/10.1177/0098628311421336.

[13] Dimitrios Vlachopoulos. "COVID-19: Threat or opportunity for online education?" In: Higher Learning Research Communications 10.1 (2020), pp. 16–19. DOI: 10.18870/hlrc.v10i1.1179. URL: https://www.researchgate.net/publication/342127177_COVID-19_Threat_or_Opportunity_for_Online_Education.

[14] Sonja Folker, Helge Ritter, and Lorenz Sichelschmidt. "Processing and integrating multimodal material—the influence of color-coding". In: Proceedings of the Annual Meeting of the Cognitive Science Society. Vol. 27. 2005.

[15] Tanja Keller et al. "Information visualizations for knowledge acquisition: The impact of dimensionality and color coding". In: Computers in Human Behavior 22.1 (2006), pp. 43–65. ISSN: 0747-5632. DOI: 10.1016/j.chb.2005.01.006. URL: https://www.sciencedirect.com/science/article/pii/S0747563205000105.

[16] Patricia Deubel. "An investigation of behaviorist and cognitive approaches to instructional multimedia design". In: Journal of Educational Multimedia and Hypermedia 12.1 (2003), pp. 63–90.

[17] Rick T. Richardson, Tara L. Drexler, and Donna M. Delparte. "Color and contrast in E-Learning design: A review of the literature and recommendations for instructional designers and web developers". In: MERLOT Journal of Online Learning and Teaching 10.4 (2014), pp. 657–670.

[18] Pramodini Punde, Mukti Jadhav, and Ramesh Manza. "A study of eye tracking technology and its applications". In: Oct. 2017, pp. 86–90. DOI: 10.1109/ICISIM.2017.8122153.

[19] João Antunes and Pedro Santana. "A Study on the Use of Eye Tracking to Adapt Gameplay and Procedural Content Generation in First-Person Shooter Games". In: Multimodal Technologies and Interaction 2.2 (May 2018), p. 23. ISSN: 2414-4088. DOI: 10.3390/mti2020023. URL: http://dx.doi.org/10.3390/mti2020023.

[20] Oleg Špakov and Darius Miniotas. "Visualization of eye gaze data using heat maps". In: Elektronika ir Elektrotechnika (Medicine Technology) 115 (Jan. 2007). URL: https://www.researchgate.net/publication/228354465_Visualization_of_eye_gaze_data_using_heat_maps.

[21] Teresa Busjahn et al. "Eye Tracking in Computing Education". In: Proceedings of the Tenth Annual Conference on International Computing Education Research. ICER '14. Glasgow, Scotland, United Kingdom: Association for Computing Machinery, 2014, pp. 3–10. ISBN: 9781450327558. DOI: 10.1145/2632320.2632344. URL: https://doi.org/10.1145/2632320.2632344.

[22] Philip J. Guo, Juho Kim, and Rob Rubin. "How Video Production Affects Student Engagement: An Empirical Study of MOOC Videos". In: Proceedings of the First ACM Conference on Learning @ Scale. L@S '14. Atlanta, Georgia, USA: Association for Computing Machinery, 2014, pp. 41–50. ISBN: 9781450326698. DOI: 10.1145/2556325.2566239. URL: https://doi.org/10.1145/2556325.2566239.

[23] Oluwakemi Olurinola and Omoniyi Tayo. "Colour in Learning: Its Effect on the Retention Rate of Graduate Students". In: Journal of Education and Practice 6.14 (2015), pp. 1–5.

[24] Quyin Fan. "The Effects of Beacons, Comments, and Tasks on Program Comprehension Process in Software Maintenance". AAI3422807. PhD thesis. USA, 2010. ISBN: 9781124226545.

[25] Marta Koć-Januchta et al. "Visualizers versus verbalizers: Effects of cognitive style on learning with texts and pictures – An eye-tracking study". In: Computers in Human Behavior 68 (2017), pp. 170–179. ISSN: 0747-5632. DOI: 10.1016/j.chb.2016.11.028. URL: https://www.sciencedirect.com/science/article/pii/S0747563216307695.

[26] Ana Isabel Molina et al. "Evaluating multimedia learning materials in primary education using eye tracking". In: Computer Standards & Interfaces 59 (2018), pp. 45–60. ISSN: 0920-5489. DOI: 10.1016/j.csi.2018.02.004. URL: https://www.sciencedirect.com/science/article/pii/S0920548917303392.

[27] Meng-Lung Lai et al. "A review of using eye-tracking technology in exploring learning from 2000 to 2012". In: Educational Research Review 10 (2013), pp. 90–115. ISSN: 1747-938X. DOI: 10.1016/j.edurev.2013.10.001. URL: https://www.sciencedirect.com/science/article/pii/S1747938X13000316.
Appendix A

Presentations

Each group was shown every other section in colour and the other in black
and white. Images 1-11 are part of the first section, 12-16 the second, 17 is
the third section and 18 is the fourth section.

A.1 Group A
Group A were shown sections 1 & 3 in colour.

Fig. A.A.1 Image number 1 shown to group A.

34
APPENDIX A. PRESENTATIONS 35

Fig. A.A.2 Image number 2 shown to group A.

Fig. A.A.3 Image number 3 shown to group A.


36 APPENDIX A. PRESENTATIONS

Fig. A.A.4 Image number 4 shown to group A.

Fig. A.A.5 Image number 5 shown to group A.


APPENDIX A. PRESENTATIONS 37

Fig. A.A.6 Image number 6 shown to group A.

Fig. A.A.7 Image number 7 shown to group A.


38 APPENDIX A. PRESENTATIONS

Fig. A.A.8 Image number 8 shown to group A.

Fig. A.A.9 Image number 9 shown to group A.


APPENDIX A. PRESENTATIONS 39

Fig. A.A.10 Image number 10 shown to group A.

Fig. A.A.11 Image number 11 shown to group A.


40 APPENDIX A. PRESENTATIONS

Fig. A.A.12 Image number 12 shown to group A.

Fig. A.A.13 Image number 13 shown to group A.


APPENDIX A. PRESENTATIONS 41

Fig. A.A.14 Image number 14 shown to group A.

Fig. A.A.15 Image number 15 shown to group A.


42 APPENDIX A. PRESENTATIONS

Fig. A.A.16 Image number 16 shown to group A.

Fig. A.A.17 Image number 17 shown to group A.


APPENDIX A. PRESENTATIONS 43

Fig. A.A.18 Image number 18 shown to group A.

A.2 Group B
Group B were shown sections 2 & 4 in colour.

Fig. A.B.1 Image number 1 shown to group B.


44 APPENDIX A. PRESENTATIONS

Fig. A.B.2 Image number 2 shown to group B.

Fig. A.B.3 Image number 3 shown to group B.


APPENDIX A. PRESENTATIONS 45

Fig. A.B.4 Image number 4 shown to group B.

Fig. A.B.5 Image number 5 shown to group B.


46 APPENDIX A. PRESENTATIONS

Fig. A.B.6 Image number 6 shown to group B.

Fig. A.B.7 Image number 7 shown to group B.


APPENDIX A. PRESENTATIONS 47

Fig. A.B.8 Image number 8 shown to group B.

Fig. A.B.9 Image number 9 shown to group B.


48 APPENDIX A. PRESENTATIONS

Fig. A.B.10 Image number 10 shown to group B.

Fig. A.B.11 Image number 11 shown to group B.


APPENDIX A. PRESENTATIONS 49

Fig. A.B.12 Image number 12 shown to group B.

Fig. A.B.13 Image number 13 shown to group B.


50 APPENDIX A. PRESENTATIONS

Fig. A.B.14 Image number 14 shown to group B.

Fig. A.B.15 Image number 15 shown to group B.


APPENDIX A. PRESENTATIONS 51

Fig. A.B.16 Image number 16 shown to group B.

Fig. A.B.17 Image number 17 shown to group B.


52 APPENDIX A. PRESENTATIONS

Fig. A.B.18 Image number 18 shown to group B.


Appendix B

Questions & Survey Answers

B.1 Questions about lecture slides

Table B.1: Group A: Question 1


How do you define the weight of a node in a tree? Points
thesum of characters in the left subtree 0
Based on characeter idndex, unsure of exactly how.. 0
by the summed weight of the left subtree 0
sum of leaf nodes to the left 0.5
Number of characters attached to the string 0
No clue 0
The length of the word 0
sum of length 0

53
54 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.2: Group A: Question 2


How do you Insert a new string into a rope? Points
you split the tree once andthen insert the new subtree, and
concat twice by first spliting the rope, then concating with 1
the string to be inserted then concating the halves togheter
by splitting the rope and then concatenating twice, once for
1
the new string, and once more to join the tree
split it here ou want to insert and then concat with the thing
1
twice
NO 0
Split it then concanate 2 nodes 1
You split and then concatenate 0.5
split old rope then concatenate new string 0.5

Table B.3: Group A: Question 3


What is the last step during a split operation? Points
thesum of characters in the left subtree 0
Based on characeter idndex, unsure of exactly how.. 0
by the summed weight of the left subtree 0
sum of leaf nodes to the left 0.5
Number of characters attached to the string 0
No clue 0
The length of the word 0
sum of length 0
you split the tree once andthen insert the new subtree, and
1
concat twice
by first spliting the rope, then concating with the string to be
1
inserted then concating the halves togheter
by splitting the rope and then concatenating twice, once for
1
the new string, and once more to join the tree
split it here ou want to insert and then concat with the thing
1
twice
NO 0
Split it then concanate 2 nodes 1
You split and then concatenate 0.5
split old rope then concatenate new string 0.5
APPENDIX B. QUESTIONS & SURVEY ANSWERS 55

Table B.4: Group A: Question 4


In the Index operation, during the binary search, what is the
Points
next step if i is larger than the weight of the current node?
go to the right subtree with (i-w)
step left 0
to recursively call the search on the right subtree 0.5
check to the right 0.5
Head left 0
No idea 0
Go to the left node of the node checked 0
move right search for i 0.5

Table B.5: Group A: Question 5


How do you perform a concatenation of two ropes? Points
you add a new rope node with the two subtrees as children 1
dont know 0
by adding a new root node with the two ropes as children 1
don’t remember 0
add them were weight matches 0
No idea 0
Create a root node and count the weight and then create a
1
new rope
NO 0

Table B.6: Group A: Question 6


Which operations run faster on a rope than on a monolithic
Points
array string?
all except index 0.5
NO 0
Splitting and concatenating 0
?? 0
searching for a string at a certain index 0
I dont know what monolithic array string is. 0
Comparing 0
concatenate, insert, delete 1
56 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.7: Group A: Question 7


Which operations run slower on a rope than on a monolithic
Points
array string?
? 0
NO 0
searching by index 0.5
?? 0
NO 0
Same answer 0
NO 0
NO 0

Table B.8: Group A: Question 8


Did you know about this data structure prior to this survey?
No
No
No
No
No
No
No
No

Table B.9: Group A: Question 9


What is it called when an element is said to be present when
Points
it actually isn’t?
false positive 1
false positive 1
a false positive 1
false positive 1
false positive 1
No clue 0
False positive result 1
false positive 1
APPENDIX B. QUESTIONS & SURVEY ANSWERS 57

Table B.10: Group A: Question 10


What is it called when an element is said to be absent when
Points
it actually isn’t?
? 0
false negative 1
a false negative 1
false negative 1
false positive 1
No clue 0
Fake news 0
false negative 1

Table B.11: Group A: Question 11


What is the main advantage of the bloom filter? Points
it is efficient 1
fast and low memory requierment 1
it is fast 1
faster to check whether somethings present 1
More efficient 1
You will alway know if the username is taken 0
Fast respose if a word exists 1
space efficiency? 1

Table B.12: Group A: Question 12


How do we test if an element is present? Points
we use the hash functon and check wheter the index is set to
1
one. If all are set to one the element is probably present
we check if the elements hash has the same modelo as a pre-
vious element with multiple hash functions by checking if 1
every bit returned by the hash functions is 1
we hash it and then check if those positions are all 1, if so,
1
it is present
check the indexing 0
Look at the bits if its set to 1 or not 0.5
We use hashfunctions to see if letters max to indexies and
1
have the number 1 in it
run the hash functions and check if all bits are 1 1
58 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.13: Group A: Question 13


Why can’t elements be deleted from a Bloom Filter? Points
you might delete indexes set to one from other elements 1
because multiple elements could be deleted 1
because deleting one might lead to others being deleted 1
because we don’t know if other elements share some of the
1
hashes, wo we cant just delete all of them
You might delete more strings than wanted 1
We could also delete the other usernames by set a bit to 0 1
Because we can delete other words aswell 1
we could delete some other username 1

Table B.14: Group A: Question 14


What is the false positive probability called? Points
? 0
error rate 0.5
false error rate 1
? 0
NO 0
No clue 0
epsilon 0.5
NO 0

Table B.15: Group A: Question 15


How can the probability of False positive results be de-
Points
creased and why does this work?
By increasing the size of m and the number of hash functions 1
larger filter or more hashong functions 1
by increasing the size and number of hash functions. it
works because the likelyhood of all hash functions return- 1
ing occupied positions is lowered
I think increase k=number of hash functions 0.5
NO 0
No clue 0
By having a bigger m-value 0.5
increase size of m 0.5
APPENDIX B. QUESTIONS & SURVEY ANSWERS 59

Table B.16: Group A: Question 16


What is the negative trade-off when we decrease the proba-
Points
bility of false positive results?
more space is taken up and it is slower 1
NO 0
it’s slower or takes more space 1
takes more time to do all the hashes 0.5
Take more memory 0.5
No clue 0
Bloom filter takes up more space. 0.5
More space requirement 0.5

Table B.17: Group A: Question 17


Did you know about this data structure prior to this survey?
No
No
No
No
No
No
No
No

Table B.18: Group A: Question 18


proclamate Points
Causes the elements to interact, the argument decides which
1
element dominates
causes an interaction between A and B 1
-it causes A and B to interact, with the given argument dom-
1
inating the interaction
a and b interact, the argument is which one dominates 1
Causes A and B to interact 1
Something between A and B 0.5
A and B interact 1
Makes A and B interact 1
60 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.19: Group A: Question 19


Number of arguments for proclamate Points
1 1
0 0
1 1
1 1
2 0
2 0
0 0
1 1

Table B.20: Group A: Question 20


rue Points
Causes two elements to switch position 0.5
Switches A and B 1
-swaps A and B 1
switch a and b 1
Switch place of A and B 1
Switch A and B 1
A and B switch positions 1
switches place of A and B 1

Table B.21: Group A: Question 21


Number of arguments for rue Points
0 0
0 0
2 1
0 0
2 1
0 0
0 0
2 1
APPENDIX B. QUESTIONS & SURVEY ANSWERS 61

Table B.22: Group A: Question 22


gerundive Points
Sets the name 1
generates C based on A and B 0
-sets the name of the scene to the ’cause’ argument 1
change name of scene, argument is the new name 1
The name of the system 0.5
name of the set 0.5
NO 0
change scene name 1

Table B.23: Group A: Question 23


Number of arguments for gerundive Points
1 1
1 1
1 1
1 1
0 0
1 1
1 1
1 1

Table B.24: Group A: Question 24


starfruit Points
Resets the elements 1
Resets A B and C 1
-resets A B and C 1
reset a, b, and c 1
Changes all values back to their original form 1
reset ABC 1
Calculate A and B and send to output depending on where 0
resets all actors 1
62 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.25: Group A: Question 25


Number of arguments for starfruit Points
0 1
0 1
0 1
0 1
0 1
0 1
1 0
0 1

Table B.26: Group A: Question 26


hereto Points
The state of the element? 0
Moves C to argument and prints new position to output 0.5
-does something, the argument decides whether or not it is
0.5
printed to the output
generate c based on a and b, argument is whether to print 1
Returns the shortest path from A to B 0
Gives the outout of C 0.5
Decides if which one of A and B are the dominating one 0
gives C a state depending on A and B 1

Table B.27: Group A: Question 27


Number of arguments for hereto Points
1 1
1 1
1 1
1 1
2 0
0 0
2 0
1 1
APPENDIX B. QUESTIONS & SURVEY ANSWERS 63

Group A: Question 28; What argument names belong to which function

giver
person
cause
8 where
number

proclamate rue gerundive starfruit hereto

Table B.28: Correct Answers: What argument names belong to which func-
tion
function argument name grading
correct row: 1 pt, one extra argument: 0.5
proclamate giver
pt
correct row: 1 pt, one extra argument: 0.5
rue number, person
pt
gerundive cause correct row: 1 pt
starfruit correct row: 1 pt
hereto where correct row: 1 pt
64 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.29: Group A: Question 29


Apple Points
Reverse order of string 1
reverse the string 1
it reverses a string 1
reverse the string 1
All letter to Lowercase 0
Reversed the string 1
Reversing the string 1
reverse the string 1

Table B.30: Group A: Question 30


Number of arguments for Apple Points
0 1
0 1
1 0
0 1
0 1
1 0
1 0
0 1

Table B.31: Group A: Question 31


Bubble Points
All characters will be lowercase 1
sets all characters to lower case 1
it lowercases a string 1
make it all lower case 1
NO 0
Set all to lowercase 1
Makes all characters into lowercase 1
lowercase all characters in string 1
APPENDIX B. QUESTIONS & SURVEY ANSWERS 65

Table B.32: Group A: Question 32


Number of arguments for Bubble Points
0 1
0 1
1 0
0 1
0 1
0 1
1 0
0 1

Table B.33: Group A: Question 33


Cloud Points
Set all characters ch to uppercase 1
Removes all occurances of agrument character 0
it capitalises each ’ch’ in a string 1
turn all occurences of a char into upper case, arguemtn is a
1
char , the one to change
NO 0
Removed the char that ch had 1
Makes ch-characters into uppcase. 1
capitalize all occurances of given char ch 1

Table B.34: Group A: Question 34


Number of arguments for Cloud Points
1 1
1 1
1 1
1 1
0 0
1 1
1 1
1 1
66 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.35: Group A: Question 35


Doodle Points
Replace whitespace with dots 1
Replaces all whitespace with dots 1
it replaces whitespace with dots in a string 1
replace whitespace with dots 1
Removes all whitespaces and dots 0.5
Removed something with dots. 0
NO 0
replace whitespace with dots 1

Table B.36: Group A: Question 36


Number of arguments for Doodle Points
0 1
0 1
1 0
0 1
0 1
0 1
1 0
0 1

Table B.37: Group A: Question 37


Echo Points
Number of arguments for Echo Points
switch to characters at indices specified 1
Switch characters at index a and b (int arguments) 1
it swaps places of two characters at index a and b (integers) 1
swap chars at two positions, given as integers 1
Switches places of two strings 0.5
Switch char from to intergers 1
Swaps elements A and B 1
switch place of letters at two integer positions a, b 1
APPENDIX B. QUESTIONS & SURVEY ANSWERS 67

Table B.38: Group A: Question 38


Number of arguments for Echo Points
2 1
2 1
2 1
2 1
2 1
2 1
2 1
2 1

Table B.39: Group B: Question 1


How do you define the weight of a node in a tree? Points
The weight of the leaves on the left node. 1
it’s the length of the strings in the leaf nodes at the left sub-
1
tree
for the leaf nodes its the number of characters in the string,
0
and for the others its index based?
The number of elements/objects in its leftsubtree 0.5
The number of leaf children in the node 0
I dont remember 0
weight/length of the left child to the node 0
The amount of characters in as the sum of traversing to the
0
left most leaves
68 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.40: Group B: Question 2


How do you Insert a new string into a rope? Points
Cut where you want to insert the new string, then do 2 con-
1
catinations.
You have to split once and concentate three times. Splitting
them enables them to add a new root and a right subtree, to 0.5
lastly add them back to the original rope.
no clue, but since its a string I would assume it’s at the "end"
0
of the tree (to the right)
Split at the appropriate position, then place it there, lastlye
1
concatinate the three segments
Cut one branch and concatenate new leaves adding a parent
0
node to both children
By first splitting the string into two trees, placing the new
1
string inbetween and concatenating
1 split and then 2 concatination 1
split and the concatenate twice 1

Table B.41: Group B: Question 3


What is the last step during a split operation? Points
Connecting all of the orphaned nodes. 1
adding the loose nodes back to the cut off tree 1
can’t spell it but mergin the loose leafs back into a tree 1
look for orphans and bind them 1
Concatenation 0
making sure all split nodes are under/insida a tree 0.5
adding the cut out part to a new independent rope 0
maybe binding the leaf that is broken away from the tree? 0.5
APPENDIX B. QUESTIONS & SURVEY ANSWERS 69

Table B.42: Group B: Question 4


In the Index operation, during the binary search, what is the
Points
next step if i is larger than the weight of the current node?
Search the right node for index (w - i) 1
Search the right subtree 0.5
move right 0.5
go to the right 0.5
Move to the right branch and continue searching down that
0.5
tree
Go right 0.5
look in the right side of the rope 0.5
go left in the tree 0

Table B.43: Group B: Question 5


How do you perform a concatenation of two ropes? Points
Create a new node and calculate its weight. 0.5
add a new upper root node and add one of them as left sub-
0.5
treeand the other as the right
check their values i assume 0
cut out the middle part (two cuts) then conncatinare 0
Cut once and concatenate twice 0
By placing both as child nodes under a new node 1
.. 0
simply add another root above and then bind it two the roots
1
of the two ropes

Table B.44: Group B: Question 6


Which operations run faster on a rope than on a monolithic
Points
array string?
Delete, insert, search 0.5
Split 0.5
no clue 0
NO 0
Split operations 0
Concatenation, 0.5
concati 0.5
concatenation and splitting? 0
70 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.45: Group B: Question 7


Which operations run slower on a rope than on a monolithic
Points
array string?
NO 0
NO 0
also no clue 0
NO 0
Linear operations 0
index, delete, 0.5
search 0.5
reading from a specific index 0.5

Table B.46: Group B: Question 8


Did you know about this data structure prior to this survey?
Yes
No
No
Yes
Yes
No
No
No

Table B.47: Group B: Question 9


What is it called when an element is said to be present when
Points
it actually isn’t?
False positive 1
false positive 1
false-positive 1
False positive 1
false positive error 1
False positive 1
false positive 1
false positive 1
APPENDIX B. QUESTIONS & SURVEY ANSWERS 71

Table B.48: Group B: Question 10


What is it called when an element is said to be absent when
Points
it actually isn’t?
False negative 1
false negative 1
false-negative 1
False negative 1
false negative error 1
false negative 1
false negative 1
false negative 1

Table B.49: Group B: Question 11


What is the main advantage of the bloom filter? Points
Efficient 0.5
It’s a way of probalistically determining if something is there
1
or not. Much more space than a hashtable and faster.
efficient, can store a lot of data and operate on it quickly 1
Efficency, space effective 1
its space efficient and fast 1
Space efficiency 0.5
space efficient 0.5
no false negatives 0.5
72 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.50: Group B: Question 12


How do we test if an element is present? Points
Check if all the hashes are set to 1 1
process the element through several hash functions and
1
check if all these positions in the set is set as 1
evaluate the k number of hash codes, see if they have a value
1
of 1 or 0, if they all have 1 its likely present
Rune the hash function to obtain indicies, then check to see
1
if all are 1
we hash the value of our string and see if the bit in the index
1
is set to 1
By running our hash functions and checking if ALL the re-
1
cieved indicies are set to 1
look if all hashes returns 1 1
check if the hashes for the element all hold the value 1 1

Table B.51: Group B: Question 13


Why can’t elements be deleted from a Bloom Filter? Points
Because one of the ones may be used for multiple elements 1
as it could delete an existing element that is supposed to be
1
present in the set
because you may accidentally delete another another ele-
1
ment, as they may share hash codes
Deletion can affect other elements 1
because multiple hash inputs can generate the same index
1
value causing overlap for some strings
Because we might delete the result for some other element 1
i may delete other elements. One element corresponds to
1
multiple bits
because it might delete something else too 1
APPENDIX B. QUESTIONS & SURVEY ANSWERS 73

Table B.52: Group B: Question 14


What is the false positive probability called? Points
NO 0
false error rate 1
epsilon? or the e symbol 0.5
error rate 0.5
NO 0
Dont know 0
error rate 0.5
the greek letter for e (epsilon?) 0.5

Table B.53: Group B: Question 15


How can the probability of False positive results be de-
Points
creased and why does this work?
Increasing m (the size) since that will give more possible
0.5
hashes
increasing the size of the set and the amount of hash func-
1
tions. That way elements won’t concede as much.
increasing the number of hash codes (k), but its then slower,
0.5
or increasing the size of available hash codes
increasing the nubmer of hashes, and the number of bits,
less likley to coincide by increasing the size of the array and
1
number of hash functions, allowing a larger range of opera-
tion
By increasing the size of the array and the number of hash
1
functions
more hash functions 0.5
by increasing the size of the array. Because there will be
0.5
more places for the hash functions to place the "ones"
74 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.54: Group B: Question 16


What is the negative trade-off when we decrease the proba-
Points
bility of false positive results?
NO 0
it takes up much more space and time to run the operations 1
it’s slower and takes more space 1
takes up more space 0.5
NO 0
We loose some space efficiency and also increase the num-
1
ber of operations if we use more hash functions
.. 0
More space and computation time required 1

Table B.55: Group B: Question 17


Did you know about this data structure prior to this survey?
No
No
No
No
No
No
Yes
No

Table B.56: Group B: Question 18


proclamate
NO 0
it interacts a and b where the giver is the more dominant one 1
the interaction one? 0.5
causes A and B to interact 1
makes A and B interact 1
Swap A and B 0.5
makes a and b interact with each other 1
makes A and B interact, dominant one depends on the argu-
1
ment
APPENDIX B. QUESTIONS & SURVEY ANSWERS 75

Table B.57: Group B: Question 19


Number of arguments for proclamate Points
1 1
1 1
2 0
1 1
1 1
2 0
1 1
1 1

Table B.58: Group B: Question 20


rue Points
Switch A and B 1
switches the position between A and B 1
switch A and B? 1
Switch places 0.5
switches position of A and B 1
NO 0
switches a and b 1
switches position of A and B 1

Table B.59: Group B: Question 21


Number of arguments for rue Points
2 1
2 1
1 0
2 1
1 0
2 1
2 1
0 0
76 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.60: Group B: Question 22


gerundive Points
Sets the name of the scene to the argument 1
something scene 0.5
nope 0
Set the name of the scene to the argument 1
sets C to value of A and B 0
NO 0
dont know 0
changes the name of something 0.5

Table B.61: Group B: Question 23


Number of arguments for gerundive Points
1 1
1 1
1 1
1 1
1 1
1 1
2 0
1 1

Table B.62: Group B: Question 24


starfruit Points
Resets A, B and C 1
resets the position A, B and C 0
reset A, B and C? 1
reset A B C 1
resets everything A B and C 1
Resets A and B 1
resets all elements 1
resets A B and C 1
APPENDIX B. QUESTIONS & SURVEY ANSWERS 77

Table B.63: Group B: Question 25


Number of arguments for starfruit Points
2 0
0 1
3 0
0 1
0 1
0 1
0 1
0 1

Table B.64: Group B: Question 26


hereto Points
Generates C based on A and B. The argument determines
1
whether to print the result
decides if it should print the position/state of A, B and C 0.5
print and where? 0.5
NO 0
sets the scene name 0
NO 0
prints results 0.5
outputs C depending on A and B. The argument decides if
1
there should be an output or not

Table B.65: Group B: Question 27


Number of arguments for hereto Points
1 1
1 1
1 1
1 1
1 1
1 1
1 1
1 1
78 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Group B: Question 28; What argument names belong to which function

giver
person
cause
8 where
number

proclamate rue gerundive starfruit hereto

Table B.66: Correct Answers: What argument names belong to which func-
tion
function argument name grading
correct row: 1 pt, one extra argument: 0.5
proclamate giver
pt
correct row: 1 pt, one extra argument: 0.5
rue number, person
pt
gerundive cause correct row: 1 pt
starfruit correct row: 1 pt
hereto where correct row: 1 pt
APPENDIX B. QUESTIONS & SURVEY ANSWERS 79

Table B.67: Group B: Question 29


Apple Points
Reverses the string 1
replace all whitespace with dots 0
Reverse the array 1
Reverse the string 1
reverses a string 1
Reverse the string 1
reverses a string 1
Reverses string 1

Table B.68: Group B: Question 30


Number of arguments for Apple Points
0 1
0 1
0 1
0 1
0 1
0 1
0 1
0 1

Table B.69: Group B: Question 31


Bubble Points
Makes the string lowercase 1
reverse the string 0
make the string all lower case 1
Replace spaces with dots 0
lowercases everything in a string 1
Set string to lowercase 1
replaces whitespace with dots 0
makes everything lowercase 1
80 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.70: Group B: Question 32


Number of arguments for Bubble Points
0 1
0 1
0 1
0 1
0 1
0 1
0 1
0 1

Table B.71: Group B: Question 33


Cloud Points
Capitalizes all instances of the character ch 1
replace character at position integer a to integer b 0
Make all instances of a specific upper case character capi-
1
talized
take an argument, and make all occurences of it uper case 1
Sets the argument ch to capital letters 1
Set all occurences of ch to uppercase 1
switches index a and b 0
captilasizes all occurences of the character argument 1

Table B.72: Group B: Question 34


Number of arguments for Cloud Points
1 1
2 0
1 1
1 1
1 1
1 1
2 0
1 1
APPENDIX B. QUESTIONS & SURVEY ANSWERS 81

Table B.73: Group B: Question 35


Doodle Points
Replaces all whitespaces with dots 1
NO 0
replace space with a dot 1
Make the entire string lower case 0
sets all whitespace to dots 1
Replace whitespace with dots 1
capitalize character ch 0
changes whitespaces to dots 1

Table B.74: Group B: Question 36


Number of arguments for Doodle Points
0 1
0 1
0 1
0 1
0 1
0 1
1 0
0 1

Table B.75: Group B: Question 37


Echo Points
Switches the character at index a with the character at index
1
b
NO 0
Replace two characters(???) with each other 1
Swap positions of two elements (with indicies as arguments) 1
takes two integer parameters a,b and swaps them 1
Swap the position of index a and b 1
lowercase all 0
Swaps the characters on the indexes of the two integer argu-
1
ments
82 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Table B.76: Group B: Question 38


Number of arguments for Echo Points
2 1
0 0
2 1
2 1
2 1
2 1
0 0
2 1
APPENDIX B. QUESTIONS & SURVEY ANSWERS 83

B.2 Meta questions


Did you feel like the colours in some of the presented slides helped you
remember the contents of the slide?
Group A

62.5% Yes it helped


No it was worse
It did not make any difference
37.5%
0%

Group B

Yes it helped
87.5% No it was worse
0% It did not make any difference
12.5%

Did you find any section more difficult to remember?

6 Colour coded
Black and White
5

Rope Bloom Dictionary Dottidot


84 APPENDIX B. QUESTIONS & SURVEY ANSWERS

If you answered yes on the previous question, please state why:

Table B.77: Group A


rope felt like more info to remember, more slides. dictionary felt less
interesting
"On the presentations with multiple slides one could not refer back to
information when it was refered to later. Many aspects of rope were very
confusing to me, since i missed how the node weights were determined.
Bloom was somewhat easier than Rope, but I feel more comfortable with
that sort of data type overall."
Rope was the most complex with the most possible interactions, and the
Dictionary Library seemed constructed to confuse the user with unintu-
itive names and arguments
I think I was focused on understanding the core concept, so completely
ignored the time complexity
The method names didn’t represent what the method did.
Det var för mycket text och för mycket klotter på varje slide. Medans
dem andra var mindre text att läsa men mycket mer förklarande på ett
enkelt sätt.
The name of different functions made no sense to the actions itself.
The presentations were longer, making it harder to remember it all
APPENDIX B. QUESTIONS & SURVEY ANSWERS 85

Table B.78: Group B


Everything feelt random so nothing to latch onto. Nothing to understand,
just remembering random names.
I somehow blanked out all information except all the different colours
so I couldn’t remember which function corresponded to which method
signature.
Rope was difficult as there were many words I didn’t have prior expe-
rience with, so part of the time was spent trying to just understand the
meaning of the words. There was also so much text, it felt like I was just
thrown a bunch of words. The dictionary library just had me 100% con-
fused, I in all honesty understood no part of it, but I respect the attempt
at a visual guide, unfortunately I didn’t get much from it at all.
The dictionary library was difficult since the names were very unitua-
tive, and the operations felt abstract. With the dottidot, i could at least
understand the procedure. Bloom and rope were both new to me, and
part of them was diificult. Rope hade so much data to remember, like
the time complexity for everyting.
Bloom was easier, but more unituative for me
"The Rope section felt very foreign with many tiny details and vocabu-
lary to remember while still having similarities between some concepts
making it easy to mix up and get it wrong.
The Bloom section was slightly easir than Rope in comparison however
it still had some vocabulary and details that could get mixed up unless
one has a proper understanding
The Dictionary Library had names that made no sense and it was a mat-
ter of rote memorization in order to understand what the functions were
doing and getting the parameters right. It is worth noting that it made
the Dottidot Library slightly easier because I understood from the Dic-
tionary Library that I had to adjust my learning strategy for the task."
They felt less intuitive and more confusing. I did not really know what
i was looking at and some things were not explained properly or at all.
The colors did help but i also feel like the content was better for those
with color.
Rope: a lot of new information about a dataset i never heard of. A bit
too much text on each slide for me.
Rope. Due to the large amounts of barely whitespaced/formatted text.
Dictionary because the picture, the names of the functions and their ar-
guments and the weird formatting made it very hard to read and remem-
ber
86 APPENDIX B. QUESTIONS & SURVEY ANSWERS

Do you have any reading difficulties (dyslexia etc)?


Group A

No
100% 0% Yes

Group B

No
87.5%
Yes
12.5%

Are you colour blind?


Group A

No
100% 0% Yes

Group B

No
87.5%
Yes
12.5%
Appendix C

Heat Maps

C.1 Heat maps, coloured images

Fig. C.1.1 Heat map for participants shown image 1 in colour.

87
88 APPENDIX C. HEAT MAPS

Fig. C.1.2 Heat map for participants shown image 2 in colour.

Fig. C.1.3 Heat map for participants shown image 3 in colour.


APPENDIX C. HEAT MAPS 89

Fig. C.1.4 Heat map for participants shown image 4 in colour.

Fig. C.1.5 Heat map for participants shown image 5 in colour.


90 APPENDIX C. HEAT MAPS

Fig. C.1.6 Heat map for participants shown image 6 in colour.

Fig. C.1.7 Heat map for participants shown image 7 in colour.


APPENDIX C. HEAT MAPS 91

Fig. C.1.8 Heat map for participants shown image 8 in colour.

Fig. C.1.9 Heat map for participants shown image 9 in colour.


92 APPENDIX C. HEAT MAPS

Fig. C.1.10 Heat map for participants shown image 10 in colour.

Fig. C.1.11 Heat map for participants shown image 11 in colour.


APPENDIX C. HEAT MAPS 93

Fig. C.1.12 Heat map for participants shown image 12 in colour.

Fig. C.1.13 Heat map for participants shown image 13 in colour.


94 APPENDIX C. HEAT MAPS

Fig. C.1.14 Heat map for participants shown image 14 in colour.

Fig. C.1.15 Heat map for participants shown image 15 in colour.


APPENDIX C. HEAT MAPS 95

Fig. C.1.16 Heat map for participants shown image 16 in colour.

Fig. C.1.17 Heat map for participants shown image 17 in colour.


96 APPENDIX C. HEAT MAPS

Fig. C.1.18 Heat map for participants shown image 18 in colour.

C.2 Heat maps, black and white images

Fig. C.2.1 Heat map for participants shown image 1 in black and white.
APPENDIX C. HEAT MAPS 97

Fig. C.2.2 Heat map for participants shown image 2 in black and white.

Fig. C.2.3 Heat map for participants shown image 3 in black and white.
98 APPENDIX C. HEAT MAPS

Fig. C.2.4 Heat map for participants shown image 4 in black and white.

Fig. C.2.5 Heat map for participants shown image 5 in black and white.
APPENDIX C. HEAT MAPS 99

Fig. C.2.6 Heat map for participants shown image 6 in black and white.

Fig. C.2.7 Heat map for participants shown image 7 in black and white.
100 APPENDIX C. HEAT MAPS

Fig. C.2.8 Heat map for participants shown image 8 in black and white.

Fig. C.2.9 Heat map for participants shown image 9 in black and white.
APPENDIX C. HEAT MAPS 101

Fig. C.2.10 Heat map for participants shown image 10 in black and white.

Fig. C.2.11 Heat map for participants shown image 11 in black and white.
102 APPENDIX C. HEAT MAPS

Fig. C.2.12 Heat map for participants shown image 12 in black and white.

Fig. C.2.13 Heat map for participants shown image 13 in black and white.
APPENDIX C. HEAT MAPS 103

Fig. C.2.14 Heat map for participants shown image 14 in black and white.

Fig. C.2.15 Heat map for participants shown image 15 in black and white.
104 APPENDIX C. HEAT MAPS

Fig. C.2.16 Heat map for participants shown image 16 in black and white.

Fig. C.2.17 Heat map for participants shown image 17 in black and white.
APPENDIX C. HEAT MAPS 105

Fig. C.2.18 Heat map for participants shown image 18 in black and white.
Appendix D

Python Code

D.1 Code to collect data

collect.py

import subprocess
import os
from pynput import keyboard
import time

# Event on key released, if the button released is pause


/break - returns false
# which stops the listener and returns control to the
main process
def on_release(key):
if key == keyboard.Key.pause:
return False

# Runs in a loop, collect keyboard events until released


(wait until we want to start the next test)
def main():
with keyboard.Listener(
on_release=on_release) as listener:
listener.join()

106
APPENDIX D. PYTHON CODE 107

# Start the keyboard listener, that will detect when we


want to start collecting data
def start_listener():
keyboard.Listener.start
main()

# Start the subprocess that collects data from the eye-


tracker
def gather_eye_data(sleep_time, i):
proc = subprocess.Popen("UserPresenceWpf.exe")
time.sleep(sleep_time - 1)
proc.kill()
time.sleep(0.5)
os.rename(r"Output/gazeDataOutput.csv", r"Data/output
" + str(i) + ".csv")
time.sleep(0.5)

# Wait until pause/break is pressed and then collect


data for each slide in each test
if __name__ == ’__main__’:
start_listener()
gather_eye_data(120, 1)
gather_eye_data(60, 2)
gather_eye_data(30, 3)
gather_eye_data(30, 4)
gather_eye_data(30, 5)
gather_eye_data(30, 6)
gather_eye_data(30, 7)
gather_eye_data(60, 8)
gather_eye_data(60, 9)
start_listener()
gather_eye_data(120, 10)
gather_eye_data(30, 11)
gather_eye_data(30, 12)
gather_eye_data(30, 13)
gather_eye_data(30, 14)
gather_eye_data(30, 15)
gather_eye_data(30, 16)
start_listener()
gather_eye_data(90, 17)
108 APPENDIX D. PYTHON CODE

start_listener()
gather_eye_data(90, 18)

D.2 Code to analyse data

analyze.py

import time
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from matplotlib.colors import LinearSegmentedColormap
import math
import os
import argparse

# Constants:
DISTANCE_AVERAGE_STEP_SIZE = 10
NUM_PERSONS_PER_GROUP = 8
NUM_TESTS = 19

a_path = "A/P"
b_path = "B/P"
col_output_path = "Maps/Colour/FigCol"
bw_output_path = "Maps/BW/FigBW"

total_gaze_distance_col = 0
total_gaze_distance_bw = 0

total_col_data_points = 0
total_bw_data_points = 0

# Holds the individual gaze distances and number of data


points for each test and each group.
# "testNum-bw/col": [total gaze distance, num data
points]
test_gaze_distances = {
"1bw": [0, 0],
"1col": [0, 0],
"2bw": [0, 0],
APPENDIX D. PYTHON CODE 109

"2col": [0, 0],


"3bw": [0, 0],
"3col": [0, 0],
"4bw": [0, 0],
"4col": [0, 0],
}

# The length of each individual test.


# Total time of tests = 930s - 0.5s*18 (for each slide)
= 911s
test_lengths = {
"1": 450 - 0.5*9,
"2": 300 - 0.5*7,
"3": 90 - 0.5,
"4": 90 - 0.5,
}

# Analyses results from each image for every person in


the group and generates corresponding heatmap.
def analyse(num):
scale = 16

map_data_col = [0]*(1080//scale)
col_data_points = 0
map_data_bw = [0]*(1080//scale)
bw_data_points = 0

# Setup matrix holding values that will be used for


the heatmap
for x in range(0, 1080//scale):
map_data_col[x] = [0]*(1920//scale)
map_data_bw[x] = [0]*(1920//scale)

# Read the data from all files, and separate them


into appropriate groups (Colour/no colour)
for i in range(1, NUM_PERSONS_PER_GROUP*2 + 1):

if i > NUM_PERSONS_PER_GROUP:
# Reading from group B
i = ((i - 1) % 8) + 1
110 APPENDIX D. PYTHON CODE

path = b_path

# Data files 1-9 are the first test, 10-16


are the second, 17 was the third one and
18 was the fourth and final.
# 1-9 and 17 were colour for group A, while
10-16 and 18 were coloured for group B
if num < 10:
# This test was BW
is_col = False
current_test = 1
elif num < 17:
# This test was in Colour
is_col = True
current_test = 2
elif num == 17:
# This test was BW
is_col = False
current_test = 3
else:
# This test was in Colour
is_col = True
current_test = 4

else:
# Reading from group A
path = a_path
if num < 10:
# This test was BW
is_col = True
current_test = 1
elif num < 17:
# This test was in Colour
is_col = False
current_test = 2
elif num == 17:
# This test was BW
is_col = True
current_test = 3
else:
# This test was in Colour
is_col = False
APPENDIX D. PYTHON CODE 111

current_test = 4

filename = "".join([path, str(i), "/Data/output",


str(num), ".csv"])
data_points = 0
if os.path.isfile(filename):
print("reading: {} ({})".format(filename, ("
colour" if is_col else "bw")), end=" ")
with open(filename) as f:
lines = f.readlines()

eye_data = []

# Add each result to relevant data


structure
i = 0
for line in lines:
i += 1
if i > 2:
spl = line.split(",")
x = int(spl[0])
y = int(spl[1])

eye_data.append([x, y])

map_y = x // scale - 1
map_x = y // scale - 1
if map_x >= 1080 // scale or map_y
>= 1920 // scale:
pass
else:
if is_col:
map_data_col[map_x][map_y]
+= 1
col_data_points += 1
else:
map_data_bw[map_x][map_y]
+= 1
bw_data_points += 1

data_points += 1
112 APPENDIX D. PYTHON CODE

# Analyse gaze distance travelled.


# Gathers average location for gaze
points at 5 steps at a time and
calculates distance between them.
step_size = DISTANCE_AVERAGE_STEP_SIZE

gaze_distance = 0
avg_positions = []
eye_data_length = len(eye_data)
for i in range(0, eye_data_length -
step_size, step_size):
eye_pos_x = []
eye_pos_y = []
for j in range(0, step_size):
if eye_data_length <= i*step_size
+ j:
break
x = eye_data[i*step_size + j][0]
y = eye_data[i*step_size + j][1]
eye_pos_x.append(x)
eye_pos_y.append(y)

if len(eye_pos_x) > 0:
avg_x = sum(eye_pos_x)/len(
eye_pos_x)
avg_y = sum(eye_pos_y)/len(
eye_pos_y)
avg_positions.append([avg_x, avg_y
])

# calculate distance traveled and sum


for i in range(0, len(avg_positions) - 1)
:
x = avg_positions[i][0]
y = avg_positions[i][1]
x_next = avg_positions[i+1][1]
y_next = avg_positions[i+1][1]

dist = math.hypot(x_next - x, y_next -


y)
gaze_distance += dist
APPENDIX D. PYTHON CODE 113

# Add distance results to relevant


structure
global test_gaze_distances
if is_col:
global total_gaze_distance_col
global total_col_data_points
total_gaze_distance_col +=
gaze_distance
total_col_data_points +=
col_data_points

test_gaze_distances["".join([str(
current_test), "col"])][0] +=
gaze_distance
test_gaze_distances["".join([str(
current_test), "col"])][1] +=
col_data_points

else:
global total_gaze_distance_bw
global total_bw_data_points
total_gaze_distance_bw +=
gaze_distance
total_bw_data_points += bw_data_points

test_gaze_distances["".join([str(
current_test), "bw"])][0] +=
gaze_distance
test_gaze_distances["".join([str(
current_test), "bw"])][1] +=
bw_data_points

print("{} data points".format(data_points


))
else:
print("File doesn’t exist: {}".format(
filename))

# Equalize the number of data points in each figure,


so the results are not skewed due to there being
more
# data points in one set of data.
114 APPENDIX D. PYTHON CODE

if col_data_points > bw_data_points:


diff = bw_data_points/col_data_points
map_data_col = [[math.ceil(item * diff) for item
in sublist] for sublist in map_data_col]
else:
diff = col_data_points/bw_data_points
map_data_bw = [[math.ceil(item * diff) for item
in sublist] for sublist in map_data_bw]

global should_plot
if should_plot:
plot(map_data_col, num, col_output_path)
print("".join(["(", str(col_data_points), " data
points)\n"]))

plot(map_data_bw, num, bw_output_path)


print("".join(["(", str(bw_data_points), " data
points)\n"]))
else:
print("")

# Plot the heat map for one image.


def plot(map_data, num, out_path):
fig = plt.figure(figsize=(16, 9), dpi=100)
fig.set_facecolor("black")
ax = plt.Axes(fig, [0., 0., 1., 1.])
ax.set_axis_off()
fig.add_axes(ax)
plt.imshow(np.array(map_data), cmap=’Reds_alpha’,
interpolation=’spline36’)
out_filename = "".join([out_path, str(num)])
fig.savefig(out_filename, transparent=True)
print("".join(["saved heatmap as: ", out_filename,
"!"]), end=" ")
plt.close(fig)

if __name__ == ’__main__’:
# Receive command line arguments.
parser = argparse.ArgumentParser("analyze")
parser.add_argument("-no-plot", dest="plot", help="
APPENDIX D. PYTHON CODE 115

Creates no plot images (runs faster)", action="


store_false")
parser.set_defaults(plot=True)
args = parser.parse_args()

should_plot = args.plot
print(should_plot)

# Create a custom colour map for the heatmaps.


num_colors = 256
color_array = plt.get_cmap(’Reds’)(range(num_colors)
)
color_array[:, -1] = np.linspace(0.0, 0.95,
num_colors)
map_object = LinearSegmentedColormap.from_list(name
=’Reds_alpha’, colors=color_array)
plt.register_cmap(cmap=map_object)

for i in range(1, NUM_TESTS):


analyse(i)

# Results printout
print("Total Gaze Distance Travelled for colour:
{}".format(total_gaze_distance_col))
print("Total Gaze Distance Travelled for black-white
: {}".format(total_gaze_distance_bw))
print()
avg_gd_col = total_gaze_distance_col/(911*
NUM_PERSONS_PER_GROUP)
avg_gd_bw = total_gaze_distance_bw / (911*
NUM_PERSONS_PER_GROUP)

print("Average Gaze Distance Covered Per Second


colour: {}".format(avg_gd_col))
print("Average Gaze Distance Covered Per Second
black-white: {}".format(avg_gd_bw))
print()
print("Total Data Points Colour: {}".format(
total_col_data_points))
print("Total Data Points BW: {}".format(
total_bw_data_points))
average_data_points = (total_bw_data_points +
116 APPENDIX D. PYTHON CODE

total_col_data_points)/2
print()
print("Average Gaze Distance Covered Normalised by
num data points, Colour: {}"
.format(average_data_points * avg_gd_col/
total_col_data_points))
print("Average Gaze Distance Covered Normalised by
num data points, BW: {}"
.format(average_data_points * avg_gd_bw/
total_bw_data_points))

for i in range(1, 5):


keyBW = "".join([str(i), "bw"])
keyCol = "".join([str(i), "col"])

avg_gd_col = test_gaze_distances[keyCol][0]/(
NUM_PERSONS_PER_GROUP*test_lengths[str(i)])
dp_col = test_gaze_distances[keyCol][1]
avg_gd_bw = test_gaze_distances[keyBW][0]/(
NUM_PERSONS_PER_GROUP*test_lengths[str(i)])
dp_bw = test_gaze_distances[keyBW][1]
avg_dp = (dp_col + dp_bw)/2
normalised_gd_col = avg_dp * avg_gd_col / dp_col
normalised_gd_bw = avg_dp * avg_gd_bw / dp_bw

print("Normalised gaze distances for test {},


Colour: {}, BW: {}".format(i,
normalised_gd_col, normalised_gd_bw))
TRITA -EECS-EX-2021:474

www.kth.se

You might also like