
Submitted for the Degree of BSc in Computer Science 2019-2020

Peripheral Tablet Keyboard


JULIA TENNANT

201616633

Except where explicitly stated all the work in this report, including appendices, is
my own and was carried out during my final year. It has not been submitted for
assessment in any other context.

I agree to this material being made available in whole or in part to benefit the
education of future students.

Signed: ________________________ Date: 05/04/2020


Abstract
In theory, tablet computers seem like the ideal middle ground between mobile computing and
desktop computing: lightweight and portable, but with a screen large enough to carry out
visually oriented tasks. Yet tablets have their downsides, most notably text entry.

One solution to this uncomfortable text entry problem is to use a split keyboard, but this too
has drawbacks, owing to the constant switching of attention required to locate and then press
each key. This project aims to improve on that by applying research and experimentation to
create a split keyboard that encourages and assists peripheral typing.
Acknowledgements

First and foremost I would like to thank Dr. Mark Dunlop, my supervisor for this project, for all
the guidance and advice throughout the project, especially when it came to data analysis. I
would also like to thank my second marker, Dr. John Wilson, for all the valuable feedback
on the presentation that helped improve the project.

I would also especially like to thank everyone who participated in my experiment, as well as
those gems who offered to do it without even being asked. This project would not have been
possible without you.
Contents

Abstract

Acknowledgements

Contents

1 Introduction
1.1 Problem Summary
1.2 Project Objectives
1.3 Outcomes of Project
1.4 Report Structure

2 Background and Related Research
2.1 Typing With Split Screen Keyboards on Tablet Devices
2.2 Capabilities of Peripheral Vision
2.2.1 Colour
2.2.2 Detail
2.2.3 Motion
2.3 Peripheral Technologies
2.4 Summary of Background and Related Work

3 Software Specification
3.1 Problem to Be Solved
3.2 Arriving at Specification
3.3 Specification
3.3.1 Functional Requirements
3.3.2 Non-Functional Requirements

4 System Design
4.1 Design of Project
4.2 Design Process
4.3 Experiment Development
4.3.1 Dot Buttons
4.3.2 Coloured Keys
4.3.3 Blurred Keys
4.4 Final Design Based on Results

5 Design and Implementation
5.1 HTML
5.2 CSS
5.3 JavaScript
5.4 PHP

6 Verification and Validation
6.1 Ensuring project satisfies specification
6.2 Test Procedures
6.2.1 During Development
6.2.2 After Development

7 Results and Evaluation
7.1 Final Outcome of Project
7.2 Experiment
7.2.1 Intro to the Experiment
7.2.2 Materials
7.2.3 Method
7.2.4 Data
7.2.4.1 Typing Errors
7.2.4.2 Touch Accuracy
7.2.4.3 Typing Speed
7.2.4.4 User Preference
7.2.5 Discussion
7.3 Final Evaluation of Design Created
7.4 Lessons Learnt

8 Summary and Conclusion
8.1 Success of Objectives
8.2 Issues that Occurred
8.3 Further and future development
8.3.1 Evaluation Experiment
8.3.2 Word Correction Algorithm
8.3.3 Further Additional Characters
8.3.4 Creating Keyboard
8.4 Final Conclusion

9 References

10 Appendix A - Experiment Results Data
10.1 Sample of Touch Data
10.2 Sample of Errors Per Phrase
10.3 Participant Preference

11 Appendix B - Experiment Further Details
11.1 Participant Instructions
11.2 Phrases Typed (Including Practice Phrases)

12 Appendix C - Detailed Test Results

13 Appendix D - User Guide
13.1 Prerequisites
13.2 Accessing
13.3 Use
1 Introduction

1.1 Problem Summary

With large-screen tablet devices, text entry can often be a slow and uncomfortable
experience because of the software keyboard that is the default on almost all devices of this
kind. The keyboard takes up just under half the screen space in order to emulate the key
positioning of a physical QWERTY keyboard, the standard for most PCs and laptops. While
this layout works well for those devices, a tablet screen does not facilitate the ten-finger
typing style that a keyboard of this size requires, particularly on a portable device.

Instead of this hard-to-use layout, the split keyboard model has been growing in popularity:
the keyboard is split in half and each half is placed in a lower corner of the tablet screen.
This, however, also has drawbacks, mainly eye and neck fatigue from constantly having to
shift focus between three points on the screen (each half of the keyboard and the text output
area).

This shifting of focus and its effects were examined by Lu et al. (2019) in their paper
"Typing on Split Keyboards with Peripheral Vision". They determined that typing with the
split keyboard visible while keeping the eyes focused on the text output area provides the
optimal trade-off between speed and accuracy, as users can see the key locations in their
periphery but do not have to locate each key before pressing it.
Even though this method is optimal for typing, standard split keyboards do not encourage it,
so users cannot be expected to adhere to it.

The ideal split tablet keyboard would encourage peripheral typing by assisting the user’s
peripheral vision in detecting keys, whilst also discouraging them from looking directly at
the keys.

1.2 Project Objectives

The main objective of this project was to create the optimal split tablet keyboard to facilitate
peripheral typing. As noted above, such a keyboard would both discourage looking directly at
the keys and supplement the benefits of using peripheral vision. This project had five
objectives to achieve this:
• Establish an understanding of the capabilities and limitations of peripheral vision in
the human eye
• Develop an array of keyboard designs that assist with peripheral typing whilst also
discouraging the user’s vision from switching between the text output area and the
keyboard itself
• Of these designs, determine the optimal features and style by conducting an
experiment to measure both the speed and accuracy of each whilst also taking into
consideration user preference and any external factors that may impact their
decision.
• Create a working spelling correction algorithm that works while the user is typing, to
reduce the need for slowing down and correcting misspelled words.
• Create a working prototype keyboard that consolidates the research and results into
its final design.

1.3 Outcomes of Project

The outcome of this project has been a successful prototype peripheral tablet keyboard
whose design encourages peripheral typing from its users, and thus increases accuracy and
speed compared to typing peripherally with a standard split keyboard.

The keyboards that were evaluated in the experiment showed that overall accuracy was
improved by the dot buttons and that speed was improved by a number of factors, detailed
further in the analysis of the experiment.

This all contributed to the final design of the keyboard, which combined the favoured
elements of each style.

1.4 Report Structure

This report essentially follows the linear progression of how the project took shape. The next
chapter details the background research that was required to gain an understanding of the
capabilities of human peripheral vision; it also further explains the background paper by
Lu et al., as well as similar research and products that exist. Following that are the software
specification and the system design.

Appendices at the end provide data, such as the experiment results, that are referenced
throughout the report, as well as the user manual for the final design of the project.

2 Background and Related Research

Three main themes of research were necessary for the development of this project. The first
is typing technologies on mobile devices, which includes the paper that was the basis for the
project. The second is how this all relates to a person’s peripheral vision, and the issues and
advantages posed by creating something intended for peripheral use. The final theme is the
notion of peripheral technologies. The first chapter mentioned that when users have to
constantly switch their focus between the task they are completing and the means of doing
it, they often underperform in the task; this section contains further evidence of this and of
how peripheral technologies improve task performance.

2.1 Typing With Split Screen Keyboards on Tablet Devices

Tablet computers are ideal for tasks that involve mostly visual output, such as watching
videos or presentations, or playing mobile games. When it comes to tasks that involve
textual input, they are often reported to feel lacking (Lenovo, 2019), which may discourage
users who wish to use them for tasks like report writing and email correspondence. This
problem is further emphasised by users having to spend more money on physical hardware
keyboards that allow them to use the device like a laptop, defeating the purpose of the
tablet’s portability. People like to type with a keyboard that feels familiar, so they choose the
hard QWERTY keyboard over the soft onscreen one. This is because ten-finger typists use
memory and key feel to work out quickly where each key is, and the onscreen keyboard
does not provide the haptic feedback that physical keys do. Attempting to emulate the size
and positioning of a hardware keyboard also results in the onscreen one taking up a large
fraction of the screen, defeating the benefit of having a large screen for more visual tasks in
the first place.

Instead, we turn to a method of typing that users are accustomed to on smooth
touchscreens: two-thumb typing, as is often done on touchscreen phones. By splitting the
keyboard in half and placing each half in opposite bottom corners, users are free to hold the
device and type with their thumbs at the same time. This gives the tablet back some of its
portability. Splitting the keyboard and reducing its size also blocks less of the screen and
leaves more space for visual content. A well-known example
of this splitting is the KALQ keyboard (Oulasvirta et al., 2013), created by researchers at the
University of St Andrews, the Max Planck Institute for Informatics and Montana Tech. KALQ
is named after the layout of its keys, as the keyboard does not use a QWERTY layout but
one chosen based on thumb movement and letter frequency. This seems like an ideal
solution to the problems with tablet typing; however, there is a reason that other keyboard
layouts have not taken off and become as popular and widely used, even when they are
more efficient. It is simply not in a user's interest to take the time to learn to type with a
whole new arrangement of keys, even if it will eventually improve their text input efficiency,
as in the case of the KALQ keyboard. It requires practice and frequent use to become
familiar with the layout. KALQ is also subject to the common issues that arise when using
any kind of split tablet keyboard.

The basis of this project is the aforementioned “Typing on Split Keyboards with Peripheral
Vision” paper by Lu et al. (2019). The paper discusses how typing with split keyboards
requires users to alternate their gaze between the text output area and each half of the split
keyboard, which causes eye and neck fatigue and reduces typing speed. Their research
tested the hypothesis that if users were encouraged to type with their eyes focused on the
text output area rather than on the keyboard itself, their typing speed and accuracy would
improve.

Their research involved their own design of a split keyboard, GlanceType, and numerous
user studies. In one such study, they analysed both the speed and accuracy of different
modes of viewing whilst typing: Eyes-on, Peripheral, and No-keyboard. Eyes-on meant that
users looked at each key they wanted to type as they typed it, switching between the text
output field and the keyboard. Peripheral meant that users were asked to look only at the
text output field, using their peripheral vision to see the key locations and eliminating the
switching back and forth. No-keyboard mode hid the keys from view, meaning users had to
rely on their muscle memory and spatial memory of where the keys were. This gave them a
range of data showing the spread of touch points for each typing method. It validated their
expectation that Peripheral typing would be much more accurate than No-keyboard typing,
although users could still type in roughly the correct positions with No-keyboard typing,
usually just with a slight shift in positioning, as seen in the following mapping of collated
touch points:

[Image from Lu, Yiqin, Chun Yu, Shuyi Fan, Xiaojun Bi, and Yuanchun Shi. 2019. “Typing on Split Keyboards
with Peripheral Vision.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems,
200:1–200:12. CHI ’19. New York, NY, USA: ACM]

They then conducted a different experiment that used eye-tracking software and measured
input speed, asking users to perform similarly to the first experiment with the Eyes-on and
Peripheral modes of typing. Peripheral mode greatly overtook Eyes-on mode in terms of
words per minute, a speed increase of 27.6%, because users did not have to locate each
key with their eyes before pressing it; instead they relied on a combination of muscle
memory and peripheral vision. When it came to typing errors, two kinds of error were taken
into consideration, corrected errors and uncorrected errors, both of which were included in
the error calculation. In Eyes-on typing, corrected errors were 1.3% and uncorrected errors
were 0.2%. In Peripheral typing, corrected errors were 4.7% and uncorrected errors were
0.4%. This shows that typing in Peripheral mode is slightly less accurate; however, a decent
language model working with the keyboard input should allow this to be corrected, even
considering the potential noise in the user’s touch points on the screen.

The research concludes that the peripheral typing method overall has the combined benefits
of not requiring as much (if any) gaze shifting of the user, whilst still maintaining much more
accuracy with typing than not being able to see the keys at all. The question remains,
however, how to further encourage users to use their peripheral vision and discourage them
from using their direct vision.

2.2 Capabilities of Peripheral Vision

Naturally, to create a keyboard that works with a user’s peripheral vision we first have to
understand all we can about what can actually be detected in a peripheral view, and also
what can’t.

For ease of understanding this will be broken down into categories: colour (2.2.1), detail
(2.2.2), and motion (2.2.3).

2.2.1 Colour
It is a common misconception that colour cannot be accurately detected with a viewer’s
peripheral vision. Tyler (2015) explains that this is not the case, stating that even though
online encyclopaedias such as Wikipedia claim it to be true, much evidence exists to the
contrary. He presents various images as examples, each of which demonstrates that even
when the size of the coloured objects is varied it is still possible to detect and differentiate
between them in a viewer’s periphery.

This knowledge of colour was used to create many of the potential designs for the
experiment. If colour can be detected peripherally, it may be effective at differentiating
between letters or locations on the keyboard, provided the colours have enough contrast.

2.2.2 Detail
A significant consideration when formulating the designs was the level of detail that can be
detected by peripheral vision. Hitzel (2015) explains that information in a person’s peripheral
vision is less accurate than information they are looking at directly (foveal vision), but that
peripheral vision allows a wider array of information to be taken in. She notes that fixating on
an object provides high-quality information about that object, but much of the peripheral
information is lost. Focusing on the edge of an object means that a large amount of
information can still be gathered from the object, as well as a substantial amount from the
periphery. Not focusing on any specific object allows less information to be processed about
individual objects, but a larger range of information can be gathered from the peripheral
view.

This is illustrated in the following diagram:

[Image from Elena Hitzel, 2015, Effects of Peripheral Vision on Eye Movements: A Virtual Reality Study on Gaze
Allocation in Naturalistic Tasks, Springer Fachmedien Wiesbaden]

Hitzel’s work relates to peripheral vision in virtual reality, but the same concepts can be
applied to this keyboard project. Whilst focused on the text output area, users can still
establish a range of information from their peripheral field of view. This was taken into
consideration when designing the keyboard styles to be evaluated. Because users cannot
make out detailed information with their peripheral gaze, the keyboard’s keys should not
contain small details or be information heavy; instead they should give users all the
information they need to press each key without diverting their main line of vision.

2.2.3 Motion
A way of doing this is suggested by the work of To et al (2011). Their research is essentially
into what can be detected by each range of peripheral vision, as shown in the following
diagrams (The angle of the peripheral keyboard on tablets being approximately 30° making it
between near peripheral and mid-peripheral):

[Image “Peripheral vision” by Zyzwv99, made available under Creative Commons Attribution-Share Alike 4.0
International license]

[Image “Field of view” by Zyzwv99, made available under Creative Commons Attribution-Share Alike 3.0
Unported license]

They did this by conducting a series of experiments that helped determine which angles
within the peripheral field can detect information. Their research suggests that objects that
move or change in some way can be detected more easily across the whole scope of
peripheral vision. This provided some guidance on what form the keyboard design should
take: a rapid change in information could be more distracting than helpful, drawing the
user’s attention towards it instead of encouraging it to stay away, or it could improve the
design by making the keys more recognisable in the viewer’s periphery. Having features of
the keys change in a subtle way could make the user more aware of their locations without
drawing focus from the text output area. The research also highlights that a user may
struggle to read text using peripheral vision, something that is also emphasised by Husk
and Yu (2017) in their paper on letter recognition in patients who have lost their foveal
vision. Both of these assert the need to utilise shape recognition of letters, rather than
having users attempt to actually read the letters from their periphery.

2.3 Peripheral Technologies

Keeping the user’s focus on the task at hand, rather than having them switch their focus
between the task they are trying to accomplish and the technology that allows them to do it,
draws from the concept of ubiquitous computing. Ubiquitous computing comes from ideas
proposed by Mark Weiser (1991) and his vision of the progress of computing, where
technology blends into the background of everyday tasks and the user’s focus. This idea of
peripheral technology that blends into the background was the basis for the “ShuffleBoard”,
created by Edge et al.

ShuffleBoard is a piece of hardware designed to aid communication and the tracking of
project progress in the workplace. Tokens are placed on an area under the monitor; users
glance at them periodically and add to them when required, allowing them to share their own
status and see the activity of other team members without having to communicate directly.
The work also discusses peripheral tasks more generally, stating that any peripheral
interaction can remain peripheral as long as the workload it causes does not require so
much attention and so many resources that it becomes the main focus of the user’s
attention. This draws on Weiser’s idea that technology does not need to be focused on
directly in order to be used and interacted with.

ShuffleBoard was investigated with regard to peripheral interaction with technology, not
necessarily peripheral vision, but it still carries some weight in terms of how a user’s focus
determines their productivity. If their focus is constantly switching they will be less focused
on the task at hand, but productivity can be increased if they remain focused on that task
and use their peripheral awareness to gain additional information that assists them whilst
working. ShuffleBoard was designed to reduce the need to interrupt work, and the finding
that shifting focus can reduce working efficiency is applicable to the Lu et al. research,
where peripheral typing gave the optimal typing speed and accuracy.

This text also expresses a need for more peripheral technologies in a climate where much of
the technology we interact with is designed with the direct intent of holding the user’s
constant focus, discouraging the use of peripheral attention. The idea that technology should
“blend into the background” of our daily use is paramount, and this keyboard will allow typing
on tablets to do exactly that, letting the user’s focus rest on what they are typing and
composing rather than shifting between that and the means of text entry.

2.4 Summary of Background and Related Work

After reading and analysing the background and related work, the information was used to
create the designs that would best support peripheral typing. As stated above, the use of
colour, the use of motion, and the amount of detail each key should feature all play an
important role in what the user can see whilst typing. These considerations led to the
conclusion that the experiment should feature three designs, each based on one of these
considerations, to determine which elements were most effective for typing.

3 Software Specification

3.1 Problem to Be Solved

The original specification for this project was to develop a prototype app that blurs the
keyboard to remove the letters, so that users are aware of the keys' locations but are not
encouraged to look at them directly.

3.2 Arriving at Specification

The basic specification for this software was arrived at through analysis of the standard
features and capabilities of a software keyboard: namely the keyboard buttons themselves,
the text output that results from the buttons being pressed, the letter checker for typing
verification, and the spelling corrector that corrects common mistypes. All of these must be
responsive and work quickly to ensure that the user does not notice any kind of delay that
might distract them from the task they are trying to accomplish.

The issue of encouraging peripheral typing is another matter. A significant amount of
research was carried out into the capabilities and limitations of human peripheral vision, and
a combination of the strongest capabilities was collated to form potential keyboard designs
that would then form the basis of a user experiment. The purpose of this experiment was to
determine which style provided the best speed and accuracy when typing, as well as which
one users generally found easiest and most natural to use.

The results of this experiment were then analysed to form a final keyboard design. This did
not necessarily mean that one specific design was judged better than the others and then
chosen; rather, the experiment provided insight into which elements of each design were
most beneficial to the goal of the project.

The final design was an optimised version of the design that performed best in the
experiment, adding features that are standard to keyboard typing but were not used for the
experiment, such as a cursor that indicates where the next character will be typed and that
can be moved by the user. Further typing capabilities were also added, namely a numbers
menu and an additional characters menu, both of which appear in the same area as the
keyboard. The choice was made not to apply the peripheral design to these keys, on the
assumption that if the user is typing a character they would not normally use in a word or
phrase (i.e. anything that is not a letter of the standard English alphabet, “,” or “.”) they would
need to look directly at the keys anyway. These features were not implemented in the
prototypes used for the experiment; this was to avoid any potential distractions that might be
experienced by participants and affect the results.

The result was a final design with the full functionality that would be expected of a keyboard,
a text output area, and a design optimised for encouraging peripheral typing in its users,
thereby reducing the need for a constant shift of focus between the keys and allowing
complete focus on the task at hand.

3.3 Specification

3.3.1 Functional Requirements

The functional requirements for the final design are as follows:

1 The basic requirements for core functionality are:

1.1 User can type on keys and the correct characters appear in the text output area
1.2 User can press backspace and erase the last typed character
1.3 Shift key can be pressed by the user, and the next key pressed will output the upper case of the
corresponding letter
1.4 Simple error correction of each word

2 Experiment functionality

2.1 All basic requirements
2.2 Three designs, each evaluating one aspect of assisting peripheral vision:
2.2.1 Dot Buttons
2.2.2 Coloured Keys
2.2.3 Blurred Keys
2.3 Tracking of participants' touch coordinates and times
2.4 Tracking of corrected and uncorrected errors
2.5 Displaying phrases to be typed above the text output area

3 Final requirements

3.1 All basic requirements
3.2 Optimal features carried forward from the designs evaluated in requirement 2
3.3 Caret indicates the insertion point of text and is visible on screen
3.4 Caret can be moved in the text by the user
3.5 Numbers and additional characters can be typed

3.3.2 Non-Functional Requirements

• Cross-platform compatibility and portability
• Quick response time (67 words per minute has been recorded as the inviscid entry
rate for typing on mobiles (Kristensson & Vertanen, 2014), i.e. the point at which the
user themselves, and not the software, is the only thing slowing down text entry. The
keyboard will have to allow for this speed of text entry without any delay.)
• Usability (design and user interface should be intuitive)
• Robustness (system should be able to handle any unexpected input or unexpected
termination)
• Instructions for setup/using the keyboard

4 System Design

4.1 Design of Project

The design of this project is rooted in the basic split keyboard layout that other developers
have created to improve upon the keyboard design that is standard with tablets, with the
added benefit of reducing the reliance on looking between each keyboard half and the text
output area.

When designing this project, the first consideration was how to create a comfortable
keyboard that functioned the same as a standard split one would. Once this was completed,
research into the capabilities of human peripheral vision was used to create a number of
designs that would optimally assist with using peripheral vision whilst typing.

A sample of these designs was then presented to participants in an experiment to determine
which style provided the optimal speed/accuracy payoff, and which elements were ideal to
be carried forward to the final design.

The final design was then developed from the analysis of the experiment results, which also
included additional features for use.

4.2 Design Process

The first step was ensuring all the keyboard requirements were met and testing them in
terms of functionality. Upon testing, typing with the initial layout proved difficult for phrases
containing the letters “t”, “g”, and “v”. This is due to the positioning of the user’s thumbs in
relation to the key locations. If the user types a letter on the left-hand side of the keyboard
and then one in the middle, they will likely use their right thumb for the key in the middle, as
they can position it before they have finished typing the previous letter. The same is true for
the opposite side: they will use the opposite thumb to reach the centre. This is illustrated
well by the two three-letter words “cat” and “cot”.

Figure 1: Letter Spacing

The “t” of cat will be typed with the right hand as the left is busy typing the “a”. On the other
hand, if the right hand is typing the “o” of cot, the left is free to type the “t”.

For this reason, the design choice was made to repeat the centre column of the keyboard on
each side, as seen in the image above.

Additionally, one of the basic requirements of the keyboard was determined to be a word
correction algorithm that would correct typed words. For the experiment, an algorithm was
developed that was based on finding the Levenshtein distance (or edit distance) between
two words. The Levenshtein distance is how many steps it takes to transform one word into
another using substitution, insertion, and deletion of letters. For simplicity, only substitutions
were checked. This accounts for the most common kind of typing error on touch screen
keyboards; rollover typing errors, where one letter is pressed before the one prior to it in the
word has been typed. The algorithm was used for logging how many mistypes were made
within words during the experiment, then a more thorough version of the algorithm was
modified and used for the final design.

Once the keyboard was working and the additional styles were decided on, the experiment
was created to determine the optimal style of keys.

4.3 Experiment Development

Designs based on prior research into peripheral vision were created. While many were
considered, the final three that were used in the experiment were what was believed to
present a good range of which areas of peripheral vision could be optimised to benefit the
final design.

The experiment environment featured a number of phrases, contained in a text file, that the
users would type with each style of keyboard. Each style had three phrases to be typed,
with the exception of the practice phrases typed on the standard split keyboard, which were
used at the start to get participants used to typing with a split keyboard and to get a feel for
the size and weight of the tablet device being used.

The phrases to be typed were selected from a compiled list of memorable phrases from the
Enron mobile phrase set, chosen because they were reasonable phrases that participants
would be familiar with typing. They were all between 17 and 44 characters long, a
reasonable size for a short to medium length sentence.

In the experiment a number of factors were recorded and monitored. When participants first
accepted the terms and conditions they were assigned a random number as their user ID;
this was used to differentiate between individuals whilst keeping them anonymous. It was
stored as a session variable to be used throughout.

Once they had accepted the terms and conditions they were presented with the standard
split keyboard with no styles applied, and given two test phrases to type to get used to the
size and shape of the keyboard. At this stage nothing was recorded.
When participants had finished the practice phrases, they were instructed to type with their
eyes on the text output area as much as possible and to keep the keyboard in their
peripheral view. Each participant was presented with the three styles in a randomly
assigned order, balanced so that an even number of participants received each ordering.
This prevented one style from always being the final style typed with, which would have
made that style appear better, as participants get faster and more accurate the more they
type.

Each time the participant touched the screen, data was recorded to measure their accuracy
and speed, as well as to identify them, the keyboard style applied, and the phrase they were
typing. This was done by sending the data for every touch via an XMLHttpRequest to a PHP
file on the server, which processed it.

As they typed, the number of manual corrections made using the backspace was recorded,
as well as the number of times the word correction algorithm corrected a word. When they
finished that

particular phrase, this count was processed in the same way as the touch data, the
information being sent to a PHP file which then processed it in the necessary way.

The original plan was to store the experiment data locally on the device being used in the
experiment, but this was ruled out due to uncertainty about the capabilities of the device.
Storing the data on the server instead meant extra steps were required to record it, but it
ensured the data remained safe in the event of any corruption or damage to the device. The
only requirement of working this way was to provide the device with a reliable internet
connection, which was needed anyway to access the keyboard.

The experiment featured the following designs:

4.3.1 Dot Buttons

Figure 2: Dot Buttons Style


This key design completely removes the user’s ability to read the letters on the keys. It is a
bolder choice than the others, as the letters are not visible at all. Because of this, it was
decided that the letters could be revealed by tilting the device to a certain angle. Overall, it
was expected that this design would result in more errors than the others.

It reflects the idea that users cannot perceive small details such as letters in their peripheral
vision, only rough shapes. The idea was that users could better differentiate between keys if
they were rounded, rather than laid out in the grid-like arrangement of square keys, which
tend to be harder to pick out.

Colour was also used here to highlight the home keys, the group of keys where the fingers
are intended to rest whilst touch typing. The idea was that users would subconsciously know
which keys these were and use them as a kind of anchor to work out the surrounding keys
without having to look, similar to how the home keys are used on a physical keyboard.

4.3.2 Coloured Keys

Figure 3: Coloured Keys Style


The purpose of this design was to use the fact that colour appears vivid in peripheral vision
to help users differentiate between keys. Deliberately bright and contrasting colours were
chosen to use this to the keyboard’s advantage. These keys have the letters visible on
them, though the keys themselves and the letters were smaller than most people would be
used to. This was to make the letters harder to read directly, as well as to give more
definition to where each key was.

4.3.3 Blurred Keys

Figure 3: Blurred Keys Style


This design was also intended to reduce the user’s ability to see the keys. The blurring
would fade in and out as the user was typing, independently of what they did (unlike the dot
buttons, whose reveal was triggered by tilting the device). It also relied on the idea that
objects that change visually in some way are more easily picked up in the periphery than
those that are static. The shifting would limit the time that users could see the keys clearly,
further encouraging them to rely on their peripheral vision. However, the shape of each
letter was still visible even when the keys were blurred, meaning users could tell which key
they were pressing.

4.4 Final Design Based on Results

This final design takes the shape and size of the dot buttons, which increased accuracy
whilst typing, and combines it with the colouring of the coloured keys, which increased user
awareness of the actual letter locations (see section 7.2 for the experiment). This combats
the issue of the size of the letters on the coloured keys, whilst also removing the issue of the
letters being completely hidden.

When at a comfortable typing angle the letters will appear slightly hidden, like so:

Figure 4: Final Design (without tilt)

Upon tilting at an angle, they will appear black instead:

Figure 4: Final Design (with tilt)

This allows users to be aware of the letters’ general shapes at a glance, emulating the
same quick locating that the blurred keys and coloured keys provided. If they need a better
look at the letters on the keys, the device can be tilted to turn them black. This takes the
tilt-to-reveal element that was present in the original dot buttons design, but in a less
extreme form.

The final design also features a moveable cursor, represented on screen as a “|”. It can be
moved using arrow keys located slightly above the keyboard. These arrows move the cursor
left and right and allow the user to edit anywhere in the string they have typed. They were
positioned above the keyboard because having to reach up and drag the cursor with a
finger is actually less practical, as it requires users to let go of the corner grip they have on
the device. The whole idea of splitting the keyboard is that users are able to maintain this
more secure grip, and having to reach up with an index finger breaks away from this stable
positioning. Having these buttons therefore allows the cursor to be repositioned without
losing the benefits of the grip.

Figure 5: Number and additional characters

The extra characters, such as numbers and punctuation, were added at the end of the
design process. As these buttons are located or accessed differently on different devices,
there is no set layout that should be followed for familiarity in the way the QWERTY layout
is. The decision was made to keep the numbers and additional punctuation characters
separate from the main keyboard, unlike a standard hardware keyboard, and instead have it
behave more like a mobile keyboard, where these elements appear at the press of a key.
Rather than having the keyboard itself change into a numbers and symbols keyboard, as a
standard tablet or mobile device would do, numbers and additional characters are placed in
separate panels that appear over the keyboard when the corresponding button is pressed.
This avoids any potential confusion that might occur if (for example) the numbers were split
down the middle in the same way the QWERTY keyboard is. As this is fundamentally a
prototype keyboard, not all punctuation characters, special characters, emojis, etc. have
been included; the characters included are mostly those that would appear on the top row
of a physical keyboard. Where appropriate, a space is automatically added after punctuation
is typed, ready for the next word.

For the final design the word correction algorithm was modified so that if the typed word
does not have a close enough match within the dictionary file it is not corrected. This covers
the case of the user typing a word that is correct but simply does not exist within the file.
The algorithm also checks whether the first letter of the word is capitalised and, if so,
capitalises the corrected word in turn. In the experiment these checks were not needed, as
the input could be controlled so that users were only required to type words that featured in
the dictionary file, and elements such as correct grammar were not being measured.

5 Design and Implementation

The code for the project was written in HTML, CSS, JavaScript, and PHP. Initially the
system was planned to be built using native code, in order to make use of external
eye-tracking software so that an experiment from the Typing on Split Keyboards with
Peripheral Vision paper (Lu et al., 2019) could be replicated and built upon in the design.
Unfortunately the eye-tracking software could not be obtained, and without it, it made more
sense to create the project as a web app instead of a native one, which posed no major
issues during development.

5.1 HTML

HTML was used to create the main structure of the project. It was chosen because of its
simplicity and its compatibility with other languages such as JavaScript and PHP. HTML
was used to designate the area where text would be output, and to create the buttons that
form the keys to be pressed.
To identify each HTML element that would be interacted with, each element was assigned a
unique ID. This allowed specific elements to be referenced easily when they needed to be
manipulated by JavaScript code or CSS.
For each key, the id assigned was the letter it represents, with the exception of the repeated
centre keys, which used the format “letterl” or “letterr” depending on which side of the
keyboard they were positioned on. This allowed the same JavaScript code to be triggered
each time a key was pressed, passing the unique id as a variable to be processed and
typed into the output area. For example:

<div class = "left">
<button id="q" ontouchstart="typeLetter(this.id)"...>q</button>
<button id="w" ontouchstart="typeLetter(this.id)"...>w</button>
<button id="e" ontouchstart="typeLetter(this.id)"...>e</button>
<button id="r" ontouchstart="typeLetter(this.id)"...>r</button>
<button id="tl" ontouchstart="typeLetter(this.id)"...>t</button>
</div>

This was done slightly differently for non-alphabetical characters, where instead of using the
actual character as the id, its ASCII code was used. When this is printed as output it simply
appears as the intended character.
ontouchstart was used instead of onclick to better track the touch accuracy of users when
typing. This made more sense as users would be touching the screen with their fingers
rather than clicking accurately with a mouse.
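
As an illustration of this id scheme, below is a minimal sketch of how a key's id could be resolved to the character to type. The function name and the pattern checks are assumptions; the report does not show this step directly.

// Minimal sketch (assumed name): resolve a key's id to the character it types.
// Letter keys use the letter itself, repeated centre keys add an "l"/"r" suffix,
// and non-alphabetical keys are assumed to store their ASCII code as the id.
function resolveKeyId(id) {
    if (/^[a-z]$/.test(id)) {
        return id;                                     // plain letter key, e.g. "q"
    }
    if (/^[a-z][lr]$/.test(id)) {
        return id.charAt(0);                           // repeated centre key, e.g. "tl" -> "t"
    }
    if (/^\d+$/.test(id)) {
        return String.fromCharCode(parseInt(id, 10));  // e.g. "44" -> ","
    }
    return id;                                         // fall back to the raw id
}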

A similar format was used for functional non-letter keys, such as the cursor buttons, where
the id was either “left” or “right” depending on the direction intended:

<button id="left" ontouchstart="moveCursor('left')"...>&#9668</button>
<button id="right" ontouchstart="moveCursor('right')"...>&#9658</button>

HTML classes were used to create the rows and columns of the keyboard buttons. Using
classes instead of ids meant that there could be multiple row divs in each right or left
column with the same CSS applied, keeping them in alignment. Each row has its div in the
class “row”, and within this row each button is in either the “left” class or the “right” class.

5.2 CSS

CSS was used to style the HTML to achieve the desired, user-friendly look. As appearance
was such an important part of this project’s design, the value of CSS cannot go
unacknowledged.

CSS was used to ensure that each key of the keyboard is at the correct position on the
screen. This position is close enough to the corners of the screen that all keys can be
comfortably pressed from a holding position, but not so close that users have to strain to
reach the innermost keys. The keys must also be far enough apart to be differentiated from
one another, but not so far apart that users press the whitespace between them instead of
the edge of a key. The experiment showed that typing was most accurate with circular
buttons, a shape achieved with the CSS border-radius property applied to every button
within the keyboard.

.keyb button{
border-radius: 50%;
background-color: white;
border: black;
}

The default colours of the keys are black and white rather than the chosen colours. This is
because a JavaScript function changes the style of the buttons when it detects a certain
change in the device's angle; this is the previously mentioned tilt function that discourages
users from simply looking at the letters and instead encourages them to use their peripheral
vision.

CSS also stops users accidentally selecting the text on the keys; if a button was long held,
the device would sometimes automatically select and highlight the text. This was combatted
using the following code, applied to the whole page:

*{
-webkit-user-select: none; /* Safari */
-ms-user-select: none; /* IE 10+ and Edge */
user-select: none; /* Standard syntax */
}

When the blurring was added to the keyboard for the experiment, this was also done in
CSS, as follows:

animation: bshift 9s infinite;
animation-direction: alternate;

The shift between focused and blurred was set to 9 seconds, which testing of the
experiment showed to be roughly the time it would take to type a phrase. Alternating the
animation meant that the keys would slowly become clear again if left long enough, so if
participants wanted to type only while the keys were clear they would have to wait another
9 seconds, and were thus further discouraged from doing so. Initially this was going to be
done in JavaScript, applying the blur when the user tilted the device, similar to how the dot
buttons reveal works. Unfortunately this was limited by the capabilities of the device being
used, as blurring the element with JavaScript did not work as intended. This actually worked
out for the better, as it meant that motion, and how it can be detected by peripheral vision,
could be evaluated as a feature of the keyboard.

5.3 JavaScript

JavaScript was used to add both basic and more advanced functionality to the project. It is
essentially what allows it to work as it should.

As mentioned in the HTML section, each key press triggers a JavaScript function that
outputs the correct letter or character in the text output area. This is done by first
determining which character has been selected; normally this is the same as the id, but if
the key pressed is one of the repeated central keys then the letter must be extracted from
the id before it can be output. If the shift flag has been set to true by pressing the shift key,
the next letter typed will be set to upper case, after which the shift flag is set back to false.
Once the function has determined which letter to output, it looks at the position of the cursor
and inserts the selected letter at that position. Within the body of text the cursor is treated as
a character, and when the text is processed the cursor is used to split the string and
determine where the typed character will go. This is due to a limitation of the HTML
Document Object Model, where an individual character cannot be treated as an object on
its own; it just meant that implementing the cursor and its movement was a little more tricky
(not impossible).
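
Below is a minimal sketch of a typing function along these lines. The element id ("output"), the flag name, and the exact insertion details are assumptions rather than the report's own code.

// Minimal sketch (assumed names) of the key-press handler described above.
var shiftFlag = false;

function typeLetter(id) {
    // Repeated centre keys such as "tl"/"tr" carry the letter in the first character.
    var letter = (id.length === 2) ? id.charAt(0) : id;

    if (shiftFlag) {
        letter = letter.toUpperCase();
        shiftFlag = false;                 // shift applies to one letter only
    }

    // The cursor "|" is stored as a character inside the output text, so the new
    // letter is inserted at the cursor position.
    var output = document.getElementById("output");   // assumed id of the text output area
    var text = output.textContent;
    var cursorPos = text.indexOf("|");
    output.textContent = text.slice(0, cursorPos) + letter + text.slice(cursorPos);
}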

Because the cursor is a character, when the user presses a button to move the cursor, the
string is effectively re-written with the cursor position shifted by one. While it is being
modified, the string is treated as an array of characters, which allows new characters to be
added regardless of the surrounding elements.

Pressing the backspace button triggers very similar code to the left-moving cursor, but
instead of swapping elements of the array, it removes the element to the left of the cursor
and replaces it with the cursor itself.
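
A minimal sketch of the cursor movement and backspace handling described above is shown here; the element id and function bodies are assumptions based on the description.

// Minimal sketch (assumed names): move the cursor "|" within the typed string.
function moveCursor(direction) {
    var output = document.getElementById("output");   // assumed id of the text output area
    var chars = output.textContent.split("");          // treat the string as an array
    var pos = chars.indexOf("|");

    if (direction === "left" && pos > 0) {
        // Swap the cursor with the character to its left.
        chars[pos] = chars[pos - 1];
        chars[pos - 1] = "|";
    } else if (direction === "right" && pos < chars.length - 1) {
        chars[pos] = chars[pos + 1];
        chars[pos + 1] = "|";
    }
    output.textContent = chars.join("");
}

// Backspace: remove the character to the left of the cursor.
function backspace() {
    var output = document.getElementById("output");
    var chars = output.textContent.split("");
    var pos = chars.indexOf("|");
    if (pos > 0) {
        chars.splice(pos - 1, 1);                      // the cursor shifts left into the gap
    }
    output.textContent = chars.join("");
}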

Pressing the space key triggers an additional function before the character typing function.
This function corrects the spelling of the previously typed word if it has been misspelled. The
same function is also triggered when punctuation is typed, followed automatically by a
space, as most other mobile keyboards do the same thing to allow faster, more correct
punctuation when typing.

To correct words after they have been typed, a modified version of the Levenshtein
algorithm was used. This algorithm is commonly used to determine the “distance” between
two strings by adding up the modifications needed to transform one string into the other:
insertion, deletion, and substitution. For simplicity, and to catch the most common types of
error, the decision was made to only look for substitutions. This means that rollover typing
errors (where key y is pressed and then key x, when the intended order was xy) are caught
and replaced with the correct word.
The correct word is determined by first comparing the typed word to all the words in a
dictionary text file that is arranged by frequency of use rather than alphabetically. If a match
is found, then the word is correct. If no match is found, the algorithm continues to the next
step: looking at the length of the word and comparing it to the length of each word in the
dictionary. Each dictionary word of the same length is compared to the typed word and the
differences in letters are counted. The word with the lowest number of differences (i.e. the
lowest edit distance) is deemed the most likely match and therefore replaces the typed
word.

function findDifferences(text, entry) {
    var count = 0;
    for (var i = 0; i < text.length; i++) {
        if (text.charAt(i) !== entry.charAt(i)) {
            count++;
        }
    }
    return count;
}

(Where text is the word being compared, and entry is the word from the dictionary file)

If the number of differences is equal to the length of the word (i.e. every letter would have to
be changed to transform it into a word in the dictionary), the word is not replaced. This is
because it cannot be verified that the chosen replacement is realistically the correct word; it
may also be the case that the word is correct but simply does not exist within the dictionary.
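
Putting these steps together, a minimal sketch of the correction routine might look as follows. The function name correctWord, the capitalisation handling, and the exact return values are assumptions; findDifferences is the function shown above.

// Minimal sketch (assumed names): pick the closest dictionary word by
// substitution-only edit distance, as described above.
function correctWord(typed, dictionary) {
    var word = typed.toLowerCase();
    if (dictionary.indexOf(word) !== -1) {
        return typed;                            // already a dictionary word
    }
    var best = null;
    var bestCount = Infinity;
    for (var i = 0; i < dictionary.length; i++) {
        var entry = dictionary[i];
        if (entry.length !== word.length) {
            continue;                            // only compare words of equal length
        }
        var count = findDifferences(word, entry);
        if (count < bestCount) {                 // dictionary is frequency ordered, so the
            bestCount = count;                   // first word with the lowest count wins ties
            best = entry;
        }
    }
    // If every letter would have to change (or nothing matched), leave the word alone.
    if (best === null || bestCount >= word.length) {
        return typed;
    }
    // Preserve a capitalised first letter, as in the final design.
    return (typed.charAt(0) !== typed.charAt(0).toLowerCase())
        ? best.charAt(0).toUpperCase() + best.slice(1)
        : best;
}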

Ideally the algorithm would have been more accurate, looking at all possible types of edit to
determine the true lowest edit distance, but the decision to check only one type of difference
drastically sped up the development of this area of the project. If time constraints had been
less of an issue, it would have been good to catch all kinds of typing errors, such as missing
letters, and to look at common misspellings of words.

The dictionary text file is loaded from the server using an XMLHttpRequest as soon as the
page is loaded, and is then parsed into an array. Doing this ensures that even if the device's
connection drops out while in use, the dictionary data is already loaded and can be used as
needed. The option of performing the comparison on the server and sending the correction
back to the page was considered, but any interruption to the connection would slow this
process down for users and make the keyboard feel slow and clunky to use.

The dictionary file itself was based on Dr. Dunlop’s teaching file, 77k.txt, which is a count of
the frequency of words taken from a large sample of newspapers. The file was retrieved with
his permission and modified to include new words, as well as exclude any irrelevant words
(such as unusual acronyms) that may have resulted in incorrect spelling corrections and
reduced the accuracy of the algorithm.

The same technique was used in the experiment code to load the phrases that participants
had to type. Having them in a text file on the server was ideal for occasions when the
phrases needed to be edited; unfortunately JavaScript cannot read or write local files, so
having the file on the server was a necessity.
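
A minimal sketch of this loading step is shown below; the file name, callback, and the assumption of one entry per line are illustrative, not taken from the report.

// Minimal sketch (assumed file name and format): load a text file from the server
// with XMLHttpRequest and parse it into an array of lines.
var dictionary = [];

function loadWordFile(path, onLoaded) {
    var request = new XMLHttpRequest();
    request.open("GET", path, true);
    request.onreadystatechange = function () {
        if (request.readyState === 4 && request.status === 200) {
            // Assumed: one entry per line, ordered by frequency of use.
            onLoaded(request.responseText.split("\n").map(function (line) {
                return line.trim();
            }));
        }
    };
    request.send();
}

// Assumed usage on page load:
loadWordFile("dictionary.txt", function (words) {
    dictionary = words;
});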

The function to detect the motion of the device is triggered automatically when the page is
loaded, the same as the function that loads the dictionary file and parses the data. It
measures the device's acceleration within an event handler to determine whether the device
is being tilted and, if so, at what angle.

var y = event.accelerationIncludingGravity.y,
z = event.accelerationIncludingGravity.z, pitch;
pitch = Math.atan(y / z) * 180 / Math.PI;
pitch = Math.round(pitch);

If the angle of tilt is greater than 30°, the keyboard buttons all appear coloured, with the font
coloured in a similar shade. When the device is tilted so that the angle falls below 30°, the
font colour fades to black to allow the user to read the letters more clearly.

The angle is based on the natural angle users hold the device at, as determined by the pre-
experiment research that was carried out.
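
A minimal sketch of how the pitch calculation above could be wired to the 30° threshold is given here; only the pitch calculation itself is taken from the report, while the event listener wrapper and the class name are assumptions (the .keyb selector matches the CSS shown earlier).

// Minimal sketch (assumed class name): toggle the key styling when the device
// pitch crosses the 30 degree threshold described above.
window.addEventListener("devicemotion", function (event) {
    var y = event.accelerationIncludingGravity.y,
        z = event.accelerationIncludingGravity.z;
    var pitch = Math.round(Math.atan(y / z) * 180 / Math.PI);

    var keys = document.querySelectorAll(".keyb button");
    for (var i = 0; i < keys.length; i++) {
        if (pitch > 30) {
            keys[i].classList.add("hidden-letters");     // coloured font, letters hidden
        } else {
            keys[i].classList.remove("hidden-letters");  // font fades to black
        }
    }
});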

When processing and analysing the results of the experiment, some way was needed of
extracting the touch point data and representing it graphically. As there was no pre-existing
software to do this, a simple page was written in a combination of JavaScript and PHP to
extract this data and display it in a way that could be easily analysed. A canvas the same
size as the device’s screen was created in HTML, and two JavaScript functions were written
to illustrate the data on it: one draws a rectangle the size of the entire keyboard in the same
position it would occupy on the screen, and the other takes the x and y coordinates
recorded in the experiment and draws where these were located within the keyboard. The
PHP fetches the x and y coordinates from the database on the server and, for each entry,
triggers the JavaScript function to draw a point at the location of the touch.
This page was created to test the accuracy of the touch tracking used in the experiment,
and was later adapted to display the experiment results.
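
A minimal sketch of the two drawing functions described above follows; the canvas id, colours, and point size are assumptions.

// Minimal sketch (assumed canvas id and geometry): draw the keyboard outline
// and plot recorded touch points on a canvas the size of the device screen.
var canvas = document.getElementById("touchCanvas");   // assumed id
var ctx = canvas.getContext("2d");

function drawKeyboardOutline(x, y, width, height) {
    ctx.strokeStyle = "black";
    ctx.strokeRect(x, y, width, height);                // outline where the keyboard sits
}

function drawTouchPoint(x, y) {
    ctx.fillStyle = "red";
    ctx.beginPath();
    ctx.arc(x, y, 3, 0, 2 * Math.PI);                   // small dot at the touch location
    ctx.fill();
}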

JavaScript was used to process the touches in real time during the experiment using a
touchstart event. When the touchstart event fired, the clientX and clientY of the touch were
assigned to variables and passed to a PHP file, which sent them to the database using
SQL. clientX and clientY were used because they give the coordinates of the touch within
the viewport rather than within the page.
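
A minimal sketch of this logging step is shown here; the endpoint name logTouch.php and the parameter names are assumptions, as the report only describes the mechanism.

// Minimal sketch (assumed endpoint and parameter names): send each touch's
// viewport coordinates to the server-side PHP script for logging.
document.addEventListener("touchstart", function (event) {
    var touch = event.touches[0];
    var x = Math.round(touch.clientX);
    var y = Math.round(touch.clientY);

    var request = new XMLHttpRequest();
    request.open("POST", "logTouch.php", true);          // assumed PHP file name
    request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    request.send("x=" + x + "&y=" + y + "&time=" + Date.now());
});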

5.4 PHP

PHP was used solely in the experiment to transfer the data to the database using SQL. The
PHP was an external file to which the data being processed was sent using an
XMLHttpRequest.

This method was used instead of local storage on the device because a large amount of
data was being collected, and extraction and storage on the device could not be
guaranteed.

Using PHP session variables also allowed the accept ID assigned to participants at the
beginning to be passed from page to page during the experiment. This session variable was
accessed whenever the data being processed needed to be attributed to an individual.

Each time the participant touched the keyboard, a number of elements were logged in a
table using SQL contained within the PHP file: the x coordinate of their touch on the screen,
the y coordinate of their touch on the screen, the time of the touch, the design ID, and the
phrase ID.

The PHP file then ran the following SQL query in order to save the data to the database:

INSERT INTO `experiment1` (`entryID`, `accept_id`, `design_id`, `phrase_id`,
`touch_location_x`, `touch_location_y`) VALUES ('$entryID', '$accid', '$style',
'$phrase', '$x', '$y');

As they typed, the number of times their words were corrected by the correction algorithm
was recorded, as well as the number of times they pressed the backspace. This was logged
in another table, along with the accept ID, the design ID, and the phrase ID.

This data was processed in the same way as each touch: it was sent to the PHP file on the server using an XMLHttpRequest, which then executed the query:

INSERT INTO `corrections_exp1` (`entryID`, `accept_id`, `design_id`, `phrase_id`,
`alg_corrections`, `manual_corrections`)
VALUES ('$entryID', '$accid', '$dID', '$pID', '$alg', '$backspace');

When typing with every style was finished, participants were asked their preferred style and why, along with their hand size. This was again recorded in a table with the accept ID.

6 Verification and Validation

6.1 Ensuring project satisfies specification

To ensure the specification was being met, a sprint methodology was used. In each sprint a set number of requirements was implemented to satisfy the conditions needed for each phase of the keyboard to work. This also helped to produce modular and easy-to-read code.

6.2 Test Procedures

6.2.1 During Development


During development, each key was tested to ensure it performed as intended (i.e. typed the correct letter).
The correction algorithm was tested separately, before it was integrated into the word correction function, to confirm that it found the closest match to what was typed. This was done using a simple text input box that compared the input to the dictionary and output the closest match according to the algorithm, like so:

[Screenshots: test cases with valid, invalid, and erroneous input]

The algorithm was thoroughly tested with data that was valid, invalid, and erroneous, as seen above. It handled the data as expected and therefore works as required by the correction function.
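A minimal sketch of such a test harness is given below. It assumes the substitution-based matching described in the report and a small in-page dictionary; the element ids and dictionary contents are illustrative only, not the project's actual code.

const dictionary = ['hello', 'world', 'keyboard', 'peripheral'];

// Substitution-only distance between two words of equal length
function substitutionDistance(a, b) {
    if (a.length !== b.length) return Infinity;
    let diff = 0;
    for (let i = 0; i < a.length; i++) {
        if (a[i] !== b[i]) diff++;
    }
    return diff;
}

// Returns the dictionary word with the fewest differing letters
function closestMatch(word) {
    let best = null;
    let bestDistance = Infinity;
    for (const entry of dictionary) {
        const distance = substitutionDistance(word, entry);
        if (distance < bestDistance) {
            best = entry;
            bestDistance = distance;
        }
    }
    return best; // null when no same-length word exists
}

document.getElementById('testInput').addEventListener('input', function (event) {
    const match = closestMatch(event.target.value.toLowerCase());
    document.getElementById('testOutput').textContent = match || 'no match';
});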

The code that plots the touch points and the outline of the keyboard onto a canvas was used to test the accuracy of touch recording before the first experiment was conducted. It allowed the mapping of touches to the keys being pressed to be visualised, and the accuracy of the rounded touch coordinates to be gauged.

Figure 6: Testing accuracy of coordinate mapping

This was to ensure that the recorded touch coordinates were an effective indicator of where the user had pressed in relation to the keys. From this image it is clear that, even though the buttons were pressed, none of the touches landed in the centre of the keys, which is what would be analysed when looking at the results of the experiment. A more accurate, modified version of this code was later used to display the results of the experiment.

6.2.2 After Development

With the final version of the project, further and more extensive testing was carried out to ensure that there were no performance-affecting bugs and to confirm the robustness of the keyboard. This was done using the testing framework gremlins.js to carry out “monkey testing”. The framework randomly taps, clicks, drags, and multitouches the page and reports any errors that occur as a result.
An example of some of the standard output in the console while using this framework was:

Figure 7: Console log while running gremlins.js

Figure 8: Screen while running gremlins.js

Running this multiple times on the project provided sufficient evidence that the system
performed as it should under stress, with a high amount of aggressive interactions. The
framework will halt itself if more than 10 errors occur, and if no errors occur it will halt after
around one minute.
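For reference, a run of this kind can be started from the browser console with a call along the following lines. This is a sketch assuming gremlins.js has already been loaded on the page and that the default gremlins and mogwais are used.

// Unleash a horde of random taps, clicks, drags and gestures on the current page
gremlins.createHorde().unleash();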

Gremlins.js was run 10 times in total on the keyboard, and each time no errors were reported or detected manually while watching it run. Over these runs 1955 touches (including multitouches and gestures) and 2168 movements (swipes and scrolls) were carried out.

More information on the results of this testing can be found in Appendix C.

As this project is focused more on user interaction and experimentation, the tests carried out are sufficient: they provide evidence that the project works as intended and can handle unexpected or potentially overwhelming interactions.

Originally it was planned to test the project on multiple tablet devices and verify the
performance on different operating systems and sizes of screens, but due to the outbreak of
COVID-19 this testing was not possible. This would have been evidence that the project was
portable.

7 Results and Evaluation

7.1 Final Outcome of Project

The final outcome of the project achieves all of the requirements that were decided upon. Through experimentation and analysis, a split keyboard that assists users with peripheral typing has been produced. It functions correctly as a keyboard should, and implements the features carried forward from the results of the experiment.

At the beginning of the project it was decided that two experiments would be conducted: one to decide the final features and design of the keyboard, and another to evaluate its effectiveness against a standard split keyboard. With the outbreak of COVID-19, only the first experiment could be conducted before lockdown was enforced. The experiment could not be carried out remotely, as each participant had to use the same device to maintain consistency. This meant that the final evaluation of the keyboard's performance could not be carried out, and the experiment that did take place became the main focus of the project.

7.2 Experiment

This experiment was conducted with 18 participants in total, and the results were analysed to determine the final design of the keyboard. The next few sections detail the experiment: the process, the information gathered, and the subsequent analysis and interpretation that led to the final design.

7.2.1 Intro to the Experiment

The aim of this experiment was to determine the optimal design of split tablet keyboard to achieve the ideal speed/accuracy trade-off when typing with peripheral vision, as well as to gather user feedback on each design.

Ethics approval for the experiment was granted by the Department of Computer and
Information Sciences Ethics Committee.

Having narrowed the designs down to three options to present to participants, the hypothesis was that each design would have its own strengths and weaknesses, that these would be reflected in the results, and that the results would determine which elements of each design worked and which did not. The experiment was performed by having participants sit in a quiet space and type with each keyboard using the provided tablet. Participants first familiarised themselves with the split layout of the keyboard in their own time, then proceeded with the actual experiment at their own pace.
Eighteen participants in total were recruited, all on the basis of having some experience of typing with an onscreen keyboard and of two-thumb typing. All participants gave consent for their results to be used in this report.

7.2.2 Materials

The main material used in the experiment was the software created to record each participant's touches, the time of each touch, the number of errors made during typing (both corrected and uncorrected), and their overall style preference and the reasons for it.

The phrases to be typed were taken from a list of memorable phrases from the Enron mobile phrase set, and each phrase was assigned a unique number to identify it during the experiment. The same tablet device was used for every participant to ensure consistency in performance: a Samsung Galaxy Note Pro SM-P900 (295.6 mm height, 204 mm width, 7.95 mm depth).

Participants were recruited by contacting them directly via email and social media. All were familiar with typing on a touch screen and with a QWERTY keyboard layout, and none suffered from any kind of reading/writing disability or uncorrected vision issues.

7.2.3 Method

Each participant was randomly assigned an order in which the keyboards would be presented, to ensure an even spread of orderings. This meant that each keyboard design was the last one typed with an equal number of times, avoiding the problem of users getting better at typing with the keyboard the longer they use it. Had this counterbalancing not been done, the keyboard participants typed with last would always appear to perform better. Each of the six possible orderings was given a number from 1 to 6, and these were assigned based on when participants were free to do the experiment.
Participants were asked to read both the participant information sheet and the briefing notes for the experiment. They were then asked if they had any questions, and assured that they could ask questions at any point during the experiment, take as many breaks as needed, or withdraw at any time without detriment.
Upon the experiment beginning, participants were shown the following screen:

Upon pressing the “I Agree” button, they were assigned a random number to be used as an ID. This meant that no name or personal information was saved, while still allowing differentiation between participants. This ID was saved at each point where data was recorded.

They were then presented with two practice phrases to type using the standard design of the split keyboard. During this time no information was recorded, and participants were encouraged to type however they felt comfortable and not to worry about how accurate or quick they were being. When participants were ready, they were asked to begin the recorded stage of the experiment, where all of their screen touches were recorded, along with the time and the corrected and uncorrected errors they made.
They were instructed to type as quickly and accurately as possible while trying to keep their eyes on the text output area and not the keyboard. If they spotted an error immediately they were instructed to correct it using the backspace, but the caret (cursor) could not be moved for editing, and they were asked not to backspace over large amounts of text (i.e. if they had finished a word and then realised there was an error, they should simply move on to the next one). They were also instructed not to worry about punctuation or non-alphabetical symbols.

Each participant was asked to type with the three keyboards, and each keyboard had three
phrases to be typed. Participants were encouraged to take a break between each round of
typing on the keyboard to ensure they did not experience fatigue.

When all three keyboards had been used, the participant was asked which keyboard they preferred, along with any feedback they had for each style and the reasons for their preference.

Participants' hand measurements were also taken, from the knuckle of the smallest finger to the tip of the thumb with the hand comfortably spread. This was in case there were any major outliers in accuracy as a result of the keys being unreachable by certain participants, although none complained of this being an issue.

Further experiment data can be found in Appendix A, and full experiment instructions for
participants can be found in Appendix B.

7.2.4 Data

7.2.4.1 Typing Errors

Figure 7: Average corrections per style (algorithm and manual using backspace) with error
bars.

Here the number of corrections made by the correction algorithm has been plotted against the number of times the user pressed the backspace. Participants were instructed to press the backspace if they noticed a mistake immediately after typing it, but otherwise to continue typing and let the algorithm correct the word. This gives both the number of errors participants corrected themselves and the number left for the algorithm to correct.

This data shows that the number of user-corrected errors (those noticed while the participant was typing the individual word) is highest for the dot buttons. This is to be expected, as users must know where each key is located without being able to easily glance down and see it; however, the fact that the number of algorithm corrections is roughly the same as for the other styles implies that participants were well aware when they made these errors, meaning they could be corrected more quickly.

7.2.4.2 Touch Accuracy

Processing the participant touch locations whilst they were typing provided the following
patterns for each layout:

Fig 8: Mapping of touches for Dot Buttons

Fig 9: Mapping of touches for Coloured Keys

Fig 10: Mapping of touches for Blurred Keys

7.2.4.3 Typing Speed

Figure 11: Average Characters Per Second for each style with error bars.

This data was collected by recording the time of each keyboard press; the mean typing speed was then calculated for each phrase, and these means were used to calculate the overall mean for each keyboard style, shown in Figure 11 together with the 95% confidence interval for each mean.
The results in this graph show that the dot buttons style is much slower than both the coloured keys (p≈0.002, pairwise t-test with Bonferroni correction of the significance level) and the blurring (p≈0.006). As these p-values are under the corrected significance level of 0.0167, the differences are significant. The comparison between the coloured keys and the blurring style gives p≈0.804, indicating no significant difference between the two.
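For clarity, the 0.0167 threshold is simply the standard 0.05 significance level divided across the three pairwise comparisons, as required by the Bonferroni correction:

\[ \alpha_{\text{per comparison}} = \frac{\alpha}{m} = \frac{0.05}{3} \approx 0.0167 \]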

7.2.4.4 User Preference

Figure 12: Total votes for each style


After trying all of the different typing styles, participants were asked their overall preference and why. The purpose of this was to determine which style was consciously easier for them to use and which features they found most useful.

This data shows a clear preference for the coloured keys style. When giving feedback, participants generally preferred having the option to easily see the letters on the keyboard without effort, which is why blurring and coloured keys are more popular than the dot buttons. However, most of those who voted for the coloured keys also admitted that the dots style forced them to type peripherally and therefore better met the requirements of the project. Participants also said that the coloured keys added a visual aid for key locations, and that having each column a distinct colour further helped in differentiating between keys when not focused on them. Some said that the lettering on the coloured keys was harder to read because it was smaller; for that reason it makes sense to combine the coloured keys' aid in differentiating each key with the dot buttons' feature of tilting the device to see the letters while not having them directly on screen.

Users also said that the highlighted home keys did not actually help them differentiate, as
many of them were not consciously aware of what letters were on the middle row.

One participant complained that the blurring effect was “migraine inducing”, which is a
concerning factor for usability, and suggests that the blurring style should be modified if
carried forward. The participant did not experience a migraine during the experiment but said
that if asked to type for a long period of time with the blurring one would likely be triggered.

7.2.5 Discussion

From all this analysis it is evident that each of the styles has various strengths and
weaknesses when it comes to assisting with peripheral typing.

In terms of typing errors, the coloured keys have the lowest number of algorithm corrections overall; this implies that participants were able to correct the letters they noticed were wrong before they had finished typing the word (see Figure 7).

Looking at just the algorithm corrections (and therefore the unnoticed errors), both dot buttons and coloured keys have fewer unnoticed errors than blurring, while blurring has the lowest number of manual corrections. From this we can deduce that with blurring a user is less likely to notice they have made a spelling error whilst typing, and is therefore less focused on what they are actually typing. This is most likely because the blurring happens outside the user's control, the animation potentially drawing their eyes towards the movement and away from the text output area. The research showing that peripheral vision is more aware of objects that change was intended to improve users' awareness of what they are typing, but it appears not to have been effective in this instance. Coloured keys perform the best in this respect, because fewer algorithm corrections and more manual corrections mean more noticed errors than unnoticed ones.

This again reinforces much of the prior research into peripheral technologies: if users have to switch their focus between the task they are carrying out and the means of doing it, they perform worse than if they were allowed to focus completely on the task. From this we can conclude that coloured keys are more effective at keeping the user's focus on the task of typing.

Looking at all of the mapping results in the touch accuracy section, we can see that on the left-hand side of the keyboard participants performed with generally better accuracy than on the right-hand side, where more scattering occurs. This may be because the text they were required to type and the text output area were both located on the left-hand side of the screen, which is where most people type from. This meant that, whilst keeping their eyes fixed on this area, the left-hand side of the keyboard would have been directly in their lower periphery, while the right-hand side would have been further out (see Figures 8, 9, 10).

Figure 8 is the mapping of the dot buttons. Looking at this figure, it appears that the majority of touches were located within the diameters of the keys, with a higher concentration close to the centres of the keys. This suggests that, while there was a greater number of corrected errors with this style of keyboard, participants were able to hit the keys themselves accurately, just not always the intended keys. This is most likely due to the letters not being shown on the keys, but participants expressed that this did encourage them to type using their peripheral vision more. Adding the extra step of tilting to view the letters discouraged participants from simply glancing down and reading the keys when considering their next key press, and instead required them to rely on the muscle memory of typing on other devices such as mobile phones, similar to how one would navigate an unfamiliar physical keyboard. Interestingly, looking at the presses over the lighter highlighted home keys, particularly on the right-hand side where the user was relying on their periphery more, these appear to be less accurate than those over the darker red keys. This reinforces the research on colours appearing more vivid in peripheral vision, as paler colours are harder to distinguish from the white background. The use of colour within the keyboard was addressed further with the coloured keys style, the accuracy results of which are shown in Figure 9.

Figure 9 is the mapping of the coloured keys. These keys were slightly smaller than in the other styles, closer to the standard keyboard but with only the colours changed. Compared to the dot buttons there is clearer scattering around the diameter of the buttons, indicating that the smaller keys were an issue for participants; it was expressed in the feedback that the extra space around the keys made it more difficult to type accurately. This tells us that the keys either need to be closer together or made larger to reduce the space between them. Despite this, the style has the lowest number of algorithm corrections (unnoticed errors), and participants expressed that it was their favourite to type with, as they could both easily see at a glance where each key was in each column and tell each section apart when using their peripheral vision. They also said it was the best style because the letters were displayed, although this did result in them relying less on their peripheral vision.

Figure 10 shows the spread of touches on the keyboard where the keys blur and unblur periodically. Almost all of the keys show a large spread of touches both inside and outside their diameters, with many keys having no presses in the centre at all. Combined with the analysis of the noticed and unnoticed errors, where unnoticed errors were higher than for any other style, this shows that of the three styles blurring is the least accurate to type with. Participants who said this was their preferred style explained that they preferred it because the letters were the most visible at a glance. While this supports the notion of glancing quickly down to orient oneself with the keyboard during typing, it does little to improve the accuracy of typing. Again, it reinforces the research showing that if users shift their focus between the task they are trying to accomplish and the means of carrying it out, the output of the task is of poorer quality as a result. There is noticeably more scattering around the spacebar with this style, whereas with the other styles the scattering is only below this key. This could be because the spacebar is less clearly defined and separated from the other keys, the blurring effect creating even less clarity along the edges of the buttons.

In terms of speed, both the coloured keys and the blurred keys performed the best (see Figure 11). It could be interpreted that the increased speed of the coloured and blurred keys is due to participants being able to make out the shape of the letters even when the keys are blurred, allowing them to quickly determine the location of the keys they are about to press. Accuracy is then reduced by the unclear edges of the keys and by the keyboard shifting in and out of focus.

This is further supported by the fact that, while the coloured keys have the same average typing speed, the accuracy of the touches is better and they have the lowest number of uncorrected errors. This reinforces the research from the Typing on Split Keyboards with Peripheral Vision paper (Lu et al, 2019), which observed that typing accuracy is sacrificed for faster typing speeds. The theory that users can quickly work out which letter they are typing next because they can see the letters at a glance is emphasised by the dot button style having the slowest average typing speed: being unable to recognise each key at a glance makes it harder for users to type quickly. Forcing the user to tilt the screen to see the letters slows them down significantly, but improves their accuracy, as seen in the touch mapping.

From this we can deduce that colour-separating the keys can improve accuracy in terms of both errors and locating the correct keys. The shape and size of the dot buttons allow for optimal location accuracy whilst typing, and the features that encourage peripheral typing are most effectively demonstrated by this style; however, not showing the letters at all does cause some issues for users who are less familiar with key locations. This issue is resolved by the blurring: while the switching between blurred and unblurred reduced the accuracy of the actual typing, the slightly visible shape of the letters was enough to aid peripheral vision at a glance, much like having the letters smaller yet still visible on the coloured keys.

Despite the blurring style being equally as fast as the coloured keys, its decreased accuracy and its greater number of unnoticed errors rule it out as an option. Ruling it out is also the safer choice, as user comfort is paramount and the suggestion that the blurring effect could induce a migraine is too concerning to gloss over.

This is why sampling the best features from each design has created the most effective keyboard for peripheral typing: each design has its own strengths that could be combined and optimised.

7.3 Final Evaluation of Design Created

Due to the outbreak of the COVID-19 virus and the subsequent isolation that followed, the final evaluation could not be completed. This evaluation would have been a controlled experiment and would have determined the keyboard's effectiveness as a whole, as well as providing valuable data on whether or not the objective of improving the speed and accuracy of users typing with peripheral vision was met.

In the original project plan, the final design of the keyboard was to be compared against a split keyboard without the design applied, as well as a standard non-split keyboard. This comparison would have monitored the same measures as the first experiment, namely speed and accuracy, and would have determined the effectiveness of the keyboard.

This was frustrating, as a portion of the development time was spent creating this experiment, as well as processing things like ethics approval and recruiting participants. This time could instead have been spent developing more features for the keyboard, perhaps further adapting the word correction algorithm as discussed earlier in the report.

Aside from this, the final design is based on both the experiment data and user feedback. It meets the project specification with regard to creating a prototype split keyboard that assists with peripheral typing, through a combination of discouraging users from looking at the keys and helping their periphery to distinguish between different keys. Therefore, it is fair to say that the project has largely been a success.

7.4 Lessons Learnt

The biggest lesson learnt from this project is that the results of experiments will not necessarily be distinct or obvious, and that drawing conclusions requires a great deal of interpretation of the data. These interpretations will often differ between people, which is why being able to justify and validate the arguments made is so important in situations like this.

Also, a skill learned specifically for this project was performing the analysis of the large amounts of data collected during the experiment. This was a critical task, as it provided the information required to create the final product.

Finally, it was learnt that even if you make a plan and stick to it, you can still be thrown completely unexpected curveballs, such as a global pandemic that prevents you from fully completing your evaluation.

8 Summary and Conclusion

8.1 Success of Objectives

Overall, this project has been highly successful in achieving its goal of creating a split keyboard that encourages peripheral typing on tablet devices.

The experiment had a sufficiently large and varied sample to limit bias in terms of gender, age, computer literacy, and so on, making it a valuable means of reaching a design conclusion. Once each participant got used to the keyboard, they found it useful and gave largely positive feedback, preferring at least one of the styles over the standard keyboard when encouraged to use their peripheral vision.

8.2 Issues that Occurred

Some of the problems that occurred were mainly down to the limitations of the tablet device used for the experiments, as some effects and features were simply not possible on it. These features worked on other mobile devices, but due to the capabilities of this device some JavaScript did not work as intended, and some features had to be adapted as a result.

There were also human limitations on the experiment. Ideally a wider range of styles would have been shown to participants in the original experiment to provide a wider array of preferences and features. This was not possible while keeping the comfort and wellbeing of the participants in mind, as typing for long periods of time, even with breaks, proved difficult for some people, mainly due to the size and weight of the tablet used in the experiment. Spending any more time on the experiment may also have started to affect the results, as participants could have experienced fatigue or eye strain from looking at the screen for so long, resulting in slower and less accurate typing. In addition, some participants complained that the device used to conduct the experiment was too heavy, so a smaller or lighter device would have been used had one been available.

As mentioned in the evaluation section, much of the evaluation could not be carried out due to the outbreak of the COVID-19 virus. The outbreak prevented any further experimentation, meaning the time spent preparing the second experiment was effectively wasted; it could instead have been spent on other areas of development or on a more in-depth analysis of the results of the experiment that did go ahead. The outbreak also limited testing, as other tablet devices could not be accessed to determine the effectiveness of the project on them.

8.3 Further and future development

8.3.1 Evaluation Experiment

The main area for future development would be to carry out the final evaluation experiment and determine whether the final design truly does facilitate peripheral typing better than a normal split keyboard.

This would provide evidence that this keyboard is a viable option for those who use a split
keyboard with their tablet device, and provide them with an alternative that would be proven
to reduce the eye and neck fatigue often experienced.

8.3.2 Word Correction Algorithm

The word correction algorithm could also be improved to include deletions and insertions, rather than just substitutions. This would create a more effective algorithm, as it would also catch missed letters and accidentally added letters.

Further, a means could be added of inserting words into the dictionary file when a word that is not included in the file continues to be typed, implying that it is a correct word.

Common mistypes and spelling errors could also be handled explicitly, rather than relying solely on the Levenshtein distance.
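A standard dynamic-programming sketch of the full Levenshtein distance, covering substitutions, insertions, and deletions, is shown below; this is illustrative code that such an extension could be built around, not the project's existing algorithm.

// Full Levenshtein distance between two strings
function levenshtein(a, b) {
    const rows = a.length + 1;
    const cols = b.length + 1;
    const d = Array.from({ length: rows }, () => new Array(cols).fill(0));

    for (let i = 0; i < rows; i++) d[i][0] = i; // deletions from a
    for (let j = 0; j < cols; j++) d[0][j] = j; // insertions into a

    for (let i = 1; i < rows; i++) {
        for (let j = 1; j < cols; j++) {
            const cost = a[i - 1] === b[j - 1] ? 0 : 1;
            d[i][j] = Math.min(
                d[i - 1][j] + 1,        // deletion
                d[i][j - 1] + 1,        // insertion
                d[i - 1][j - 1] + cost  // substitution
            );
        }
    }
    return d[rows - 1][cols - 1];
}

// The closest dictionary match is then the word with the smallest distance, e.g.
// dictionary.reduce((best, word) =>
//     levenshtein(typed, word) < levenshtein(typed, best) ? word : best);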

8.3.3 Further Additional Characters

More non-English characters could be added to the keyboard, as well as more punctuation characters. As noted in section 4.4, the characters currently present on the keyboard are only a sample to illustrate the positioning, and do not include things like emojis.

8.3.4 Creating Keyboard

Depending on the results of the final experiment, it would also be good to implement the keyboard design as a fully working keyboard that can be used with other apps. This would be done in native code and would allow for more potential in areas such as user customisation.

8.4 Final Conclusion

To conclude, the research leading to the experiment, the experiment itself, and the final design that was implemented have all been a great success.

Aside from the final evaluation experiment, there have been no major setbacks that could not be overcome, and the conclusion is that the project has achieved what it set out to do.

9 References
Edge, D., Blackwell, A.F., 2016, Peripheral Interaction: Challenges and Opportunities for HCI in
the Periphery of Attention, Chapter 4: Peripheral Tangible Interaction, Springer International
Publishing Switzerland

Hitzel, E., 2015, Effects of Peripheral Vision on Eye Movements: A Virtual Reality Study on Gaze Allocation in Naturalistic Tasks, Springer Fachmedien Wiesbaden

Husk, J.S., Yu, D., 2017, Learning to recognize letters in the periphery: Effects of repeated exposure, letter frequency, and letter complexity, Journal of Vision, 17(3):3. doi: https://doi.org/10.1167/17.3.3

Kristensson, P.O., Vertanen, K., 2014, The inviscid text entry rate and its application as a grand goal for mobile text entry. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services (MobileHCI '14), ACM, New York, NY, USA, 335-338. DOI: https://doi.org/10.1145/2628363.2628405

Laptops vs. Tablets: Pros and Cons, Lenovo, Available from: https://www.lenovo.com/gb/en/faqs/laptop-faqs/laptop-vs-tablet/ [Accessed 19 November 2019]

Lu, Y., Yu, C., Fan, S., Bi, X., Shi, Y., 2019, Typing on Split Keyboards with Peripheral Vision. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), ACM, New York, NY, USA, 200:1–200:12

Oulasvirta, A., Reichel, A., Li, W., Zhang, Y., Bachynskyi, M., Vertanen, K., Kristensson, P.O., 2013, Improving two-thumb text entry on touchscreen devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13), ACM, New York, NY, USA, 2765-2774. DOI: https://doi.org/10.1145/2470654.2481383

To, M.P.S., Regan, B.C., Wood, D., Mollon, J.D., 2011, Vision out of the corner of the eye, Vision Research, vol. 51, no. 1, pp. 203-214

Tyler, C.W., 2015, Peripheral Color Demo, i-Perception. doi: 10.1177/2041669515613671

Weiser, M., 1991, The Computer for the 21st Century, Scientific American Special Issue on Communications, Computers, and Networks, Available from: http://www.ubiq.com/hypertext/weiser/SciAmDraft3.html (accessed via https://web.archive.org/web/20141022035044/http://www.ubiq.com/hypertext/weiser/SciAmDraft3.html) [Accessed 16 November 2019]

10 Appendix A- Experiment Results Data

10.1 Sample of Touch Data

Sample of touch data from the experiment (full file included with code due to large size)

10.2 Sample of Errors Per Phrase

Sample of errors per phrase data from experiment (full file included with code due to large
size)

10.3 Participant Preference

Preference and hand size of all 18 participants

11 Appendix B- Experiment Further Details

11.1 Participant Instructions

11.2 Phrases Typed (Including Practice Phrases)

12 Appendix C – Detailed Test Results
Stress testing was performed using the framework gremlins.js. It displayed a console output
of its actions whilst running.

Sample of console output of gremlins.js (full file included with code due to large size)

13 Appendix D - User Guide

13.1 Prerequisites

Before running the “Peripheral Tablet Keyboard” project it is important to ensure the device
accessing it meets the following requirements:
• Is a touch screen enabled device
• Ideally has a screen size of 12.2 in (310 mm) diagonal
• Is Wi-Fi enabled
• Chrome version 80.0.3987.149 (or higher)

13.2 Accessing

To access the project simply go to the following address in Chrome:

https://devweb2019.cis.strath.ac.uk/~rnb16141/keyboard/final/finaldesign.html

Then, through the Chrome menu, add the page to the device's home screen as a web app.

Once a shortcut icon has appeared on the home screen, press this to open the keyboard
app.

13.3 Use

When the device is held by the bottom two corners in landscape orientation, the keyboard should be easily reachable with the user's thumbs.

Pressing either the numbers or the special characters button reveals a pop-up menu in the respective corner; this can be used at any time to type with, and closed by pressing the ‘x’ in the corner closest to the screen edge.

To type with the keyboard, hold the device at a comfortable distance and look directly at the
text output area with the keyboard in the periphery. To see the letters in more detail, tilt the
device backwards until the letters darken and are easier to see.

To move the cursor, use the blue arrow keys located above the keyboard on the left hand
side. This allows the cursor to be moved within the typed string. The cursor location
determines where the output from the keys will be typed.

Spelling is corrected when the system cannot find an exact match within the dictionary file: it chooses the closest matching word, i.e. the one requiring the fewest changes to match. If a typed word is incorrect but not replaced by the algorithm, it means the word does not have a close enough match within the dictionary file.

The shift key is automatically active when the user first starts typing, and can be pressed to capitalise the next letter typed. It does not change the letters shown on the keys, but types the correct case of letter in the output area.

