UI Design

Table of Contents

Introduction
Problem Statement
Scenario Map
Ideation
Digital Prototype
Conclusion
References
Table of Figures

Figure 1: Scenario Map
Figure 2: Prototype
Introduction

The significance of user interface (UI) design in the rapidly changing world of mobile technology is hard to overstate. As mobile devices become an essential part of daily life, the effectiveness and usability of touch-based interfaces play a crucial role in shaping the user experience. CSE2UI Assignment One, set by the Department of Computer Science and Information Technology, asks students to research and demonstrate a thorough understanding of UI design (Rustambek, 2023).

Problem Statement

The main issue this assignment addresses is how to make gesture keyboards on touchscreen mobile devices more functional. Gesture keyboards are a mode of input in which users enter text by tracing a path across the required letters on a virtual keyboard. For example, to spell the word "method", one might touch the 'm' key, slide through 'e', 't', 'h', 'o', and 'd', and then lift the finger. The system interprets this gesture and selects the most likely word, in this case "method", which is then typed.
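The decoding step just described can be sketched in code. The following is a minimal illustration only, not the algorithm a real gesture keyboard uses (production decoders model key geometry and path shape): it simply picks the dictionary word whose letters appear, in order, along the traced key sequence.

```python
def letters_in_order(word, trace):
    """True if every letter of `word` occurs in `trace` in order."""
    pos = 0
    for ch in word:
        pos = trace.find(ch, pos)
        if pos == -1:
            return False
        pos += 1
    return True

def decode(trace, dictionary):
    """Pick the longest dictionary word covered, in order, by the trace."""
    candidates = [w for w in dictionary if letters_in_order(w, trace)]
    return max(candidates, key=len) if candidates else None

# A trace from 'm' through neighbouring keys to 'd' resolves to "method".
print(decode("merthjod", ["met", "method", "mood", "herd"]))  # method
```

Even this toy version shows why decoding is ambiguous: the trace also covers "met", so the decoder must rank candidates, here simply by length.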

In typical settings, users routinely perform actions beyond text input, such as editing text (e.g., copy, paste, delete) or invoking application features (e.g., "save"). Traditionally, menus have made these tasks accessible. Implementing menus on touchscreen mobile devices, however, presents difficulties: users must tap menu items with their fingers, which is hard on small screens, and menus take up valuable screen space (Leno, 2021).

The central problem, then, is how to extend the gesture keyboard paradigm so that text entry and command activation are accommodated smoothly within a single system on touchscreen mobile devices. Exploring original solutions to this challenge is the focus of this project. Working through it involves building a scenario, generating creative design concepts, creating a digital prototype, recording a video walkthrough, and writing a thorough report. Each of these tasks helps students develop a comprehensive understanding of UI design principles and how to apply them to real-world problems.

Through this project, we aim to equip students with the knowledge and skills needed to address challenging UI design problems, encouraging creative solutions that improve the usability and functionality of touchscreen mobile devices.

Scenario Map

Link to Figma Whiteboard: https://www.figma.com/file/1W9jvN10IS5hrvR48nVSbM/Scenario-Map?type=whiteboard&node-id=0%3A1&t=AKB9zoJ6YVQ2wb5q-1

Figure 1: Scenario Map

Comments and Questions in Scenario Map:

1. User Action: The user starts the gesture keyboard on their touchscreen mobile device to enter text.

• Comment: At this point, the user is preparing to enter text with the gesture keyboard. How can we smoothly switch to command input if needed?

• Question: How can we distinguish command input gestures from standard text input gestures (Cheng, 2021)?

2. Text Entry: Using gesture input, the user traces individual letters to begin entering text.

• Comment: It is crucial to provide an easy-to-use text entry method.

• Question: Are there particular difficulties users may encounter when typing longer words or phrases?

3. Text Editing: The user encounters a scenario where they need to perform text editing tasks, such as copying, pasting, or undoing changes.

• Comment: Text editing is a common task, but it might require distinct gestures or actions to avoid confusion with text entry.

• Question: How can we incorporate text editing commands without disrupting the text entry flow?

4. Command Activation: While using the gesture keyboard, the user must activate a command such as "save".

• Comment: Activating commands should be simple and clearly distinct from entering text.

• Question: What action or gesture can unambiguously activate command input?

5. Feedback: The system delivers feedback by recognising gestures and carrying out the corresponding actions.

• Comment: Feedback is essential to let users know that their gestures have been recognised.

• Question: How can we ensure the user receives rapid, unambiguous feedback on their input?

6. Command Execution: The system executes the selected command, in this case "save".

• Comment: It is important that the system accurately executes the intended command.

• Question: How can we prevent unintentional command execution due to misinterpreted gestures?

7. User Experience: Throughout the scenario, maintaining a positive and efficient user experience is paramount.

• Comment: User experience should be at the forefront of our design considerations.

• Question: What steps can be taken to optimise the overall user experience when using the gesture keyboard for both text and command input?
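The flow the scenario walks through, text entry, a mode switch, command execution, and feedback at every step, can be sketched as a small state machine. The gesture names below ("double_tap" as the mode switch, already-decoded words as text gestures) are illustrative assumptions, not part of the assignment brief.

```python
TEXT, COMMAND = "text", "command"

class GestureKeyboard:
    """Toy model of the scenario: two modes, with feedback for every gesture."""

    def __init__(self):
        self.mode = TEXT
        self.buffer = []    # words entered in text mode
        self.feedback = []  # messages shown to the user

    def handle(self, gesture):
        if gesture == "double_tap":  # assumed mode-switch gesture
            self.mode = COMMAND if self.mode == TEXT else TEXT
            self.feedback.append(f"mode: {self.mode}")
        elif self.mode == TEXT:
            self.buffer.append(gesture)  # gesture already decoded to a word
            self.feedback.append(f"typed: {gesture}")
        else:
            self.feedback.append(f"executed: {gesture}")  # e.g. "save"

kb = GestureKeyboard()
for g in ["method", "double_tap", "save", "double_tap", "works"]:
    kb.handle(g)
print(kb.buffer)  # ['method', 'works']
```

Note how the feedback list answers scenario step 5: every gesture, whether text, mode switch, or command, produces an explicit acknowledgement.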

The scenario map graphically depicts the user's journey through the gesture keyboard interface, highlighting the key points where text input and command activation intersect. As stated in the assignment's objectives, this map serves as a starting point for our investigation of creative ways to extend the gesture keyboard paradigm.

Ideation:

Addressing the central question of this assignment, enabling command input on the gesture keyboard while preserving the integrity of text input, requires exploring and articulating innovative ideas. The following section outlines potential concepts for distinguishing text input from command input within the gesture keyboard paradigm.

1. Gesture Differentiation:

• Concept: Introduce a distinct gesture or motion pattern that signifies the start of command input. For instance, a double tap or a circular motion could indicate the transition from text input mode to command input mode (Mackamul, 2023).

• Advantages: By relying on natural movements that users can easily adapt to, this method reduces the possibility of confusion between text input and command input.

2. Gesture Triggers:

• Concept: Assign recognisable motions to frequently used commands. For example, a swipe upward could mean "save", while a swipe downward means "delete".

• Advantages: By connecting gestures with actions, users can activate commands without explicitly switching modes, streamlining the interaction.

3. Contextual Gestures:

• Concept: Use context-aware gestures that consider the user's current activity or application. The same gesture could activate different commands depending on whether the user is in a text editor, email client, or web browser.

• Advantages: This approach enhances the versatility of gesture commands, allowing users to perform context-specific actions effortlessly.

4. Gesture Combinations:

• Concept: Combine a series of gestures to execute complex commands. For instance, a double tap followed by a letter gesture could trigger a "save as" function (Reed, 2023).

• Advantages: Combinations give users access to a wide range of commands without cluttering the interface with numerous individual gestures.

5. Visual Feedback:

• Concept: Display visual cues on the screen to indicate which mode the gesture keyboard is in. In text input mode the virtual keyboard layout is visible, while command mode might display icons or symbols for common commands.

• Advantages: Visual feedback gives users clear indications of the current mode, reducing the likelihood of unintentional command execution.

6. Voice Commands:

• Concept: Incorporate voice recognition alongside gesture input. Users can trigger command input by speaking specific keywords or phrases, enhancing the versatility of the system (Jin, 2022).

• Advantages: Voice commands offer an alternative, hands-free method of input, allowing users to switch seamlessly between text and command input.

7. User Customization:

• Concept: Allow users to customise gestures and their associated commands, tailoring the gesture keyboard to their preferences and needs.

• Advantages: Personalisation empowers users to define their preferred command input methods, making the system highly adaptable.
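Two of the concepts above, gesture triggers and user customization, can be combined in a short sketch: a default table of triggers that each user may override. The gesture and command names are illustrative assumptions, not a proposed final mapping.

```python
# Default gesture triggers (concept: gesture triggers).
DEFAULT_TRIGGERS = {"swipe_up": "save", "swipe_down": "delete"}

class TriggerMap:
    """Gesture-trigger table with per-user overrides (concept: user customization)."""

    def __init__(self, overrides=None):
        self.table = dict(DEFAULT_TRIGGERS)
        self.table.update(overrides or {})  # user-defined remappings win

    def command_for(self, gesture):
        # None means the gesture is not a trigger and is treated as text input.
        return self.table.get(gesture)

custom = TriggerMap({"swipe_down": "undo"})  # a user remaps swipe_down
print(custom.command_for("swipe_up"))    # save
print(custom.command_for("swipe_down"))  # undo
print(custom.command_for("wiggle"))      # None
```

Falling back to "treat as text input" for unmapped gestures is what lets trigger gestures coexist with ordinary word traces without an explicit mode switch.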

Each of these concepts addresses the challenge of differentiating text input from command input within the gesture keyboard paradigm in a different way. Which concept to implement may depend on factors such as user preferences, context, and the complexity of the mobile application. These ideas form the basis for the digital prototype that follows, in which one of them is selected and developed further to show how it might work in practice.

Digital Prototype:

In this section we demonstrate one of the ideas from the ideation stage with a digital prototype created on the Figma design platform. The prototype illustrates the concept of distinguishing between text input and command input on the gesture keyboard for touchscreen mobile devices.

Link to Figma Digital Prototype: https://www.figma.com/file/KiqYeZZp8WD7tQERK4wIhC/Keyboards-(Community)?type=design&node-id=0%3A1&mode=design&t=y3gGOsMcwRr7GVpw-1
Figure 2: Prototype

Description of the Digital Prototype:

The Figma digital prototype gives concrete form to our approach to the problem of seamlessly combining command input with text input in the gesture keyboard paradigm. It is designed to look and feel like a touchscreen mobile device.

Key Features and Functionalities Demonstrated in the Prototype:

1. Gesture Differentiation: The prototype shows how a particular motion, such as a double tap, is used to enter command input mode.

2. Text Entry: Users can see how the gesture keyboard works in its conventional text input mode, entering text by dragging a finger across the virtual keyboard layout.

3. Command Activation: The prototype illustrates how specific gestures, or groups of gestures, activate different commands. For instance, a swipe gesture activates the "save" command, and a circular motion starts the "undo" command.

4. Visual Feedback: Clear visual cues on the screen, as described in the ideation section, help users see which mode they are in (text input or command input).

5. User Interaction: People can engage with the digital prototype to see how the proposed solution improves the user interface of touchscreen mobile devices.
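The command mappings and mode cues listed above could be dispatched with a simple lookup. This sketch assumes a recognizer has already classified the motion into a named gesture; the names and cue strings are illustrative, not taken from the prototype itself.

```python
# Commands demonstrated in the prototype: swipe -> save, circle -> undo.
PROTOTYPE_COMMANDS = {"swipe": "save", "circle": "undo"}

# Visual feedback cue per mode (feature 4).
MODE_CUES = {"text": "keyboard layout visible", "command": "command icons shown"}

def on_gesture(gesture, mode):
    """Return (action, on-screen cue) for a recognised gesture in a given mode."""
    if mode == "command" and gesture in PROTOTYPE_COMMANDS:
        return PROTOTYPE_COMMANDS[gesture], MODE_CUES["command"]
    return None, MODE_CUES[mode]  # None: handle as ordinary text input

print(on_gesture("swipe", "command"))  # ('save', 'command icons shown')
print(on_gesture("swipe", "text"))     # (None, 'keyboard layout visible')
```

Keeping the command table mode-gated is what prevents a text-entry trace that happens to resemble a swipe from accidentally triggering "save", the risk raised in scenario step 6.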

Video Walkthrough Description:

The video walkthrough complements the Figma digital prototype by offering an interactive display of its behaviour and capabilities. Viewers can follow step-by-step instructions to learn how to switch between text input and command input modes and how to use gestures to carry out different commands.


Together, the digital prototype and video walkthrough give practical form to our response to the problem posed by this task. Through these resources, users can see how the proposed concept distinguishes between text and command input on gesture keyboards, ultimately improving the usability and effectiveness of touchscreen mobile devices.

The video walkthrough and Figma prototype are crucial in visualising the real-world use of our design and its potential benefits to users. They offer a hands-on experience that supplements the scenario map and ideation, and they show how our design concept manifests in practice.

Conclusion:

In conclusion, the central problem of this assignment has been to extend the gesture keyboard paradigm to accommodate both text entry and command activation smoothly on touchscreen mobile devices. We have attempted to offer creative answers to this difficult problem through an organised method that includes scenario mapping, ideation, digital prototyping, and a video walkthrough.

The assignment's motivation was the growing demand for user interface optimisation on touchscreen mobile devices. The difficult part was enabling users to switch between text entry and command activation without sacrificing the effectiveness and usability of the gesture keyboard.
Drawing from a variety of innovative ideas, we put forward concepts using gesture differentiation, gesture triggers, contextual gestures, gesture combinations, visual feedback, and voice commands. These concepts sought to establish a distinct and understandable line between text input and command input on the gesture keyboard.

We visualised the user journey with our scenario map, identifying the crucial touchpoints where text and command input intersect. The ideation section built on these touchpoints, showcasing fresh ideas that could make gesture keyboards on touchscreen mobile devices more useful.

Our Figma-hosted digital prototype, accompanied by the video walkthrough, gave a concrete example of one of our suggested solutions. The prototype demonstrated how users could easily transition between text input and command activation modes using gestures, while still enjoying a seamless and simple user interface.

This project has investigated UI design principles in the context of current mobile technology. It has emphasised the significance of user-centric design, where the user's effectiveness and experience are paramount. We believe our suggested solutions are a step towards helping people use touchscreen mobile devices more smoothly and with greater control.
References

Cheng, Y. et al., 2021. Gesture recognition based on surface electromyography-feature image. Concurrency and Computation: Practice and Experience, 33(6), p. e6051.

Jin, Y. et al., 2022. EarCommand: "Hearing" Your Silent Speech Commands In Ear. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(2), pp. 1-28.

Leno, V. et al., 2021. Robotic process mining: vision and challenges. Business & Information Systems Engineering, 63, pp. 301-314.

Mackamul, E. et al., 2023. Exploring visual signifier characteristics to improve the perception of affordances of in-place touch inputs. Proceedings of the ACM on Human-Computer Interaction, 7(MHCI), pp. 1-32.

Reed, R., 2023. Seniors Guide to iPhone and Apple Watch: 2 in 1: The Definitive and Intuitive Step-by-Step Manual to Master your New iPhone and Apple Watch with Tips and Tricks for Senior Beginner Users. Blue Nile Publishing LLC.

Rustambek, M., 2023. The role of web programming in today's programming world. World Scientific Research Journal, 16(1), pp. 43-50.
