
A

Seminar Report
On

‘Touchless Screen’

Presented by
Abhishek Bagate
Shivani Bacchewar

TE[CSE-I]
Computer Science and Engineering
2018-2019
Guided By
Dr. Mrs. M.Y. Joshi
(Department of Computer Science and Engineering)
Submitted to

MGM’s College of Engineering, Nanded


Under

Swami Ramanand Teerth Marathwada University, Nanded


Certificate
This is to certify that the report entitled

‘Touchless Screen’
Submitted By

Abhishek Bagate
Shivani Bacchewar
in a satisfactory manner as partial fulfillment of

TE [CSE-I] in Computer Science and Engineering

To

MGM’s College of Engineering, Nanded


Under

Swami Ramanand Teerth Marathwada University, Nanded


has been carried out under my guidance.

Dr. Mrs. M.Y. Joshi

Guide

Dr. Mrs. Rajurkar A. M.
Head
Dept. of Computer Science & Engg.

Dr. Mrs. Lathkar G. S.
Director
MGM’s College of Engineering, Nanded


ACKNOWLEDGEMENT

We are greatly indebted to our seminar guide Dr. Mrs. M.Y. Joshi for her able guidance
throughout this work. It has been an altogether different experience to work with her, and
we would like to thank her for her help, suggestions and numerous discussions.
We gladly take this opportunity to thank Dr. Mrs. Rajurkar A. M. (Head of
Computer Science & Engineering, MGM’s College of Engineering, Nanded).
We are heartily thankful to Dr. Mrs. Lathkar G. S. (Director, MGM’s College of
Engineering, Nanded) for providing facilities during the progress of this seminar, and for
her kind help, guidance and inspiration.
Last but not least, we are also thankful to all those who helped directly or indirectly to
develop this seminar and complete it successfully.

With Deep Reverence,

Abhishek Bagate
Shivani Bacchewar
[TECSE-I]

ABSTRACT

It was touch screens which initially created a great furore. Gone are the days when you
had to fiddle with a touch screen and end up scratching it. Touch screen displays are
ubiquitous worldwide. Frequently touching a touchscreen display with a pointing device
such as a finger can result in the gradual de-sensitization of the touchscreen to input and
can ultimately lead to failure of the touchscreen. To avoid this, a simple user interface for
touchless control of electrically operated equipment is being developed. Elliptic Labs’
innovative technology lets you control gadgets like computers, MP3 players or mobile
phones without touching them. Unlike other systems, which depend on distance to the
sensor or sensor selection, this system depends on hand and/or finger motions: a hand
wave in a certain direction, a flick of the hand in one area, holding the hand in one
area, or pointing with one finger, for example. The device is based on optical pattern
recognition using a solid-state optical matrix sensor with a lens to detect hand motions.
This sensor is then connected to a digital image processor, which interprets the patterns
of motion and outputs the results as signals to control fixtures, appliances, machinery, or
any device controllable through electrical signals.

CONTENTS

ACKNOWLEDGEMENT
ABSTRACT
CONTENTS
LIST OF FIGURES

1 HISTORY
1.1 The Decade of Touch
1.2 Touch Screens for Everyone
1.3 2000’s and Beyond
1.4 Wavefront’s Gesture-Based Portfolio Wall
1.5 Mutual Capacitive Sensing in Sony’s Smart Skin
1.6 Failed Tablets & Microsoft Research TouchLight
2 INTRODUCTION
2.1.1 Touchless Monitor
2.1.2 Working
2.1.3 Touch-less Gives Glimpse of GBUI
2.1.4 The GBUI as Seen in the Matrix
3 Touch-less UI
3.1 Touch-less SDK
3.2 Window-Shop Pic
4 Touch Wall
5 Technologies
5.1 Resistive Touchscreen
5.2 Surface Acoustic Wave
5.3 Capacitive Sensing
5.4 Surface Capacitance
5.5 Mutual Capacitance
5.6 Self-Capacitance
5.6.1 Use of Styli on Capacitive Screens
5.8 Infrared Acrylic Projection
5.9 Dispersive Signal Technology
5.10 Acoustic Pulse Recognition
6 Ergonomics and Usage
6.1 Touchscreen Accuracy
6.2 Hand Position, Digit Used and Switching
6.3 Combined with Haptics
6.4 Gorilla Arm
6.5 Fingerprints
6.6 ADVANTAGES
6.7 DISADVANTAGES
7 CONCLUSION
8 REFERENCES

LIST OF FIGURES

1 Original MessagePad 100
2 Gesture pad, c. 2000’s and beyond
3 Wavefront’s gesture-based Portfolio Wall
4 Smart Skin sensed gestures
5 Touching to display
6 Touchless Monitor
7 Touch-less UI Device
8 Touch-less UI Device
9 Window-Shop Pic
10 Capacitive
11 Capacitive touchscreen of a mobile phone
12 Back side of a Multitouch Globe, based on projected capacitive touch (PCT) technology
13 8 x 8 projected capacitance touchscreen manufactured using 25 micron insulation-coated copper wire embedded in a clear polyester film
14 Schema of projected-capacitive touchscreen
15 Infrared grid
16 Fingerprints and smudges on an iPad touchscreen
Chapter 1
HISTORY [1]

1.1 1980’s: The decade of touch


In 1982, the first human-controlled multi touch device was developed at the University
of Toronto by Nimish Mehta. It wasn't so much a touch screen as it was a touch-tablet.
The Input Research Group at the university figured out that a frosted-glass panel with a
camera behind it could detect action as it recognized the different "black spots" showing
up on-screen. Bill Buxton has played a huge role in the development of multi touch
technology. The touch surface was a translucent plastic filter mounted over a sheet of
glass, side-lit by a fluorescent lamp. A video camera was mounted below the touch
surface, and optically captured the shadows that appeared on the translucent filter. (A
mirror in the housing was used to extend the optical path). The output of the camera was
digitized and fed into a signal processor for analysis.
Touch screens began being heavily commercialized at the beginning of the 1980s. HP (then
still formally known as Hewlett-Packard) tossed its hat in with the HP-150 in September of
1983. The computer used MS-DOS and featured a 9-inch Sony CRT surrounded by infrared
(IR) emitters and detectors that could sense where the user's finger came down on the screen.
The system cost about $2,795, but it was not immediately embraced because it had some
usability issues. For instance, poking at the screen would block other IR rays that could tell
the computer where the finger was pointing. This resulted in what some called "Gorilla Arm,"
referring to muscle fatigue that came from a user sticking his or her hand out for so long.

The first multi touch screen was developed at Bell Labs in 1984. Bill Buxton reports that the
screen, created by Bob Boie, "used a transparent capacitive array of touch sensors overlaid on a
CRT." It allowed the user to "manipulate graphical objects with fingers with excellent response
time." The discovery helped create the multi touch technology that we use today in tablets and
smart phones.

In 1984, Fujitsu released a touch pad for the Micro 16 to accommodate the complexity of
kanji characters, which were stored as tiled graphics. In 1985, Sega released the Terebi Oekaki,
also known as the Sega Graphic Board, for the SG-1000 video game console and SC-3000 home
computer. It consisted of a plastic pen and a plastic board with a transparent window where pen
presses are detected.
1.2 1990’s: Touch screens for everyone!

In 1993, Apple launched a touch screen PDA device: the Newton PDA. Though the
Newton platform had begun in 1987, the MessagePad was the first in the series of devices
from Apple to use the platform. As Time notes, Apple's CEO at the time, John Sculley, actually
coined the term "PDA" (or "personal digital assistant"). Like IBM's Simon Personal
Communicator, the MessagePad featured handwriting recognition software and was controlled
with a stylus.

iGesture Pad: Westerman and his faculty advisor, John Elias, eventually formed a company
called FingerWorks. The group began producing a line of multi touch gesture-based products,
including a gesture-based keyboard called the TouchStream. This helped those who were
suffering from disabilities like repetitive strain injuries and other medical conditions.
FingerWorks was eventually acquired by Apple in 2005, and many attribute technologies like the
multi touch Trackpad or the iPhone's touch screen to this acquisition.

Figure 1 - Original MessagePad 100

1.3 2000’s and beyond

With so many different technologies accumulating in the previous decades, the
2000s were the time for touch screen technologies to really flourish. The 2000s were
also the era when touch screens became the favorite tool for design collaboration.

Figure 2 - Gesture pad, c. 2000’s and beyond

1.4 2001: Alias|Wavefront’s gesture-based Portfolio Wall

As the new millennium approached, companies were pouring more resources into
integrating touch screen technology into their daily processes. 3D animators and designers
were especially targeted with the advent of the Portfolio Wall. This was a large-format
touch screen meant to be a dynamic version of the boards that design studios use to track
projects.

Though development started in 1999, the Portfolio Wall was unveiled at SIGGRAPH in 2001
and was produced in part by a joint collaboration between General Motors and the team at
Alias|Wavefront. Buxton, who now serves as principal researcher at Microsoft Research, was the
chief scientist on the project. "We're tearing down the wall and changing the way people
effectively communicate in the workplace and do business," he said back then. "Portfolio
Wall's gestural interface allows users to completely interact with a digital asset." The Portfolio
Wall used a simple, easy-to-use, gesture-based interface. It allowed users to inspect
images, animations, and 3D files with just their fingers. It was also easy to scale images, fetch
3D models, and play back video.

1.5 2002: Mutual capacitive sensing in Sony’s Smart Skin

In 2002, Sony introduced a flat input surface that could recognize multiple hand positions and
touch points at the same time. The company called it Smart Skin. The technology worked by
calculating the distance between the hand and the surface with capacitive sensing and a mesh-
shaped antenna. Unlike the camera-based gesture recognition system in other technologies, the
sensing elements were all integrated into the touch surface. This also meant that it would not
malfunction in poor lighting conditions. The ultimate goal of the project was to transform
surfaces that are used every day, like your average table or a wall, into interactive ones with
the use of a PC nearby. However, the technology did more for capacitive touch technology
than may have been intended, including multiple contact points.

Figure 3 - Alias|Wavefront’s gesture-based Portfolio Wall

Jun Rekimoto at the Interaction Laboratory in Sony's Computer Science Laboratories


noted the advantages of this technology in a whitepaper. He said technologies like Smart
Skin offer "natural support for multiple-hand, multiple-user operations." More than two
users can touch the surface simultaneously without any interference. Two
prototypes were developed to show the Smart Skin used as an interactive table and a
gesture-recognition pad.

The second prototype used a finer mesh compared to the first so that it
could map out more precise coordinates of the fingers. Overall, the technology was meant to
offer a real-world feel of virtual objects, essentially recreating how humans use their
fingers to pick up objects and manipulate them.

Figure 4 - Smart Skin sensed gestures

1.6 2002-2011: Failed tablets and Microsoft Research’s Touch Light

Multi touch technology struggled in the mainstream, appearing in specialty devices


but never quite catching a big break. One almost came in 2002, when Canada-based DSI
Datotech developed the HandGear + GRT device (the acronym "GRT" referred to
the device's Gesture Recognition Technology). The device's multipoint touchpad worked a
bit like the aforementioned iGesture Pad in that it could recognize various gestures and
allow users to use it as an input device to control their computers. HandGear also enabled
users to "grab" three-dimensional objects in real time, further extending that idea of
freedom and productivity in the design process. The company even made the API
available for developers via Autodesk. Unfortunately, as Buxton mentions in his
overview of multi touch, the company ran out of money before their product shipped and
DSI closed its doors. Two years later, Andrew D. Wilson, an employee at Microsoft
Research, developed a gesture-based imaging touch screen and 3D display. The Touch
Light used a rear projection display to transform a sheet of acrylic plastic into a surface that
was interactive. The display could sense multiple fingers and hands of more than one user,
and because of its 3D capabilities, it could also be used as a makeshift mirror. The Touch
Light was a neat technology demonstration, and it was eventually licensed out for
production to Eon Reality before the technology proved too expensive to be packaged into
a consumer device.

2006: Multi touch sensing through "frustrated total internal reflection". In 2006, Jeff Han gave
the first public demonstration of his intuitive, interface-free touch screen at a TED
Conference in Monterey, CA. In his presentation, Han moved and manipulated photos on a
giant light box using only his fingertips. He flicked photos, stretched and pinched them away,
all with a captivating natural ease. "This is something Google should have in their lobby," he
joked. The demo showed that a high-resolution, scalable touch screen was possible to build
without spending too much money.

Han had discovered that "robust" multi touch sensing was possible using "frustrated
total internal reflection" (FTIR), a technique from the biometrics community used for
fingerprint imaging. FTIR works by shining light through a piece of acrylic or plexiglass. The
light (infrared is commonly used) bounces back and forth between the top and bottom of the
acrylic as it travels. When a finger touches down on the surface, the beams scatter around the edge

where the finger is placed, hence the term "frustrated." The images that are generated look like
white blobs and are picked up by an infrared camera. The computer analyzes where the finger
is touching to mark its placement and assign a coordinate. The software can then analyze the
coordinates to perform a certain task, like resizing or rotating objects.
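The blob-finding step just described (threshold the infrared image, group connected bright pixels, take each group's centroid as a touch point) can be sketched in code. This is only a simplified illustration on a synthetic frame, not Han's actual software; the threshold value and the 2D-list frame format are assumptions:

```python
def find_touch_blobs(frame, threshold=128):
    """Locate bright 'frustrated' blobs in a grayscale frame.

    frame: 2D list of pixel intensities (0-255), standing in for an
    infrared camera image. Returns one (row, col) centroid per blob.
    """
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected bright region.
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # The blob's centroid marks the touch coordinate.
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cy, cx))
    return centroids
```

Tracking these centroids from frame to frame is what lets the software above recognize gestures such as pinching or flicking.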

Figure 5 - Touching to display


In 2007, the Microsoft Surface was essentially a computer embedded into a medium-sized
table, with a large, flat display on top. The screen's image was rear-projected onto the
display surface from within the table, and the system sensed where the user touched the
screen through cameras mounted inside the table looking upward toward the user. As
fingers and hands interacted with what's on screen, the Surface's software tracked the
touch points and triggered the correct actions. Later in its development cycle, Surface also
gained the ability to identify devices via RFID.

In 2011, Microsoft partnered with manufacturers like Samsung to produce sleeker,
newer tabletop Surface hardware. For example, the Samsung SUR40 has a 40-inch 1080p
LED display, and it drastically reduced the amount of internal space required for the touch sensing
mechanisms. At four inches thick, it was thinner than its predecessors, and the size
reduction made it possible to mount the display on a wall rather than requiring a table to
house the camera and sensors. It cost around $8,400 at the time of its launch and ran
Windows 7 and Surface 2.0 software.

Chapter 2

INTRODUCTION [2]

The touchless touch screen sounds like it would be nice and easy; however, after closer
examination it looks like it could be quite a workout. This unique screen is made by TouchKo,
White Electronic Designs, and Groupe 3D. The screen resembles the Nintendo Wii without the
Wii Controller. With the touchless touch screen your hand doesn’t have to come in contact with
the screen at all; it works by detecting your hand movements in front of it. This is a pretty unique
and interesting invention, until you break out in a sweat. This technology doesn’t compare
to the hologram-like IO2 Technology Heliodisplay M3, but that’s for anyone who has $18,100
lying around.
You probably won’t see this screen in stores any time soon. Everybody loves a touch screen, and
when you get a gadget with a touch screen the experience is really exhilarating. When the iPhone
was introduced, everyone felt the same. But gradually, the exhilaration started fading. While
using the phone with the fingertip or with the stylus, the screen started getting lots of fingerprints
and scratches. Even with a screen protector, dirty marks over such a beautiful glossy screen
are a strict no-no. The same thing happens with the iPod Touch. Most of the time we have to wipe
the screen to get a better, unobtrusive view of the screen.

Thanks to Elliptic Labs’ innovative technology, you can control gadgets like computers,
MP3 players or mobile phones without touching them. Simply point your finger in the air
towards the device and move it accordingly to control the navigation in the device. They term
this a "touchless human/machine user interface for 3D navigation".


A touchscreen, or touch screen, is an input device normally layered on top of
an electronic visual display of an information processing system. A user can give input or control
the information processing system through simple or multi-touch gestures by touching the screen
with a special stylus or one or more fingers. Some touchscreens use ordinary or specially coated
gloves to work, while others may only work using a special stylus or pen. The user can use the
touchscreen to react to what is displayed and, if the software allows, to control how it is
displayed; for example, zooming to increase the text size.

The touchscreen enables the user to interact directly with what is displayed, rather than using
a mouse, touchpad, or other such devices (other than a stylus, which is optional for most modern
touchscreens).

Touchscreens are common in devices such as Nintendo game consoles, personal
computers, electronic voting machines, and point-of-sale (POS) systems. They can also be
attached to computers or, as terminals, to networks. They play a prominent role in the design of
digital appliances such as personal digital assistants (PDAs) and some e-readers.

The popularity of smartphones, tablets, and many types of information appliances is driving the
demand and acceptance of common touchscreens for portable and functional electronics.
Touchscreens are found in the medical field, heavy industry, automated teller machines (ATMs),
and kiosks such as museum displays or room automation, where keyboard and mouse systems do
not allow a suitably intuitive, rapid, or accurate interaction by the user with the display's content.

Historically, the touchscreen sensor and its accompanying controller-based firmware have been
made available by a wide array of after-market system integrators, and not by display, chip,
or motherboard manufacturers. Display manufacturers and chip manufacturers have

acknowledged the trend toward acceptance of touchscreens as a user interface component and
have begun to integrate touchscreens into the fundamental design of their products.

2.1.1 TOUCH LESS MONITOR:

Sure, everybody is doing touchscreen interfaces these days, but this is the first time I‘ve seen a
monitor that can respond to gestures without actually having to touch the screen.

The monitor, based on technology from TouchKo, was recently demonstrated by White
Electronic Designs and Tactyl Services at the CeBIT show. Designed for applications where
touch may be difficult, such as for doctors who might be wearing surgical gloves, the display
features capacitive sensors that can read movements from up to 15cm away from the screen.
Software can then translate gestures into screen commands.

Touchscreen interfaces are great, but all that touching, like foreplay, can be a little bit of a drag.
Enter the wonder kids from Elliptic Labs, who are hard at work on implementing a touchless
interface. The input method is, well, in thin air. The technology detects motion in 3D and
requires no special worn sensors for operation. By simply pointing at the screen, users can
manipulate the object being displayed in 3D. Details are light on how this actually functions, but
what we do know is this:

What is the technology behind it?

It obviously requires a sensor, but the sensor is neither hand mounted nor present on the screen.
The sensor can be placed either on the table or near the screen. And the hardware setup is so
compact that it can be fitted into a tiny device like an MP3 player or a mobile phone. It recognizes
the position of an object from as far as 5 feet away.

2.1.2 WORKING:

The system is capable of detecting movements in 3-dimensions without ever having to put your
fingers on the screen. Their patented touchless interface doesn‘t require that you wear any
special sensors on your hand either. You just point at the screen (from as far as 5 feet away), and
you can manipulate objects in 3D.
Figure 6 - Touchless Monitor

Sensors are mounted around the screen that is being used; by interacting in the line-of-sight of
these sensors, the motion is detected and interpreted into on-screen movements. What is to stop
unintentional gestures being used as input is not entirely clear, but it looks promising
nonetheless. The best part? Elliptic Labs says their technology will be easily small enough to be
implemented into cell phones and the like. iPod touchless, anyone?
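Elliptic Labs has not published its implementation details, but the geometry any such system must solve can be illustrated: if three sensors at known positions each measure their distance to the hand (for example by ultrasonic time-of-flight), the hand's position follows by trilateration. The planar layout and formulation below are illustrative assumptions only, not Elliptic Labs' design:

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Find the (x, y) point at distances r1, r2, r3 from sensors p1, p2, p3.

    Subtracting pairs of circle equations cancels the quadratic terms,
    leaving two linear equations A @ [x, y] = b to solve directly.
    """
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    # Coefficients of the two linear equations.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x2), 2 * (y3 - y2)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    # Solve the 2x2 system by Cramer's rule.
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

For example, sensors at (0, 0), (10, 0) and (0, 10) reporting distances to a finger at (3, 4) recover exactly that point; a real system would repeat this every few milliseconds to turn motion into navigation commands.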

2.1.3 Touch-less Gives Glimpse of GBUI:


We have seen the futuristic user interfaces of movies like Minority Report and The Matrix
Revolutions, where people wave their hands in 3 dimensions and the computer understands what
the user wants and shifts and sorts data with precision. Microsoft's XD Huang demonstrated how
his company sees the future of the GUI at ITEXPO this past September, in fact. But at the show,
the example was in 2 dimensions, not 3.

2.1.4 The GBUI as seen in the Matrix


Microsoft has shown its vision of the UI at its Redmond headquarters, and it involves lots of
gestures which allow you to take applications and forward them on to others with simple hand
movements. The demos included the concept of software understanding business processes and
helping you work.

So after reading a document, you could just push it off the side of your screen and the system
would know to post it on an intranet and also send a link to a specific group of people.

Chapter 3
Touch-less UI

The basic idea described in the patent is that there would be sensors arrayed around the perimeter
of the device capable of sensing finger movements in 3-D space. The user could use her fingers
similarly to a touch phone, but actually without having to touch the screen.

That's cool, isn't it? I think the idea is not only great because user input will no longer be
limited to 2-D, but also because I can use my thick, dirty, bandaged, etc. fingers as well (as
opposed to a "plain" touch UI). I'm a bit skeptical, though, about how accurate it can be, and
whether the software will have AI or the user will have to learn how to move her fingers. We'll
see, hopefully very soon! Finally, there is one more thing to mention: the built-in accelerometer.

Figure 7 - Touch-less UI Device

Figure 8 - Touch-less UI Device

3.1 Touch-less SDK:

The Touchless SDK is an open source SDK for .NET applications. It enables developers to
create multi-touch based applications using a webcam for input. Color-based
markers defined by the user are tracked and their information is published through
events to clients of the SDK. In a nutshell, the Touchless SDK enables touch without touching.
Microsoft Office Labs has released "Touchless," a webcam-driven multi-touch
interface SDK that enables "touch without touching."

Using the SDK lets developers offer users "a new and cheap way of experiencing multi-touch
capabilities, without the need of expensive hardware or software. All the user needs is a camera,"
to track the multi-colored objects as defined by the developer. Just about any webcam will work.

Touch-less demo:
The Touchless Demo is an open source application that anyone with a webcam can use to
experience multi-touch, no geekiness required. The demo was created using the Touchless SDK
and Windows Forms with C#. There are 4 fun demos: Snake, where you control a snake with a
marker; Defender, an up-to-4-player version of a pong-like game; Map, where you can rotate,
zoom, and move a map using 2 markers; and Draw, where the marker is used to... guess what... draw!

Mike demonstrated Touchless at a recent Office Labs Productivity Science Fair, where it was
voted by attendees as "most interesting project." If you wind up using the SDK, we would love
to hear what use you make of it!
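The SDK's central idea, tracking a user-defined color marker across webcam frames, can be sketched independently of the SDK itself (which is .NET and event-driven). The synthetic frame format and the matching tolerance below are assumptions for illustration, not the Touchless SDK's actual API:

```python
def track_marker(frame, target_rgb, tolerance=30):
    """Return the centroid of pixels close to target_rgb, or None.

    frame: 2D list of (r, g, b) tuples, one per pixel, standing in
    for a webcam frame. A pixel matches when every channel is within
    `tolerance` of the target color.
    """
    hits = [
        (y, x)
        for y, row in enumerate(frame)
        for x, (r, g, b) in enumerate(row)
        if (abs(r - target_rgb[0]) <= tolerance
            and abs(g - target_rgb[1]) <= tolerance
            and abs(b - target_rgb[2]) <= tolerance)
    ]
    if not hits:
        return None  # marker not visible in this frame
    cy = sum(h[0] for h in hits) / len(hits)
    cx = sum(h[1] for h in hits) / len(hits)
    return cy, cx
```

Running this per frame and publishing the centroid whenever it changes is essentially what an event-driven wrapper like the SDK does for its clients.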

3.2 Window-shop pic:

Figure 9 - Window-Shop Pic

In addition, it is worth pointing out that you may need a few cameras in stereo to maximize
accuracy and you could theoretically use your hands as a mouse - meaning you can likely take
advantage of all the functions of the GBUI while resting your hand on the desk in front of you
for most of the day.

At some point we will see this stuff hit the OS and when that happens, the consumer
can decide if the mouse and keyboard will rule the future or the GBUI will be the killer tech of
the next decade.

Chapter 4

Touch wall

Touch Wall refers to the touch screen hardware setup itself; the corresponding software to run
Touch Wall, which is built on a standard version of Vista, is called Plex. Touch Wall and Plex
are superficially similar to Microsoft Surface, a multi-touch table computer that was introduced
in 2007 and which recently became commercially available in select AT&T stores. It is a
fundamentally simpler mechanical system, and is also significantly cheaper to produce. While
Surface retails at around $10,000, the hardware to "turn almost anything into a multi-touch
interface" for Touch Wall is just "hundreds of dollars".

Touch Wall consists of three infrared lasers that scan a surface. A camera notes when
something breaks through the laser line and feeds that information back to the Plex software.
Early prototypes, say Pratley and Sands, were made simply on a cardboard screen. A projector
was used to show the Plex interface on the cardboard, and the system worked fine. Touch Wall
certainly isn't the first multi-touch product we've seen (see iPhone). In addition to Surface, of
course, there are a number of early prototypes emerging in this space. But what Microsoft has
done with a few hundred dollars' worth of readily available hardware is stunning.
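Microsoft has not published Touch Wall's internals, so the following is only a plausible sketch of the camera-side step: a finger breaking the laser sheet scatters light back to the camera as a bright spot, and the spot's position along the scanline maps to a position on the wall. The linear mapping and threshold are assumed (no lens correction or calibration):

```python
def laser_touch_x(scanline, wall_width_m, threshold=200):
    """Map a camera scanline to a touch position along the wall.

    scanline: list of pixel intensities across the laser plane; a finger
    breaking the laser sheet scatters light back as a bright region.
    Returns the touch x-position in metres, or None if nothing breaks
    the beam.
    """
    bright = [i for i, v in enumerate(scanline) if v >= threshold]
    if not bright:
        return None
    # Centre of the bright scatter region, scaled linearly to wall width.
    centre = sum(bright) / len(bright)
    return centre / (len(scanline) - 1) * wall_width_m
```

A real system would combine readings from all three lasers (and calibrate the camera-to-wall mapping) before handing coordinates to software like Plex.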
It‘s also clear that the only real limit on the screen size is the projector, meaning that entire walls
can easily be turned into a multi touch user interface. Scrap those white boards in the office, and
make every flat surface into a touch display instead. You might even save some money.
What‘s next?
Many personal computers will likely have similar screens in the near future. But touch interfaces
are nothing new -- witness ATM machines.

How about getting completely out of touch? A startup called LM3Labs says it's working with
major computer makers in Japan, Taiwan and the US to incorporate touchless navigation into
their laptops. Called AirStrike, the system uses tiny charge-coupled device (CCD) cameras
integrated into each side of the keyboard to detect user movements.

You can drag windows around or close them, for instance, by pointing and gesturing
in midair above the keyboard. You should be able to buy an AirStrike-equipped laptop next year,
with high-end stand-alone keyboards to follow. Any such system is unlikely to replace typing
and mousing. But that's not the point. AirStrike aims to give you an occasional quick break from
those activities.

Chapter 5
Technologies [3]

There are a variety of touchscreen technologies with different methods of sensing touch.

5.1 Resistive touchscreen

A resistive touchscreen panel comprises several thin layers, the most important of which are two
transparent electrically resistive layers facing each other with a thin gap between. The top layer
(that which is touched) has a coating on the underside surface; just beneath it is a similar
resistive layer on top of its substrate. One layer has conductive connections along its sides, the
other along top and bottom. A voltage is applied to one layer, and sensed by the other. When an
object, such as a fingertip or stylus tip, presses down onto the outer surface, the two layers touch
to become connected at that point. The panel then behaves as a pair of voltage dividers, one axis
at a time. By rapidly switching between each layer, the position of pressure on the screen can be
determined.
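The "pair of voltage dividers" readout reduces to simple arithmetic: the voltage sensed on one layer, digitized by an ADC, is proportional to the touch position along the driven layer's axis. A minimal sketch, assuming a 10-bit ADC (the resolution is an assumption, not a property of resistive panels in general):

```python
ADC_MAX = 1023  # assumed 10-bit ADC full-scale reading

def touch_position(adc_x, adc_y, width, height):
    """Convert two voltage-divider ADC readings to screen coordinates.

    adc_x is sampled while the X layer is driven (the Y layer senses),
    adc_y while the Y layer is driven. Each reading divides the supply
    voltage in proportion to where the two layers touch.
    """
    x = adc_x / ADC_MAX * width
    y = adc_y / ADC_MAX * height
    return x, y
```

Rapidly alternating which layer is driven, as the text describes, yields a fresh (x, y) pair on every pass; controllers typically also average several samples to suppress noise from the pressing finger.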

Resistive touch is used in restaurants, factories and hospitals due to its high tolerance for liquids
and contaminants. A major benefit of resistive-touch technology is its low cost. Additionally, as
only sufficient pressure is necessary for the touch to be sensed, they may be used with gloves on,
or by using anything rigid as a finger substitute. Disadvantages include the need to press down,
and a risk of damage by sharp objects. Resistive touchscreens also suffer from poorer contrast,
due to having additional reflections (i.e. glare) from the layers of material placed over the
screen. This is the type of touchscreen used by Nintendo in the DS family, the 3DS family, and
the Wii U GamePad.

5.2 Surface acoustic wave

Surface acoustic wave (SAW) technology uses ultrasonic waves that pass over the touchscreen
panel. When the panel is touched, a portion of the wave is absorbed. The change in ultrasonic
waves is processed by the controller to determine the position of the touch event. Surface
acoustic wave touchscreen panels can be damaged by outside elements. Contaminants on the
surface can also interfere with the functionality of the touchscreen.

Figure 10 - Capacitive

Figure 11 - Capacitive touchscreen of a mobile phone

The Casio TC500 capacitive touch sensor watch from 1983, with angled light exposing the
touch sensor pads and traces etched onto the top watch glass surface.
5.3 Capacitive sensing

A capacitive touchscreen panel consists of an insulator, such as glass, coated with a
transparent conductor, such as indium tin oxide (ITO). As the human body is also an electrical
conductor, touching the surface of the screen results in a distortion of the
screen's electrostatic field, measurable as a change in capacitance. Different technologies may be
used to determine the location of the touch. The location is then sent to the controller for
processing.

Unlike a resistive touchscreen, one cannot use a capacitive touchscreen through most types of
electrically insulating material, such as gloves. This disadvantage especially affects usability in
consumer electronics, such as touch tablet PCs and capacitive smartphones in cold weather. It
can be overcome with a special capacitive stylus, or a special-application glove with an
embroidered patch of conductive thread allowing electrical contact with the user's fingertip.

Leading capacitive display manufacturers continue to develop thinner and more accurate
touchscreens. Those for mobile devices are now being produced with 'in-cell' technology, such as
in Samsung's Super AMOLED screens, that eliminates a layer by building the capacitors inside
the display itself. This type of touchscreen reduces the visible distance between the user's finger
and what the user is touching on the screen, creating a more-direct contact with the image of
displayed content and enabling taps and gestures to be more responsive.

A simple parallel-plate capacitor has two conductors separated by a dielectric layer. Most of the
energy in this system is concentrated directly between the plates. Some of the energy spills over
into the area outside the plates, and the electric field lines associated with this effect are called
fringing fields. Part of the challenge of making a practical capacitive sensor is to design a set of
printed circuit traces which direct fringing fields into an active sensing area accessible to a user.
A parallel-plate capacitor is not a good choice for such a sensor pattern. Placing a finger near
fringing electric fields adds conductive surface area to the capacitive system. The additional
charge storage capacity added by the finger is known as finger capacitance, or CF. The
capacitance of the sensor without a finger present is known as parasitic capacitance, or CP.

5.4 Surface capacitance

In this basic technology, only one side of the insulator is coated with a conductive layer. A small
voltage is applied to the layer, resulting in a uniform electrostatic field. When a conductor, such
as a human finger, touches the uncoated surface, a capacitor is dynamically formed. The sensor's
controller can determine the location of the touch indirectly from the change in the capacitance
as measured from the four corners of the panel.
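
The four-corner measurement can be illustrated with a simplified linear model in Python. Real controllers apply calibrated, non-linear corrections, so treat this purely as a sketch of the idea.

```python
def surface_cap_position(i_ul, i_ur, i_ll, i_lr):
    """Estimate a touch position on a surface-capacitive panel from the
    currents measured at the four corners (upper-left, upper-right,
    lower-left, lower-right). Returns normalised (x, y) in [0, 1]."""
    total = i_ul + i_ur + i_ll + i_lr
    x = (i_ur + i_lr) / total   # more current drawn on the right => larger x
    y = (i_ll + i_lr) / total   # more current drawn at the bottom => larger y
    return x, y
```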

As it has no moving parts, it is moderately durable but has limited resolution, is prone to false
signals from parasitic capacitive coupling, and needs calibration during manufacture. It is
therefore most often used in simple applications such as industrial controls and kiosks.

Figure 12: Back side of a multitouch globe, based on projected capacitive touch (PCT)
technology

Figure 13: 8 × 8 projected-capacitance touchscreen manufactured using 25-micron
insulation-coated copper wire embedded in a clear polyester film

Figure 14: Schema of a projected-capacitive touchscreen

Projected capacitive touch (PCT; also PCAP) technology is a variant of capacitive touch
technology.

This technology was first developed by Ronald and Malcolm Binstead in 1984, when a simple 16
key capacitive touchpad was invented which could sense a finger through very thick glass, even
though the signal to be sensed was significantly smaller than the capacitance changes caused by
varying environmental factors such as humidity, dirt, rain and temperature.

Accurate sensing was achieved due to:

1) the slow but continuous updating of the "no-touch" reference value for each key, eliminating
medium- to long-term problems associated with dirt and ageing.

2) the change in value for each key being compared with the relative change in value for each of
the other keys, to see if the pattern of change corresponded to the change that would be expected
to be caused by a nearby finger, as opposed to localized heating, rain, or other environmental
factors.

In 1989 the 16 discrete copper foil keys of the keypad were replaced with 16 transparent Indium
Tin Oxide keys, creating a clear keypad / touch screen that could sense through thick glass.

A simple to manufacture, multiple input, x/y multiplexed version of this touch screen was
invented in 1994. This could use Indium Tin Oxide, or 10 to 25 micron diameter insulation
coated copper wires as the sensing elements. The first version enabled 64 touch positions to be
detected with just 16 inputs (see Figure 13).

Due to their low cost and ability to survive in hostile environments, 7000 of these were used by
JPM International in their 'Monopoly' and 'Cluedo' pub gaming machines.

In 1997 much higher touch resolution was enabled by the introduction of more (16 + 16) inputs
and interpolating between the 256 touch positions where these sensing elements intersected.

In 1999, the ability of this technology to sense through thick non-conductive materials was
publicly termed 'Projective Capacitive' for the first time.

Later this term was modified by Zytronic Displays to 'Projected Capacitance'.

Some modern PCT touch screens are composed of thousands of discrete keys, but most PCT
touch screens are made of a matrix of rows and columns of conductive material, layered on
sheets of glass. This can be done either by etching a single conductive layer to form a grid
pattern of electrodes, or by etching two separate, perpendicular layers of conductive material
with parallel lines or tracks to form a grid. In some designs, voltage applied to this grid creates a
uniform electrostatic field, which can be measured. When a conductive object, such as a finger,
comes into contact with a PCT panel, it distorts the local electrostatic field at that point. This is
measurable as a change in capacitance. If a finger bridges the gap between two of the "tracks",
the charge field is further interrupted and detected by the controller. The capacitance can be
changed and measured at every individual point on the grid. This system is able to accurately
track touches.
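
Interpolating between intersections, as introduced in the 1997 development above, is commonly done by taking a centroid of the per-intersection capacitance changes. A minimal Python sketch (the grid values are illustrative):

```python
def interpolate_touch(deltas):
    """Refine a touch location on a PCT grid to sub-intersection
    precision by taking the centroid of the capacitance changes.
    `deltas` is a row-major 2D list, one value per grid intersection."""
    rows, cols = len(deltas), len(deltas[0])
    total = sum(sum(row) for row in deltas)
    x = sum(c * deltas[r][c] for r in range(rows) for c in range(cols)) / total
    y = sum(r * deltas[r][c] for r in range(rows) for c in range(cols)) / total
    return x, y
```

A touch straddling two columns, for example equal deltas at columns 1 and 2 of one row, interpolates to x = 1.5, between the physical electrodes.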

Due to the top layer of a PCT being glass, it is sturdier than less-expensive resistive touch
technology. Unlike traditional capacitive touch technology, it is possible for a PCT system to
sense a passive stylus or gloved finger. However, moisture on the surface of the panel, high
humidity, or collected dust can interfere with performance. These environmental factors,
however, are not a problem with 'fine wire' based touchscreens, because wire-based
touchscreens have a much lower parasitic capacitance and there is greater distance between
neighboring conductors.

There are two types of PCT: mutual capacitance and self-capacitance.

5.5 Mutual capacitance

This is a common PCT approach, which exploits the fact that conductive objects hold a charge
when they are very close together. In mutual capacitive sensors, a capacitor is
inherently formed by the row trace and column trace at each intersection of the grid. A 16×14
array, for example, would have 224 independent capacitors. A voltage is applied to the rows or
columns. Bringing a finger or conductive stylus close to the surface of the sensor changes the
local electrostatic field, which in turn reduces the mutual capacitance. The capacitance change at
every individual point on the grid can be measured to accurately determine the touch location by
measuring the voltage in the other axis. Mutual capacitance allows multi-touch operation where
multiple fingers, palms or styli can be accurately tracked at the same time.
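
Because every intersection is measured independently, multi-touch detection reduces to finding all local maxima in the grid of capacitance changes. A hedged sketch in Python (the threshold value is arbitrary, not from any real controller):

```python
def find_touches(grid, threshold=5):
    """Report every intersection of a mutual-capacitance grid whose
    capacitance change exceeds `threshold` and is a local maximum,
    so several simultaneous fingers are tracked independently."""
    rows, cols = len(grid), len(grid[0])
    touches = []
    for r in range(rows):
        for c in range(cols):
            v = grid[r][c]
            if v < threshold:
                continue
            neighbours = [grid[rr][cc]
                          for rr in range(max(0, r - 1), min(rows, r + 2))
                          for cc in range(max(0, c - 1), min(cols, c + 2))
                          if (rr, cc) != (r, c)]
            if all(v >= n for n in neighbours):
                touches.append((r, c))
    return touches
```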

5.6 Self-capacitance

Self-capacitance sensors can have the same X-Y grid as mutual capacitance sensors, but the
columns and rows operate independently. With self-capacitance, the capacitive load of a finger is
measured on each column or row electrode by a current meter, or the change in frequency of an
RC oscillator. This RC method produces a stronger signal than mutual capacitance, but, until
recently, it was unable to resolve the positions of more than one finger without ambiguity, due to
"ghosting" or misplaced location sensing. However, in 2010 a new method of sensing was
patented which allows some parts of a capacitance sensor to be sensitive to touch while other
parts are insensitive. The patent author discovered that a Self Capacitance sensing conductor (x),
in an x/y array of conductors, cannot detect the close proximity of a finger if the (x) conductor is
crossed by a conductor (y) whose waveform is 180 degrees out of phase with its own sensing
waveform. This makes it possible to mask off ambiguous (possible ghost) intersections to
confirm whether a finger is close to that intersection. For the first time, this enabled
self-capacitance to be used for multi-touch without "ghosting".
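
The ghosting problem is easy to demonstrate: self-capacitance reports loaded rows and loaded columns separately, so two diagonal fingers produce four candidate intersections. A toy illustration in Python:

```python
def self_cap_candidates(loaded_rows, loaded_cols):
    """Every (row, column) pairing of the loaded electrodes is a
    candidate touch; with two fingers on a diagonal, two of the four
    candidates are 'ghosts' the electronics cannot rule out."""
    return [(r, c) for r in loaded_rows for c in loaded_cols]
```

One loaded row and one loaded column give an unambiguous touch, but two of each yield four candidates, of which only two are real.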

5.6.1 Use of styli on capacitive screens

Capacitive touchscreens do not necessarily need to be operated by a finger, but until recently the
special styli required could be quite expensive to purchase. The cost of this technology has fallen
greatly in recent years and capacitive styli are now widely available for a nominal charge, and
often given away free with mobile accessories. These consist of an electrically conductive shaft
with a soft conductive rubber tip, thereby resistively connecting the fingers to the tip of the
stylus.

Figure 15: Infrared grid

Infrared sensors mounted around the display watch for a user's touchscreen input on this PLATO
V terminal in 1981. The monochromatic plasma display's characteristic orange glow is
illustrated.

An infrared touchscreen uses an array of X-Y infrared LED and photo detector pairs around the
edges of the screen to detect a disruption in the pattern of LED beams. These LED beams cross
each other in vertical and horizontal patterns. This helps the sensors pick up the exact location of
the touch. A major benefit of such a system is that it can detect essentially any opaque object
including a finger, gloved finger, stylus or pen. It is generally used in outdoor applications and
POS systems which cannot rely on a conductor (such as a bare finger) to activate the
touchscreen. Unlike capacitive touchscreens, infrared touchscreens do not require any patterning
on the glass, which increases the durability and optical clarity of the overall system. Infrared
touchscreens are sensitive to dirt and dust that can interfere with the infrared beams, suffer
from parallax on curved surfaces, and are prone to accidental presses when the user hovers a
finger over the screen while searching for the item to be selected.
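
Locating the touch from the interrupted beams is straightforward: the centre of the blocked run on each axis gives that coordinate. An illustrative Python sketch (beam indices stand in for physical positions; the beam pitch is abstracted away):

```python
def ir_touch(blocked_x, blocked_y):
    """Locate a touch on an infrared-grid screen from the indices of
    the interrupted vertical and horizontal beams; the touch centre is
    the midpoint of each blocked run."""
    if not blocked_x or not blocked_y:
        return None                      # need beams broken on both axes
    return (sum(blocked_x) / len(blocked_x),
            sum(blocked_y) / len(blocked_y))
```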

5.7 Infrared acrylic projection

A translucent acrylic sheet is used as a rear-projection screen to display information. The edges
of the acrylic sheet are illuminated by infrared LEDs, and infrared cameras are focused on the
back of the sheet. Objects placed on the sheet are detectable by the cameras. When the sheet is
touched by the user, the deformation results in leakage of infrared light which peaks at the points
of maximum pressure, indicating the user's touch location. Microsoft's PixelSense tablets use
this technology.

5.8 Optical imaging

Optical touchscreens are a relatively modern development in touchscreen technology, in which
two or more image sensors are placed around the edges (mostly the corners) of the screen.
Infrared backlights are placed in the camera's field of view on the opposite side of the screen. A
touch blocks some of the light from the cameras, and the location and size of the touching object
can be calculated (see visual hull). This technology is growing in popularity due to its scalability,
versatility, and affordability for larger touchscreens.
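
With two corner cameras, the position follows from simple triangulation: each camera reports the angle at which the object occludes its backlight, and the two rays are intersected. A simplified Python sketch (cameras assumed at the two top corners, angles measured downward from the top edge; real systems handle more cameras and lens distortion):

```python
import math

def triangulate(angle_left, angle_right, width):
    """Intersect the rays from two corner cameras on the top edge of an
    optical touchscreen.  Left camera at (0, 0): y = x * tan(angle_left).
    Right camera at (width, 0): y = (width - x) * tan(angle_right)."""
    tl, tr = math.tan(angle_left), math.tan(angle_right)
    x = width * tr / (tl + tr)   # solve the two ray equations for x
    y = x * tl
    return x, y
```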

5.9 Dispersive signal technology

Introduced in 2002 by 3M, this system detects a touch by using sensors to measure
the piezoelectricity in the glass. Complex algorithms interpret this information and provide the
actual location of the touch. The technology is unaffected by dust and other outside elements,
including scratches. Since there is no need for additional elements on screen, it also claims to
provide excellent optical clarity. Any object can be used to generate touch events, including
gloved fingers. A downside is that after the initial touch, the system cannot detect a motionless
finger. However, for the same reason, resting objects do not disrupt touch recognition.

5.10 Acoustic pulse recognition

The key to this technology is that a touch at any one position on the surface generates a sound
wave in the substrate which then produces a unique combined signal as measured by three or
more tiny transducers attached to the edges of the touchscreen. The digitized signal is compared
to a list corresponding to every position on the surface, determining the touch location. A
moving touch is tracked by rapid repetition of this process. Extraneous and ambient sounds are
ignored since they do not match any stored sound profile. The technology differs from other
sound-based technologies by using a simple look-up method rather than expensive signal-
processing hardware. As with the dispersive signal technology system, a motionless finger
cannot be detected after the initial touch. However, for the same reason, the touch recognition is
not disrupted by any resting objects. The technology was created by Sound Touch Ltd in the
early 2000s, as described by the patent family EP1852772, and introduced to the market by Tyco
International's Elo division in 2006 as Acoustic Pulse Recognition. The touchscreen used by Elo
is made of ordinary glass, giving good durability and optical clarity. The technology usually
retains accuracy with scratches and dust on the screen. The technology is also well suited to
displays that are physically larger.
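
The look-up method can be caricatured in a few lines of Python: the digitised touch sound is compared against stored profiles, one per calibrated position, and the closest match wins. The least-squares comparison here is a stand-in for Elo's proprietary matching, which is not public.

```python
def match_touch(signal, profiles):
    """Acoustic pulse recognition lookup: return the screen position
    whose stored sound profile is closest to the digitised signal."""
    def distance(a, b):
        return sum((s - p) ** 2 for s, p in zip(a, b))
    return min(profiles, key=lambda pos: distance(signal, profiles[pos]))
```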

Chapter 6
Ergonomics and usage[4]

6.1 Touchscreen accuracy

For touchscreens to be effective input devices, users must be able to accurately select targets and
avoid accidental selection of adjacent targets. The design of touchscreen interfaces should reflect
technical capabilities of the system, ergonomics, cognitive psychology and human physiology.

Guidelines for touchscreen designs were first developed in the 1990s, based on early research
and actual use of older systems, typically using infrared grids—which were highly dependent on
the size of the user's fingers. These guidelines are less relevant for the bulk of modern devices
which use capacitive or resistive touch technology.

From the mid-2000s, makers of operating systems for smartphones have promulgated standards,
but these vary between manufacturers, and allow for significant variation in size based on
technology changes, so are unsuitable from a human factors perspective.

Much more important is the accuracy humans have in selecting targets with their finger or a pen
stylus. The accuracy of user selection varies by position on the screen: users are most accurate at
the center, less so at the left and right edges, and least accurate at the top edge and especially the
bottom edge. The R95 accuracy (required radius for 95% target accuracy) varies from 7 mm
(0.28 in) in the center to 12 mm (0.47 in) in the lower corners. Users are subconsciously aware of
this, and take more time to select targets which are smaller or at the edges or corners of the
touchscreen.
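
These R95 figures translate directly into minimum target sizes: a designer can double the radius for a screen region to get a comfortable target diameter. A small Python lookup based on the numbers above (the intermediate "edge" value is an assumed interpolation, not from the cited research):

```python
R95_MM = {"center": 7, "edge": 9, "corner": 12}   # "edge" value is interpolated

def min_target_diameter(region):
    """Minimum target diameter (mm) for roughly 95% selection accuracy,
    taken as twice the R95 radius for that screen region."""
    return 2 * R95_MM[region]
```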

This user inaccuracy is a result of parallax, visual acuity and the speed of the feedback loop
between the eyes and fingers. The precision of the human finger alone is far higher than
this, so when assistive technologies are provided—such as on-screen magnifiers—users can
move their finger (once in contact with the screen) with precision as small as 0.1 mm (0.004 in).

6.2 Hand position, digit used and switching

Users of handheld and portable touchscreen devices hold them in a variety of ways, and routinely
change their method of holding and selection to suit the position and type of input. There are four
basic types of handheld interaction:

 Holding at least in part with both hands, tapping with a single thumb
 Holding with two hands and tapping with both thumbs
 Holding with one hand, tapping with the finger (or rarely, thumb) of another hand
 Holding the device in one hand, and tapping with the thumb from that same hand

Use rates vary widely. While two-thumb tapping is encountered rarely (1–3%) for many general
interactions, it is used for 41% of typing interaction.

In addition, devices are often placed on surfaces (desks or tables) and tablets especially are used
in stands. The user may point, select or gesture in these cases with their finger or thumb, and
vary use of these methods.

6.3 Combined with haptics

Touchscreens are often used with haptic response systems. A common example of this
technology is the vibratory feedback provided when a button on the touchscreen is tapped.
Haptics are used to improve the user's experience with touchscreens by providing simulated
tactile feedback, and can be designed to react immediately, partly countering on-screen response
latency. Research from the University of Glasgow (Brewster, Chohan, and Brown, 2007; and
more recently Hogan) demonstrates that touchscreen users reduce input errors (by 20%), increase
input speed (by 20%), and lower their cognitive load (by 40%) when touchscreens are combined
with haptics or tactile feedback.

6.4 Gorilla arm

Extended use of gestural interfaces without the ability of the user to rest their arm is referred to
as "gorilla arm". It can result in fatigue, and even repetitive stress injury when routinely used in a
work setting. Certain early pen-based interfaces required the operator to work in this position for
much of the work day. Allowing the user to rest their hand or arm on the input device or a frame
around it is a solution for this in many contexts. This phenomenon is often cited as a prima
facie example of movements to be minimized by proper ergonomic design.

Unsupported touchscreens are still fairly common in applications such as ATMs and data kiosks,
but are not an issue as the typical user only engages for brief and widely spaced periods.

6.5 Fingerprints

Figure 16: Fingerprints and smudges on an iPad (tablet computer) touchscreen

Touchscreens can suffer from the problem of fingerprints on the display. This can be mitigated
by the use of materials with optical coatings designed to reduce the visible effects of fingerprint
oils. Most modern smartphones have oleophobic coatings, which lessen the amount of oil
residue. Another option is to install a matte-finish anti-glare screen protector, which creates a
slightly roughened surface that does not easily retain smudges.

6.6 ADVANTAGES [5]

- Touchscreen devices have few physical buttons, the components that typically break after 3–4 years of use

- Touchscreen devices generally have simpler user interfaces (for example, iPod applications)

- Having few or no buttons means more effort can be put into having a large screen

- For people concerned about hygiene, most of these devices are easy to clean; some are even
dirt-, dust- and grease-resistant

- For people who are new to or uncomfortable with conventional desktops, touchscreens are easy
to use, helping more people become accustomed to using computers

6.7 DISADVANTAGES

- The screen has to be large enough that the buttons can be touched without losing accuracy

- Having a large, bright screen, and the substantial computing power needed to drive it, means a
very short battery life

- In direct sunlight the display is much less effective, and most of the time the screen is very difficult to read

- If a touchscreen device locks up, the entire screen becomes unresponsive, and recovering it is
very difficult because of the lack of physical buttons

Chapter 7

CONCLUSION
Today's thoughts are again centred on the user interface, and efforts are being made to better the
technology day in and day out. The touchless touch screen user interface can be used effectively
in computers, cell phones, webcams and laptops. Maybe a few years down the line, our body could
be transformed into a virtual mouse, a virtual keyboard and what not; our body may be turned
into an input device!

Chapter 8

REFERENCES
[1] "Touchless Touch Screen", International Conference on Advanced Computing (ICAC-2016),
College of Computing Sciences and Information Technology (CCSIT), Teerthanker Mahaveer
University, Moradabad, 2016.

[2] "Touchless Touch Screen User Interface", International Journal of Technical Research and
Applications, e-ISSN: 2320-8163, Issue 43 (March 2017), pp. 59-63.

[3] http://www.studymafia.org/

[4] http://www.etre.com/blog/2008/02/elliptic_labs_touchless_user_interface/

[5] https://www.homestratosphere.com/touchless-products-for-the-home/

