
Third Eye for Blind People


Vaibhav, Aditya Kumar, Anjali Satija
Department of Computational Intelligence, SRMIST
Patna, Bihar, India; Hajipur, Bihar, India
vr4453@srmist.edu.in; ak6316@srmist.edu.in



Abstract— It is difficult for blind people to manage day-to-day life with their disability. To make the walking stick smarter, we interfaced a set of smart functions with it. Whenever an obstacle is detected on the way by the ultrasonic sensor placed on the stick, a camera is triggered to capture the object ahead. The captured image is sent to the processor to identify the type of object, which is then announced as a voice message through a speaker or earphones connected to the Raspberry Pi. The blind user can thus identify the object in front of them: if it is identified as a human, they can ask for help; if it is a large obstacle such as a car, they can walk accordingly. We are also building open-source audio-book software so that a reader can be assembled with very simple Raspberry Pi controls; here we present the BrickPi book reader, which can read a real book aloud. To establish the authenticity of a person, the user can also identify who is in front of them using face recognition, which differentiates between known and unknown persons. Reading text from a video stream can also be added.

Keywords— Raspberry Pi, IR sensor, Raspberry Pi camera, speaker

I. INTRODUCTION

As reported in the World Health Organization fact sheet on visual impairment (updated October 2017), an estimated 253 million people live with vision impairment; 36 million are totally blind, while 217 million suffer from moderate to severe vision impairment. Globally, the main cause of vision loss is chronic eye disease, while the top two causes of visual impairment are uncorrected refractive errors and unoperated cataract. In this fast-moving world, visually impaired people are left behind and not treated equally. To help them and provide some level of comfort, many solutions and techniques have been developed. One of these techniques is called orientation and mobility: a specialist trains blind and visually impaired people to move on their own, relying on their remaining senses to travel independently and safely. To support the indoor and outdoor mobility of blind and visually impaired people, this work proposes a simple, configurable and efficient embedded vision guidance system. The system uses two types of devices: an IR sensor and a camera. A Raspberry Pi 4 Model B processes the reflected signals from these devices in order to classify an obstacle. The proposed guidance system can determine the obstacle distance, as well as the material and shape characteristics of the obstacle. Furthermore, the system can name some of the detected objects, and it can read eBooks aloud for blind people who cannot read printed books. Moreover, the user does not need to carry a cane or another marked tool; like other systems, the proposed system can be fastened to a hat or to a pen-sized hand-held mini stick. It has high immunity to both ambient light and object colour. We apply image processing to perform object detection using machine-learning techniques.

II. EASE OF USE

A. Literature Survey

A literature survey is a study of the literature relevant to a given topic. For thorough development of the device Smart Stick for Blind Using Raspberry Pi, we need to go through every technical aspect related to it. This chapter introduces the area of research. A brief study and survey has been carried out to understand the issues involved in providing a smart electronic aid for blind people: artificial vision, object detection, and real-time assistance via a GPS module using the Raspberry Pi. A survey was made among blind people who find it difficult to detect obstacles while walking in the street. Our project mainly focuses on visually impaired people who cannot walk independently in an unfamiliar environment; its main aim is to develop a system that helps blind people move independently. Smart Stick for Blind systems usually consist of three parts that help people travel with a greater degree of psychological comfort and independence: sensing the immediate environment for obstacles and hazards, providing information to move left or right, and orientation during travel. The surveyed works include:

"Navigation Tool for Visually Challenged using Microcontroller", Sabarish S.

"Smart walking stick - an electronic approach to assist visually disabled persons", Mohammad Hazzaz Mahmud.

"Ultrasonic smart cane indicating a safe free path to blind people", Arun G. Gaikwad (ME Embedded System Design, MIT Aurangabad) and H. K. Waghmare (Assistant Professor, Department of E&TC, MIT Aurangabad).

"A Multidimensional Walking Aid for Visually Impaired Using Ultrasonic Sensors Network with Voice Guidance".

III. METHODOLOGY

A. Raspberry Pi

The Raspberry Pi, shown in Fig., is a Model B, Linux-based board. It acts as a minicomputer that connects all the peripherals used by a computer: keyboard, TV or monitor, mouse, an SD card slot for loading the operating system, an Ethernet port for a LAN cable, 4 USB ports for connecting I/O devices, an HDMI port for connecting a monitor or HD TV, memory, a power source, video/audio outputs and a camera interface (CSI). The Raspberry Pi operating system can also be accessed by remote login from a PC screen over a LAN cable. Raspberry Pi 2 Model B boards have 40 I/O pins at 2.54 mm pitch, marked P1 and arranged in a 2x20 strip, including UART, I2C, SPI, +3.3V, +5V and GND supply pins. The board uses a 900 MHz quad-core Broadcom BCM2836 of the ARM Cortex-A7 family and has 1 GB of built-in RAM. A Linux-based operating system is loaded onto a micro SD card in several steps, and the card is then plugged into the SD card slot. Ubuntu or any other Linux-based operating system can be used; Raspbian Jessie, which can be downloaded directly from the official Raspberry Pi website, is the operating system used in this system. The GPIO pin diagram of the Raspberry Pi 2 Model B board is shown in Fig. 3.2. The first 26 pins are the same as on Raspberry Pi 1 A/B boards; the additional 14 pins provide extra GPIO and ground connections.
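Obstacle distance from a rangefinder wired to these GPIO pins is derived from the duration of the echo pulse; the conversion itself is plain arithmetic. A minimal sketch of that step (the HC-SR04-style sensor and the 343 m/s speed of sound are our assumptions — the paper only says an ultrasonic sensor is mounted on the stick):

```python
# Convert an ultrasonic echo pulse duration to an obstacle distance.
# Sound travels out to the obstacle and back, so the one-way distance
# is half the round trip: distance_cm = duration_s * 34300 / 2.

SPEED_OF_SOUND_CM_S = 34300  # ~343 m/s in air at 20 degrees C

def echo_to_distance_cm(pulse_seconds):
    return pulse_seconds * SPEED_OF_SOUND_CM_S / 2

if __name__ == "__main__":
    # A 2 ms echo pulse corresponds to an obstacle ~34.3 cm away.
    print(echo_to_distance_cm(0.002))  # 34.3
```

On real hardware the pulse duration would come from timing the sensor's echo pin with a GPIO library; only the conversion is shown here so the logic stays independent of the wiring.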
B. Object Detection

Object detection is a technology that falls under the broader domain of computer vision. It deals with identifying and tracking objects present in images and videos, and has multiple applications such as face detection, vehicle detection, pedestrian counting, self-driving cars and security systems. The two major objectives of object detection are to identify all objects present in an image and to filter out the object of attention. In this project, we use object detection in Python with the help of the ImageAI library, i.e., deep-learning techniques.
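The second objective — filtering out the object of attention — can be sketched as a plain-Python post-processing step over detection results. The dictionary layout below mirrors what detectors such as ImageAI commonly return, but the exact keys and the 50% confidence threshold are illustrative assumptions, not a guaranteed API:

```python
# Filter detector output down to the "object of attention": keep only
# confident detections, then pick the one with the largest bounding box
# (nearby obstacles tend to appear largest in the frame).

def object_of_attention(detections, min_confidence=50.0):
    """detections: list of dicts with 'name', 'percentage_probability'
    and 'box_points' = (x1, y1, x2, y2). Returns the most prominent
    detection, or None if nothing passes the confidence threshold."""
    confident = [d for d in detections
                 if d["percentage_probability"] >= min_confidence]
    if not confident:
        return None

    def box_area(d):
        x1, y1, x2, y2 = d["box_points"]
        return (x2 - x1) * (y2 - y1)

    return max(confident, key=box_area)

if __name__ == "__main__":
    sample = [
        {"name": "person", "percentage_probability": 92.0,
         "box_points": (10, 10, 200, 300)},
        {"name": "car", "percentage_probability": 97.0,
         "box_points": (220, 40, 300, 90)},
        {"name": "dog", "percentage_probability": 31.0,
         "box_points": (0, 0, 640, 480)},
    ]
    # The low-confidence "dog" is discarded; the large "person" box wins.
    print(object_of_attention(sample)["name"])  # person
```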

The captured image is converted into text.
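The first step of that conversion is usually thresholding: reducing a greyscale image to black and white before characters are recognized. A minimal sketch over a plain list-of-lists image (the 8-bit grey levels and the fixed cutoff of 128 are illustrative assumptions):

```python
# Threshold a greyscale image (pixel values 0-255) into a bi-level
# black-and-white image: 0 = black, 255 = white.

def threshold(image, cutoff=128):
    return [[255 if px >= cutoff else 0 for px in row] for row in image]

if __name__ == "__main__":
    gray = [[12, 200, 130],
            [255, 90, 127]]
    print(threshold(gray))  # [[0, 255, 255], [255, 0, 0]]
```

In practice an OCR pipeline would compute the cutoff adaptively per image rather than fixing it at 128.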

C. Implementation

4.1 Blind Stick

The system architecture is the conceptual model of the project that defines its structure, behaviour and functionality. This section focuses on the system architecture, explains the function of each segment of the system, and describes how the sensors communicate with each other and work together to give the desired output. The system architecture is shown.

4.2 System Architecture

The flowchart for the Third Eye system shows the sequence of steps and decisions required to perform the Third Eye process. The camera is considered the eye of this system: it continuously captures still images, which are sent to the Raspberry Pi microcontroller.
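That capture-process-announce cycle can be sketched as a simple polling loop. The sensor read, classifier and speech calls below are stand-in stubs injected as parameters (an assumption made so the control flow can be shown without hardware):

```python
# Skeleton of the Third Eye main loop: read the obstacle distance, and
# when something is close, capture/classify a frame and announce it.
# All three callables are injected so the loop itself stays testable.

OBSTACLE_CM = 100  # announce anything nearer than 1 m (assumed limit)

def third_eye_loop(read_distance_cm, classify_frame, announce, ticks):
    events = []
    for _ in range(ticks):
        distance = read_distance_cm()
        if distance < OBSTACLE_CM:
            label = classify_frame()  # stand-in for camera + model
            announce(f"{label} ahead, {distance} centimetres")
            events.append(label)
    return events

if __name__ == "__main__":
    readings = iter([250, 80, 300])   # only the 80 cm reading triggers
    spoken = []
    third_eye_loop(lambda: next(readings), lambda: "person",
                   spoken.append, 3)
    print(spoken)  # ['person ahead, 80 centimetres']
```

In the real device the loop would run until the user switches the stick off, rather than for a fixed number of ticks.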

C. For OCR

OCR (optical character recognition) is the use of technology to distinguish printed or handwritten text characters inside digital images of physical documents, such as a scanned paper document. The basic process of OCR involves examining the text of a document and translating the characters into code that can be used for data processing; OCR is therefore sometimes also referred to as text recognition. OCR systems are made up of a combination of hardware and software used to convert physical documents into machine-readable text. To avoid overlapping of features, we use push buttons to specify the different modes and to activate each piece of code.

The Raspberry Pi microcontroller takes the captured images, processes them using artificial-intelligence image-processing algorithms, and generates a matching response which is sent to the text-to-speech module. The OpenCV framework is used to develop the AI model, which yields a Caffe model for the Raspberry Pi. A Python program gets the input through the camera and handles the interaction between the model and the input; it then shows the output on the screen as text. The text-to-speech module takes the text as input and converts it to an audio output which goes directly into the user's ears. Infrared sensors are placed so as to match the angle of vision of a human; they are used to calculate the distance between the user and an object located far away. Sonar sensors are placed at an angle of 30 degrees from the vertical plane, facing downwards. This 30-degree inclination helps the sonar sensors scan for objects on the ground near the user, so they can be used to calculate the distance to an object lying on the ground adjacent to the user's legs. Both of these sensors send their data to the Raspberry Pi, which in turn processes the data and sends the desired output to the text-to-speech module. This again takes text as input and converts it to an audio response which reaches the user through the headset. This whole process runs continuously until the user decides to switch off the device.

4.3 Flow of Process illustrates the flow of process of the proposed method:
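The hand-off from recognized label to spoken output might look like the sketch below. The message format is our own, and pyttsx3 is merely one commonly used offline text-to-speech library for the Raspberry Pi — an assumption, since the paper does not name the module it uses:

```python
# Build the spoken message for a detected object, then hand it to a
# text-to-speech engine. The formatting is pure Python and testable;
# the speech call is guarded so the sketch runs without audio hardware.

def build_announcement(label, distance_cm):
    metres = distance_cm / 100
    return f"{label} ahead, about {metres:.1f} metres"

def speak(text):
    try:
        import pyttsx3                 # offline TTS engine (assumed choice)
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    except Exception:
        print(text)                    # no TTS available: fall back to console

if __name__ == "__main__":
    speak(build_announcement("car", 230))  # says "car ahead, about 2.3 metres"
```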

§ Optical Scanning: The optical scanning process involves capturing a digital image of the original document. The OCR optical scanners convert the light intensity into grey levels. This process is called thresholding; thresholding converts a multilevel image into a bi-level image of black and white.
§ Location and Segmentation: Segmentation involves isolating characters or words. The text is located via pixels with x and y coordinates.
§ Pre-processing: The image resulting from the scanning process may contain some noise, and depending on the resolution of the scanner the characters may be broken or smeared. Pre-processing is therefore done to smooth the digitized characters; in addition to smoothing, it also involves normalization of the characters.
§ Feature Extraction: This technique is used for capturing the essential characteristics of the symbols. Feature extraction is done by matching the matrix containing the input character against a set of prototype characters that represent each possible class.
§ Recognition: Recognition is the process of identifying each character and assigning it to the correct character class.

Fig. 4.7. Flow of Process

D. Conclusions and Future Scope

In today's world, disability of any kind can be hard, and it is the same with blindness. Blind people are generally left underprivileged, and it is very difficult to give vision to a blind person. In this paper, a new AI-based system called "Navigation System for Blind - Third Eye", which controls the navigation of a blind person, has been proposed and developed. This AI-based system offers a simple, configurable and efficient embedded vision guidance system. It helps blind and visually impaired people to be highly self-dependent by assisting their mobility wherever they are, outdoors or indoors. Results show that all the sensors work properly and give accurate readings, though the range of the prototype sensors is not high. The object detection algorithm utilizes 100% of the CPU, which makes the Raspberry Pi hot; in a future system, two Raspberry Pis are therefore recommended, one for object detection and one for all the sensors.

ACKNOWLEDGMENT

The authors are very thankful to Dr. Hariprasad S A, Director, Faculty of Engineering & Technology; Dr. Kuldeep Sharma, Dean, Department of Computer Science and Engineering, Jain University; and Dr. Narayana Swamy R, Head of the Department, Computer Science & Engineering, for their constant encouragement and expert advice. We also thank Prof. Harish Naik B M, Project Coordinator, and all the staff members of Computer Science & Engineering for their support, and all our colleagues, family and friends who have directly or indirectly supported this work.

REFERENCES

[1] "Smart walking stick", Mohammed H. Rana and Sayemil, 2013.
[2] "The electronic travelling aid for blind navigation and monitoring", Mohan M. S. Madulika, 2013.
[3] "Haptic shoe for the blind".
[4] "Multi-dimensional walking aid", Olakanmi O. Oladayo, 2014.
[5] "3D ultrasonic stick for the blind", Osama Bader Al-Barm, 2014.
[6] C. Harrison, H. Benko, and A. D. Wilson, "OmniTouch: Wearable multitouch interaction everywhere", in Proc. ACM UIST, 2011, pp. 441-450.
[7] H. Benko and A. Wilson, "DepthTouch: Using depth-sensing camera to enable freehand interactions on and above the interactive surface", in Proc. IEEE Workshop ITS, vol. 8, 2009.
[8] Mo, J. P. Lewis, and U. Neumann, "SmartCanvas: A gesture-driven intelligent drawing desk system", in Proc. ACM IUI, 2009, pp. 239-243.
